The most obvious distributed projects I can think of offhand are collaborative software development programs consisting of many individual components; for instance, the GNU software project, or NumPy/SciPy/Matplotlib and associated scientific applications, FreeBSD and its descendants, and so forth. However, I think when you delve into these projects you'll find that there are a few key performers doing most of the architectural heavy lifting (the Matplotlib core was essentially written by one guy in the span of five years), a few people gluing everything together, and then a bunch of people doing testing and writing documentation. "Massive" R&D projects crossing technical disciplines have to be managed by their very nature. They can and should be broken into individual components, but someone has to build a structure for interface and coordination (i.e. the systems engineering) and shepherd teams so that things come together on schedule. It is no good having your spacecraft at the launch pad if the launch vehicle team is still trying to figure out why their engine is blowing up on the test stand.
I've worked on a few unsolicited proposals which, due to cost and scheduling, were done in distributed fashion, with everyone expected to just know to communicate and coordinate their efforts. This never, ever works, because it only takes a single person falling down on the job or failing to communicate to derail the entire effort. This is why good proposals are written by people shoved into a room for fourteen hours a day until they hate the sight of one another; from that disgust and desperation comes a desire to deliver a proposal so complete and obviously successful that you never have to return to the hell of a proposal "war room" again.