[T]echnology across software domains should be consistent. There should be a standard, however de facto, that works everywhere. Skills should be portable across those markets, so that someone with experience in desktop software development has a decent chance of developing for handhelds. Everything should just work together, and development across all devices should be relatively straightforward to someone with experience in any one of them.
Microsoft has been very good at this. […]
I wonder whether the open source world can match that consistency. Granted, there are movements to make things consistent, such as the LAMP set of technologies for Linux, or the Linux Standard Base project. However, the structure of development for open source products militates against the establishment of any consistent standard.
Open source development relies mostly on voluntary contributions. People don’t tend to contribute to ALL open source products (that would be impossible). Rather, they concentrate mostly on the products that interest them.
This results in a developer version of tunnel vision. Since all that matters to a volunteer is the handful of products to which he or she contributes, what prevails is the interests of contributors to a single project, not the interests of technology that should span ALL products.
Great points, but clearly I disagree that the open source community is fundamentally unable to match Microsoft at providing a consistent environment. It’s going to take some work on our part, though.
One of the reasons Microsoft has historically been so successful is its unique ability to integrate disparate technologies into cohesive wholes, and to leave the mechanisms it uses to do so just open enough that others can integrate with them to a degree, but never so open that anyone can integrate better than Microsoft itself.
Of course, Microsoft can do this because it develops, owns, and controls each of the pieces and the interfaces between them. Other platforms (Linux, Java, UNIX, even the Web to a certain extent) are mishmashes by comparison, for exactly the reasons John describes: every project is developed independently, and relatively little thought goes into how the pieces interact.
The only parties that do think about these issues, the Linux distributors, tend to do so after the fact, and of course they have no direct control over the projects that produce the pieces, so they cannot, in the general case anyway, push any integration improvements they make upstream.
Furthermore, once multiple solutions emerge in a given problem domain, it’s not always possible to generalize the interfaces, because the projects behind them don’t agree on the “right solution”. In other words, because integration is an after-the-fact concern, “not invented here” and general inertia come into play.
So, while the technical aspect of the problem is important, namely the interfaces between components (APIs at the source level and ABIs at the binary level), there’s a social aspect here too. We need to encourage the kind of competition at the implementation level that results in technical excellence while fostering a sense of cooperation at the interface level to ensure that, collectively, the various implementations fit seamlessly into a larger, consistent environment.
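To make the API/ABI distinction concrete, here’s a minimal C sketch built around a hypothetical “widget” library (all the names are mine, purely for illustration). The API is the source-level contract clients compile against; the ABI is the binary-level contract (struct layouts, symbol names, calling conventions) that already-compiled clients link against:

    /* widget.c -- API vs. ABI in miniature.  In a real library the
     * "public header" section below would live in widget.h. */
    #include <stdio.h>
    #include <stdlib.h>

    /* --- public header: the API clients compile against ---------- */
    typedef struct widget widget_t;      /* opaque: layout is hidden */
    widget_t *widget_create(int size);
    int       widget_size(const widget_t *w);
    void      widget_destroy(widget_t *w);

    /* --- private internals: where the ABI is decided -------------- */
    struct widget {
        int size;
        /* Fields can be added or reordered here in a later release.
         * The API above is untouched, and because clients only ever
         * hold pointers, old binaries keep working (ABI preserved).
         * Had this struct been in the public header, any layout
         * change would silently break every client compiled against
         * the old version, even though the API looked identical. */
    };

    widget_t *widget_create(int size)
    {
        widget_t *w = malloc(sizeof *w);
        if (w) w->size = size;
        return w;
    }
    int  widget_size(const widget_t *w) { return w->size; }
    void widget_destroy(widget_t *w)    { free(w); }

    int main(void)
    {
        widget_t *w = widget_create(42);
        printf("size = %d\n", widget_size(w));
        widget_destroy(w);
        return 0;
    }

Agreeing on the top half of that file while leaving the bottom half free to change is, in miniature, the social contract I’m arguing for.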
In other words, we don’t want to get in the way of innovation by open source projects and Linux distros, but we do want to encourage them to innovate in a collaborative fashion, so that the technology that ultimately wins in the marketplace can be standardized with as few legacy issues as possible. And where multiple best practices emerge, for legacy reasons or otherwise, we want to collaboratively define a uniform interface: cooperate on the interface, compete on the implementation.
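As a sketch of what “cooperate on interface, compete on implementation” looks like at the source level, here’s a toy C example. The hasher_t interface and the two implementations are hypothetical stand-ins for an agreed standard and two competing projects:

    /* hashers.c -- one agreed interface, two competing implementations. */
    #include <stdio.h>
    #include <stdint.h>

    /* --- the interface everyone cooperates on --------------------- */
    typedef struct {
        const char *name;
        uint32_t  (*hash)(const char *s);
    } hasher_t;

    /* --- "project A" competes with the djb2 algorithm ------------- */
    static uint32_t djb2(const char *s)
    {
        uint32_t h = 5381;
        while (*s) h = h * 33u + (unsigned char)*s++;
        return h;
    }

    /* --- "project B" competes with the FNV-1a algorithm ----------- */
    static uint32_t fnv1a(const char *s)
    {
        uint32_t h = 2166136261u;
        while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
        return h;
    }

    /* --- a client written against the interface alone ------------- */
    int main(void)
    {
        hasher_t impls[] = { { "djb2", djb2 }, { "fnv1a", fnv1a } };
        for (int i = 0; i < 2; i++)
            printf("%s(\"linux\") = %08x\n",
                   impls[i].name, (unsigned)impls[i].hash("linux"));
        return 0;
    }

Either implementation can win on speed or quality without the client changing a line; that’s the property a consistent open source platform needs at every layer.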
That’s the only way we’re going to get there.