OK, now to my point.
What is striking is that anyone entering IT today (i.e. the current generation of teenagers) will be unable to gain the same exposure to the field. In essence, specialisation is the price we pay for ever-increasing complexity. Most senior software developers today grew up with the technology -- learning new skills and techniques gradually as the field matured. The next generation of senior developers will never have the opportunity to enter a simpler field -- they are entering a fairly mature and certainly very complex one.
Personally, I do think this generation will miss out on much of the joy and excitement of technology -- it is hard to get a thrill out of something that has been a normal facet of life for as long as you can remember. My 6yo son got more excited about an old typewriter (stunned that you feed paper into it by hand) than about a new iPod.
All this raises some interesting questions (most of which I do not want to answer in this post -- feel free to leave a comment, though):
Can we actually manage the complexity and pass the knowledge on to the next generation -- one that did not build it, and hence is unlikely to have any emotional or personal attachment to it? Can this technology then be maintained? A recent Slashdot article essentially stated that Linux is having some difficulty attracting younger talent. Is it really a problem?
The current crop of engineers builds technology assuming that others will share their perspective, background, and knowledge. Will educators and the current generation (under 20yo) put in the 7-10 years of effort needed to acquire the skills and knowledge? Unfortunately, as they learn, the state of the art keeps moving, making it harder and harder to catch up.
Will abstraction/componentisation and APIs around these services solve the problem? They buy time, but there are trade-offs. The big problem with layers upon layers of abstraction is that they leak: if something breaks and you do not know how the abstraction works underneath, you may not be able to debug the source of the problem easily. The common counter-argument is that few people understand operating systems -- yet we all use them. That argument has some merit, but most of the layers and abstractions we use in software development have not had the same level of attention paid to them.
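A tiny sketch of the kind of leak I mean (Python here; the example is mine, not from any particular library): a memoisation wrapper hides its caching strategy right up until you pass it an argument the cache cannot hash -- then the error surfaces from the abstraction's internals rather than from your own code.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(values):
    # The function itself is happy with any iterable of numbers...
    return sum(values)

print(total((1, 2, 3)))   # tuples are hashable: the cache works, prints 6

try:
    total([1, 2, 3])      # lists are not hashable
except TypeError as err:
    # The failure comes from the cache's hashing machinery, not from sum():
    # the abstraction has leaked its implementation detail.
    print(f"leak exposed: {err}")
```

To debug this you need to know that `lru_cache` keys its cache on hashed arguments -- exactly the kind of under-the-hood knowledge a leaky abstraction was supposed to spare you.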
If you build an application using 20 different third-party components (some open-source, some commercial) and two of them break intermittently, can you isolate the problem? Such failures confound people with serious experience; how will someone new to the field cope?
The current expectation is that we will weave software out of hundreds of services and abstractions (some of which may live in the cloud). I know from my experience as an academic that we are not even getting through the basics -- let alone educating an undergraduate in 3 years to face this type of situation.
I for one am looking forward to seeing how the field evolves and deals with the issues I have raised here. I am deeply curious, and keen to see how far we can push the boundaries of complexity before it starts to cause serious problems.