Wednesday, December 30, 2009
A decade of technology ...
I just wanted to sum up the decade that has just passed in terms of technology. The list is from my own perspective and in no particular order:
- Hardware: USB memory sticks, multi-core CPUs, LCD panels, DDRx memory, iPhone, iPod, digital cameras
- Comms: ADSL2, Wi-Fi, BitTorrent, digital TV
- Web: Google search, blogging, RSS feeds, GMail, Reddit, Firefox
- Social web: Facebook & Twitter
- Apple, Amazon, eBay, Salesforce
- Op. Sys: OSX, Ubuntu, Windows XP
- Wikipedia
- Slashdot still survives to serve the geeks (briefly distracted by Reddit/Digg)
- Adobe Flash (I hope this one does not make it onto the list for the next decade)
- Dev: Java, C#, AJAX, web apps
Sunday, November 29, 2009
Something scary from USA
I was playing around with Google Insights for Search to see if there are any interesting trends regarding an improving economy etc.
If you select the link below, it will take you to the search trend for "unemployment benefits". Watch the animation of the search volume over time (the controls are under the map of USA).
See: http://www.google.com/insights/search/#geo=US&q=unemployment+benefits&cmpt=q
I also tried: "food for free"
http://www.google.com/insights/search/#geo=US&q=food+for+free&cmpt=q
Sadly, both of these are trending upwards. If you want to see the scary 'unemployment trend', watch the animation -- it is really quite sad and scary, especially since behind the abstractions are real people who are not in a happy place.
------
Just in case you are wondering if there is anything that actually is stable -- it is "insurance"
http://www.google.com/insights/search/overviewReport?cat=&q=insurance&geo=US&cmpt=q#
-- rv
Monday, October 26, 2009
Evidence-based software engineering....
This is one of my personal pet interests. Most software engineering today is based on 'hearsay', 'guesswork', 'poor mathematics' and 'statements from so-called gurus'.
But, how much of it is actually correct? Is there any evidence to back up most of what passes for software engineering?
See the presentation:
"Bits of Evidence" -- a presentation by Greg Wilson (embedded from SlideShare).
Saturday, October 24, 2009
Google Wave ... used it and .....
Just played around with Google Wave. Currently, I don't quite know what to do with it. It is a communication tool and at this point in time I do not know if anyone else will use it if I do.
Wave is one of those tools where I really have a gut feel that it will get used in ways that the creators never imagined. That kind of creative adoption is, I feel, what will eventually help it.
Either way, the Javascript that powers it is impressive. Kudos to the engineering team that built something like this with HTML5/CSS and Javascript (on the client side anyway). Even if this does not take off, the tooling and knowledge gained by this exercise will help us build richer web applications.
-- rv
PS: It really does need a Google Wave notifier -- I for one will not log in every day till there is some level of critical mass.
Sunday, October 04, 2009
Feeling guilty about testing .....
Software testing is one of those fuzzy things where the theory in the books completely differs from how testing is done in practice. Unfortunately, the poor practitioners all too often end up feeling guilty for not testing their products properly. Compounding this problem is the fact that there are a whole lot of techniques positioned as 'best practice', 'will find the most bugs' etc. -- unfortunately they do no such thing, except add to the guilt that the testing is poor.
Why is it so messy? Do we really need to test software as per the books?
The key observation from my experience is "normal developers test (execute) code to discover behaviour" -- so, they explore the program to check if it broadly matches the expected behaviour. Further, developers also work with requirements that are incomplete, potentially inconsistent and sadly vague.
Developers to some extent guess what is expected. They will fill in the gaps using similar software systems (or) their common-sense (or) gut-feel (or) experience as reference points. This guesswork is unavoidable, unless the person providing the requirements and the developer implementing them are both 'perfect beings'.
Back to the question at hand -- So how does one go about testing properly?
Rather than directly answering it I would like to take a detour to make my point.
Let's say you downloaded some new 'browser' software. Your intention is to browse the web, check e-mail, Facebook, Twitter etc. How do you go about testing the viability of this product for your needs? Do you start by writing down all your tasks, defining expected behaviour and then proceeding to validate? No -- people explore tools and systems, and if they are not too painful, they get used.
So, what is the most effective way to test a product? The simplest and easiest method is 'use it like the end-user would' -- and do not feel guilty that you are not doing enough.
I must add some limitations/variations:
1. If you have a well defined set of mathematical functions -- these can be tested (more) formally and quite rigorously.
2. If you have a workflow (or) a set of rules that can be expressed mathematically (some graph, logic rules), again a testing approach that matches inputs to outputs will work.
3. Safety critical systems -- typically quite a lot of effort goes into the requirements to make sure that the fuzzy aspects are reduced as far as possible.
In cases like the above, test coverage also comes in very handy. Effort can be put into automation and formal testing since it is actually likely to work (a minimal sketch follows below). Things like compilers, parsers, business rule engines, workflow engines, chess playing software, data structures, well defined algorithms etc. will all fall into the above categories. Rest of the time ... "use it to test it".
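To make the first case concrete, here is a minimal sketch (in Java, using JUnit 4) of what formal input-to-output testing of a well-defined mathematical function looks like. The function and the test values are hypothetical, purely for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GcdTest {
    // A well-defined mathematical function: greatest common divisor.
    static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    // Because the expected behaviour is precisely specified,
    // inputs can be matched to outputs rigorously -- no guesswork needed.
    @Test
    public void knownValues() {
        assertEquals(6, gcd(54, 24));
        assertEquals(1, gcd(17, 13)); // co-prime inputs
        assertEquals(7, gcd(0, 7));   // edge case: zero operand
    }
}

Contrast this with testing a browser: there is no comparable specification to assert against, which is exactly why 'use it to test it' is the honest approach there.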
-- rv
Friday, October 02, 2009
Is Google the next Microsoft?
How does an organisation evolve over time? From start-up to corporate giant. Is Google the next Microsoft?
I want to start off with a broad illustration of how a start-up slowly turns into the corporate monolith -- taking it purely from the perspective of Executive/Senior management. We can pretty much see where a company is headed purely by observing the profiles of the top management.
Stage 1: Start-up. The management and leadership are closely involved in the product development. In many cases, they are engineers, designers, developers -- the actual builders have all of the power and set the direction. "Let's create/innovate" is the mantra. The company is in 'Yes we Can' mode.
Stage 2: Market-Development. In this stage, the sales and marketing people start to run the company. They gain power, they generate the revenue -- they dictate the next minor feature to appease the next new client. The product starts to lose some coherence, but overall the company can still maintain the innovation. The Chiefs in the company will be the Solution Architects, Sales and Marketing people. Growth by adding customers is the mantra -- there is a whole lot of positive energy in the company.
Stage 3: Slash-and-Burn. In this stage, the financial and operational arms take control of the company. The easy growth phase is over. Revenues are fairly stable now. The only way to show profits is by optimising resources, cutting costs, being careful with every penny. The Sales and Marketing people are asked to put in a budget and estimate revenue. MBAs start taking control -- Excel is the tool of management choice. Optimisation is the mantra. The employees start to look back fondly at the old days when they made money solving customer problems. [If a company is considering outsourcing -- they have entered this phase]
Stage 4: We-are-Borg. In this stage, the only way to keep growing is through acquisitions (or) by being acquired. The other options are lobbying governments for preferential treatment, monopoly practices, bullying, playing at the edge of the law, re-interpreting ethics, borrowing as much as possible etc. The company is now run by the legal department, and the Chiefs tend to have a background in Finance, Takeovers and/or Politics. There are many mantras by this point in time -- 'Greed is Good', 'Last man standing', 'Heads I Win, Tails you Lose', 'Stealing is ok, getting caught is bad', 'It is only illegal till we re-write the law' etc.
Stage 5: Implosion / Explosion. The entity dies due to imbalances within itself as it becomes completely paranoid, inconsistent and diseased -- as in nature, the most useful parts are picked up first by other companies and the rest is left to slowly decay.
---
My contention is that almost all companies (or departments) run by normal humans will go through these phases -- the only question is how long they spend in each phase. If the company is large enough, different departments may be at different stages too. This is not a continuous linear process -- that is, companies can move back a stage and then go forward again too.
Why does this happen? Simple -- most companies want to grow and be more profitable over time. In a finite world, there will come a time when growth is only possible by taking resources and profits away from someone else. There is no known example of an entity that has grown continuously forever -- and there probably will not be.
So the question is 'Where is Google?'. I think they are in Stage 2. They claim they have built a culture that slows down the natural forces that compel growth and profit taking ... the answer will be fairly evident in about a decade or so.
-- rv
Thursday, September 24, 2009
Microsoft Launch Party Video = Definition of "cringe"
This is worse than watching Basil Fawlty (of Fawlty Towers) in terms of cringe. Leave a comment if you survive past the first 30 seconds. If this is how M$ will market Windows 7 -- then it is dooommmeed.
Wednesday, September 02, 2009
Interesting XCode Feature (OSX IDE)
Most modern compilers perform some level of static analysis in order to check for potential issues in the code (typically for bugs like attempting to access variables that have not been properly initialised).
XCode in its latest incarnation now offers a rather interesting 'graphical representation' of how the errors will be caused/triggered.
See the screen shots (courtesy Apple Dev. Doc.) -- with the blue lines generated by XCode in the IDE. These features are great for experienced developers, but absolutely fantastic when one is just starting to learn how to program. I also like the clearer messages that the tool now shows -- a great advance compared to the typically cryptic messages that 'gcc' generates. A list of the improvements in how errors are now reported is available at the following page: http://clang.llvm.org/diagnostics.html. If all goes well for LLVM/Clang -- the days of GCC may be numbered. Who knows ... the C language may even become useful for teaching computer programming -- rather than its current purpose, which is to scare half the students out of computer science/IT.
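To give a feel for the class of bug being discussed -- a variable that is only initialised on some execution paths -- here is a hypothetical sketch. It is in Java rather than the C/Objective-C that XCode targets, since javac performs a similar 'definite assignment' analysis at compile time:

public class MaybeUninitialised {
    static int parse(String s) {
        int result; // declared, but only assigned on one branch below
        if (s != null && !s.isEmpty()) {
            result = Integer.parseInt(s);
        }
        // Returning 'result' here is exactly what a static analyser flags;
        // javac rejects it with "variable result might not have been initialized".
        // return result;
        return 0; // placeholder so this sketch compiles
    }
}

What XCode adds on top of a message like that is the drawn path through the code showing how the bad state is reached.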
Tuesday, September 01, 2009
64-bit Operating Systems....
The latest incarnation of OSX (10.6) now supports a full 64-bit kernel. Is this the future? Should developers jump on the bandwagon and go 64-bit all the way?
The short answer: for 2009 (and very likely till well into 2012) -- 32-bit will be fine. For normal desktop computing use, 32-bit will be sufficient for the next decade or so unless MS-Office and the browsers start gaining a ridiculous amount of volume in the near future and want to be contestants on "The Biggest Loser". My guess is that we are starting to hit certain cognitive limitations (of the human), and new features will be incremental adjustments, rather than massive bloat.
So, why the fuss about 64-bit? The 32-bit kernel is limited to 4GB of addressable memory -- and we are starting to get machines with a lot more memory now. But, the real issue with needing 64-bit is with how the kernel actually manages memory.
All operating systems break up available RAM into pages (OSX has 4 kilobyte pages). In order to manage these pages, the operating system uses a 64-byte (in OSX) data structure to hold some information about each page. So, if you have 4GB of RAM -- then the kernel needs 64MB of space for the memory management data structures.
The issue starts showing up when you have 32GB of RAM: you start needing nearly 0.5GB just for memory management. A 64GB server will mean that the kernel needs 1GB for memory management data structures. This is the underlying driver for operating systems moving to 64-bit land -- more than anything else. When you bring in virtual memory (the space on, say, a solid state hard-disk), these newer kernels start to make much more sense.
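To make the arithmetic concrete, here is a back-of-the-envelope sketch (in Java) using the figures above -- 4KB pages and a 64-byte bookkeeping structure per page:

public class PageTableOverhead {
    public static void main(String[] args) {
        final long PAGE_SIZE = 4 * 1024; // 4KB pages, as in OSX
        final long STRUCT_SIZE = 64;     // 64-byte per-page data structure

        for (long gb : new long[] {4, 32, 64}) {
            long ramBytes = gb * 1024 * 1024 * 1024;
            long pages = ramBytes / PAGE_SIZE;
            long overheadMB = pages * STRUCT_SIZE / (1024 * 1024);
            System.out.printf("%3d GB RAM -> %,10d pages -> %4d MB of bookkeeping%n",
                              gb, pages, overheadMB);
        }
    }
}

Running it reproduces the numbers above: 64MB of bookkeeping for a 4GB machine, 512MB at 32GB, and a full 1GB at 64GB.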
Will 64-bit applications run faster? The benchmarks on raw CPU performance say so -- but, for real-world applications (Office, Browser, Photoshop), there will be no discernible difference. Certain types of mathematical computations may run a bit faster -- but then again a 32-bit GPU will perform some mathematical operations a lot faster than any CPU ever will, so it is not a great comparison.
Sunday, August 30, 2009
OSX Snow Leopard ... Apple Retail Experience
I upgraded my primary work machine from Leopard to Snow Leopard (see image of the real animal taken in Afghanistan).
So, what has changed? Nothing really observable.
Applications do start up noticeably faster (esp. the Apple software). Apart from that ... as far as I can tell, nothing much else. There are a number of tweaks -- but my personal user experience has not changed. I do however like the incremental changes that Apple makes to their product line -- rather than ground-breaking modifications to the entire user experience -- which would be rather annoying on a machine that I use every day, esp. if I could not find stuff that I used to be able to find easily.
In terms of user experience changes, Windows 7 will be a shock to the majority that will move from Windows XP. There are a lot of changes ... enough to cause a lot of frustration -- esp. Windows Explorer and the fact that the menu bar seems to be disappearing from more and more Microsoft applications. Eventually, I feel that Windows will have a single menu bar anchored at the top of the screen exactly like Apple.
I also spent some time yesterday at the Apple Retail Shop (Doncaster, Australia). My younger son liked it a lot, esp. since they had machines placed on a kids' desk with games. This clever distraction for the kids essentially meant that I had to spend way longer than planned in the shop -- observing everything else. The really interesting part is that the shop was busy -- they seem to be selling a lot of products. I did not notice anyone walking out with computers (in the short duration I was there -- so no reflection on reality), but a lot of iPods seemed to be selling, and many more accessories. There were a lot of Apple people (in brightly colored t-shirts), so it was easy to get attention, despite the crowd. I could not place anything specific that was so compelling, but the overall retail experience is nice. The real irony is that Telstra attempted to mimic the Apple retail concept -- only the Telstra shops seem to be permanently empty, with staff in suits glued to their computers at the far end of the shop, as opposed to wandering the store enthusiastically.
The other interesting observation I made was that most of the store employees were relatively young and male. It certainly has nothing to do with the capability of older people or women -- rather it seems to be a reflection of interest and preference (or Apple has a hiring policy that breaks the law, which is unlikely).
Now that Microsoft has stated they will also be going into retail .. let's see how they compare against Apple. I'm sure it will be a great job explaining the differences between the 5 editions of Windows 7 (every day) and working in MS technical support helping remove viruses from machines.
-- rv
PS: The real weird bit is that the kid wants to go back to the Apple shop again -- no doubt to play games (at least it is cheaper than a trip to the zoo).
Friday, August 21, 2009
Creating a simple cloud ready application..
IBM DeveloperWorks just posted a fantastic article on how to create a simple application for the Google App Engine cloud infrastructure.
See: https://www.ibm.com/developerworks/java/library/j-javadev2-1/index.html
-------
Looking back around 9 years ago when I was fairly new to the Web application land -- the cost of what is currently offered by Google App Engine for near free easily ran into many thousands of dollars (software/hardware/routers alone were well over $20k). Further, if you set up custom infrastructure you ended up needing network administrators (part-time contractors at a minimum) and system administrators (to patch the O/S and monitor the infrastructure) -- this skilled work added to the underlying costs as well. If you had a good concept and wanted to take it into Internet land -- you would have spent $200k - $300k just setting up the IT infrastructure and hiring the engineers to look after it. Of course the hardware and software depreciates exponentially. In the early days start-up companies even set up and ran their own e-mail servers (most of the time very badly managed).
I still recall attending an auction a few months after the dot-com collapse -- $25k Sun Servers were being sold for less than $5k (including all the software on them -- but no one realised the legal implications of transferring the software licenses). The irony at the auction was that the only items that held value were the fancy furniture many of these startups bought (at least you did not need a $100k engineer to maintain them).
The new, cheaper infrastructure world (not to mention the reduction in deployment and maintenance costs) will contribute towards a reduction in overall IT costs -- but the real benefit is that it allows developers with good ideas to translate them into software and make it available at a fairly low cost.
The side effect of all this is that developers can come from any part of the world; the infrastructure capital requirements are now sufficiently low that skilled engineers from a lot more countries (I'm thinking India, China, Eastern Europe, Russia, Brazil) can create software and compete for market share. Will this opportunity be taken up .... I do not know the answer, but we will soon find out.
-- rv
Thursday, August 20, 2009
The future of software development is cloudy!
This post was triggered by an article I read recently about Apple Inc's Cloud Ambitions. Apple is building one of the world's largest data centres -- however they have not officially indicated a purpose or motivation. Microsoft has also embarked on a similar data centre venture (potentially to support their Azure platform). Google has its App Engine, Amazon has EC2. Adobe and Yahoo are wandering on the outer rim, but with no firm roadmaps. There are a whole stack of other domain-specific platforms out there as well (SAP, Oracle, SalesForce to name a few).
Currently, we are still in the pre-beta land for these cloudy ambitions. In another 5-7 years, these platforms will be mature. I'm going to be bold and predict that you will get the following from each vendor:
* A fully integrated IDE (one that will plug into the cloud, potentially running directly on the cloud -- i.e. runs within a web browser)
* A mature API stack (significantly more mature than what we currently have -- potentially domain specific maturity as well)
* A database system (to store/retrieve data -- file systems etc.)
* Language support for: Java, C#/VB.net, C and Python [others maybe -- but will certainly have to target either the JVM or the .NET CLI]. If Apple joins the fray at this level, expect Objective-C as well.
* Legal/deployment terms that will work for a number of different commercial domains (even finance and govt.).
* Services that will allow export of data stored on the cloud (e.g. GMail offers the ability to pull down all email to a local client if you want).
---
Why only a few languages? Cloud architectures rely on a method to distribute execution of code across multiple machines -- currently we have fairly mature platforms that will execute Java and C# applications. Support for other languages may never get the funding needed (esp. given the current financial climate) and hence will stay within academia or within a small group of enthusiasts. For a new language to take off, it has to offer something much, much more. Java has a very large library pool built over the last decade (its core strength); C# has 20k engineers just at Microsoft, and it also has some strong language features.
What does it mean for software development? New projects will start to consider a cloud platform and very likely get locked in (kinda like they do now, if they choose SQL Server, .NET/Java, Oracle etc.). The key difference will be obvious in the job ads -- companies will want Azure experience or Google App Engine experience with knowledge of a certain set of APIs/libraries.
One of the largest changes that I expect to see will be in the way we interact with database management systems. Currently, developers still have to be careful with the way they write queries and how they store data (i.e. the data structure). However, my personal experience has been that over the last 10 years developers have been able to get sloppy with the way they program because machines are fast, and most developers can now get away with inefficient code. This will extend into the data-structure and query world in the cloud. If you have Google-like retrieval at your finger-tips, why bother thinking through your database schema at any level of depth? Do queries matter that much -- just throw keywords and guess till it starts to get roughly the right data back. There are still going to be a few aspects that need a bit more attention, but nothing like we have now.
Once the clouds start to spread, the biggest change will be to the current "PC" industry. In another 5-10 years, many more TVs, iPods, mobile phones and automobiles will have full internet connectivity, quick CPUs, very likely a sizeable hard-disk and the ability to hook into the cloud (turned on by default). The question is, will we still need a separate desktop PC at home? For what purpose? If you are thinking gaming ... consoles have already won this battle -- PC-only games are no longer a growth market.
Good news: There will be plenty of work as we migrate legacy applications into the cloud. New innovations.
Bad news: More learning (hopefully, this will last our careers)
Saturday, July 18, 2009
Software Engineering: An Idea whose time has come and gone!
An interesting perspective from Tom DeMarco in IEEE Computer (via Reddit) in a 2-page article, where he questions if Software Engineering is past its prime.
See: http://www2.computer.org/cms/Computer.org/ComputingNow/homepage/2009/0709/rW_SO_Viewpoints.pdf
-----
The gist of the article is that the software that transforms the world (he cites Wikipedia, Google Earth) was not delivered by project teams that were controlled every inch of the way -- that is, they were not engineered as recommended in the text books. The whole article essentially kinda sorta states that Agile methods are fine (in a roundabout way). Tom is known as the "metrics guy", the "Software engineering guru" etc. -- so, this is quite a big statement for someone like him. Just look at the books Tom has authored.
Interestingly, the hardest part in software development has always been the 'solution definition' -- that is, defining the concept and abstract forms in sufficient detail to allow a group to work together productively and build it. The programming and construction side is comparatively easier, mainly because, given the right tools, discipline and focus, we can actually get through this aspect.
Is software engineering dead? Tom is possibly right with respect to the management perspective. However, there have been many good ideas from this field that are still very relevant -- though most are still at software construction level. For instance, I still think it is a good idea to comment code (level of detail is determined by context), plan an iteration (with the full knowledge that we plan to prepare and synchronize the team, not to execute precisely against it), measure size and complexity (for feedback to allow reflection, not for control), modular architectures are still good, testing is still critical. Most importantly, the SE field provides a general organisation and scaffolding of various aspects to help us teach software development to the next generation.
-- rv
Wednesday, May 13, 2009
It costs $30,000 to fill up your iPod (Says Microsoft)
The M$ marketing machine has now completely lost its mind (or) we have a space-time rift.
Their new ad states:
Problem: It costs $30k to fill a 120Gb iPod. (Assuming $1/song and 30k songs)
Solution*: Pay $15/month and get a Zune (+$300 for the Zune)
*You lose all your music the day you stop paying. Then again, in this day and age -- what if M$ gets credit crunched? This is quite possible given their current marketing brain.
Two possibilities:
(a) Microsoft does not 'get' that people have friendly contacts* and how they interact and use the internet
(b) This 'ad' was meant for an alternative reality and somehow it leaked into our world -- caused by a space-time rift.
* The definition of a friendly contact on the internet is an entity that will freely share stuff they may or may not have actually purchased/created.
I find that the Internet Radio thing with its large set of channels quite sufficient for my music fix.
PS: If you are worried about the space-time rift, you can learn more about it at this article.
Friday, April 24, 2009
Cheeseburger in a can?
If you are the wandering type and have decided to go on a long trek, can you get a cheeseburger in a can? The answer is .. "yes you can". The Germans (of all people) have come up with the solution for a few Euro.
Well, in case you are saying "I like extra cheese" -- esp. the pre-melted stuff -- fear not, for Kraft has a solution for you in their "Easy Cheese" range.
I guess, this must be what progress and civilisation is all about.
If you are interested, you should read a more comprehensive review of this product (Gizmodo recently also attempted to consume it).
Fortunately, this nourishment is not available in Australia (yet!).
Saturday, April 18, 2009
UI Design (or lack of...)
Some designers never think .... These are a couple of links that came up on Reddit.
http://ingoodhands.com (A Financial Company home page)
http://now.sprint.com/widget/ (Sprint Telecommunications)
----
Think about the time a group of people spent on actually creating these sites. There is absolutely no rational explanation ... except, maybe, this is the way the geeks and artists are getting back at the telephone companies (for their plethora of nasty policies) and the financial companies for the current economic mess.
Interestingly, both of these user interfaces would look "so" cool for a 4-5 second shot in a movie. This is possibly how they got user acceptance -- they just showed the project sponsors a couple of short video clips.
--------
If you are into UI design, do read the article from 37 Signals - Learning from Bad UI design.
There is always the classic - UI Hall of Shame on what not to do (it is quite old, but will certainly bring back memories from the past).
My fav. one about tabs from the UI hall of shame is ...
There was a time when MS Word got pretty close to this kind of mess with tabs. Thankfully, these days, the designers heavily use grouping icons and lists on the right hand side to provide drill-downs as needed (Eclipse IDE especially).
-- rv
Thursday, April 09, 2009
Java in the cloud...
Java is finally available on Google App Engine. The initial developer preview release that Google announced yesterday provides the basic infrastructure needed to build web applications using Java and deploy / run it on the Google infrastructure (their cloud computing infrastructure).
The process is fairly simple - write Java code, bundle it up as a WAR (web archive) and deploy it to Google infrastructure. That is all. They will scale it as needed. The best part is that Google is providing access to their Data store (based on Big Table) via JDO (Java Data Objects).
JDO provides persistence to Java objects, and the retrieval of data is via a set of SQL like queries. The current generation JDO is pretty mature -- it handles all of the standard one-to-one, one-to-many relationships, inheritance etc. The actual persistence is itself configured via simple annotations. Google data store is highly optimised for web application needs -- i.e. reads and queries are super quick and easy to do. Writes are a little bit slower. They have transactions and all of the standard features one would expect to see in any data store.
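As a hedged sketch of what those annotations look like in practice (the entity and its fields are hypothetical, and I am assuming the SDK's standard 'transactions-optional' persistence configuration), persisting an object is roughly:

import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

// Hypothetical entity -- persistence is configured entirely via annotations.
@PersistenceCapable
class GuestbookEntry {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id; // key generated by the data store

    @Persistent private String author;
    @Persistent private String comment;

    GuestbookEntry(String author, String comment) {
        this.author = author;
        this.comment = comment;
    }
}

public class DatastoreDemo {
    public static void persist(String author, String comment) {
        PersistenceManagerFactory pmf =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");
        PersistenceManager pm = pmf.getPersistenceManager();
        try {
            pm.makePersistent(new GuestbookEntry(author, comment)); // one call to store
        } finally {
            pm.close(); // always release the manager
        }
    }
}

In a real application the PersistenceManagerFactory would be created once and shared, since it is expensive to construct -- but the point stands: that is the whole of the persistence code.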
---
This is fairly ground-breaking in my opinion. There are a large number of people that already know Java -- and given how simple it is now to create and deploy basic applications, we are likely to see Java regain its lost shine. Unless Microsoft starts providing quick access to something like this with .NET (they have it in development but not sure of its current status), many of the newer start-ups will start to closely look at companies like Google for hosting their applications.
In a nutshell, developers no longer have to worry about:
1. Server operating system (and patching it)
2. Database server (esp. load balancing, writing code to scale up to multiple database servers)
3. Mail servers (installing them, configuring them, providing them with sufficient space for persistence)
4. Web server (securing it, bandwidth)
5. DoS attacks, network configuration
6. Memory cache (Google App Engine provides access to their own memcache for temporary persistence)
7. Messaging (Google is providing access to their underlying messaging infrastructure)
All this is infrastructure -- you do not need to install, deploy, maintain or even think about it.
Why is hosting a big deal? Because most developers know very little about managing networks and infrastructure. The U/G courses in CS and software development focus heavily on building solutions -- the deployment and management aspects, if covered, are presented at minimal depth (since this is not the core focus of the course). Most developers do pick this up over time on real projects, but never really enjoy it or care much for it. Most developers I know enjoy building new things, not poring over log files and worrying about peak loads, upgrading and patching servers, DoS attacks, monitoring email traffic etc.
From the business perspective, it is a lot cheaper to have someone else provide reliable and scalable computing infrastructure (this is a skill set that very few companies can afford to build and maintain). Not to mention the start-up costs needed to roll out your own infrastructure. Cloud computing infrastructure is cheaper than LAMP, esp. since it takes 10 minutes to sign up and have the entire infrastructure up and running.
With the emerging cloud computing infrastructure, developers can go back to focusing on solving user domain problems and letting the machines / experts worry about the installation, configuration and administration.
Limitations? As exciting as this all is, we are still in early days -- some applications will still need custom infrastructure (for instance, net banking). But, in another 3-5 years these infrastructures will get better, and compete for developer mind share (Amazon has EC2, Microsoft will have their platform soon enough, IBM has one in the works). Sadly, the only company that has .NET cloud infrastructure for now is Microsoft.
Interestingly -- this is how computing used to be in the heyday of mainframes. I guess history does repeat. We are back to just paying for CPU, disk and bandwidth while someone else worries about the underlying infrastructure.
-- rv
Monday, March 16, 2009
Windows 7 - Will it take off?
Many years ago, I wrote a blog post that pretty much stated that Windows Vista would have a very, very slow uptake. I was kinda sorta right.
Windows 7 is about to be released. Will this have a fast uptake in the market place?
Short answer: The uptake by business users will be about as lethargic as it was for Windows Vista. New computers will come with it pre-installed, and most users will likely keep it (rather than downgrading to XP).
What is the basis for this bold assertion?
Reason 1: Corporate support staff know Windows XP inside out -- most know the typical issues and resolutions. They can do it in their sleep. When we bring a new O/S into the equation, this is the area that ends up being the bottleneck. Support staff just will not be able to pick it up and have it supportable in anything under 24 months (minimum). Sadly, this will mean that Microsoft will struggle to gain sufficient traction, and will very likely have Windows 8 ready by then.
One other reason -- Windows 7 does not have any ground-breaking features that most users are dying for and will care to part with hard-earned cash for (esp. in a recession). It will be about as exciting as a new operating system for your mobile phone -- most people do not know nor care. The geeks will go gaga over it, the tech press will fill pages about the fantastic features, the Apple fans will still be cool -- but most users will want to know if Facebook will render nicely on it and if they still need to install a virus scanner.
Yet another reason -- IT budgets are getting chopped and trimmed ('right-sized' is the term these days). Companies will want to know the cost savings of an upgrade to Windows 7. Microsoft had better have really good answers on this front (and people are not just going to accept Gartner reports or claims made in PC World as facts).
Sadly ... we are pretty much at the cusp of a new era. One where PC operating systems slowly disappear into the background, reliably doing their job, just like they do on mobile phones, game consoles and a billion other devices.
-- rv
Saturday, February 21, 2009
Credit Crisis Visualized...
The following videos explain quite a lot of things with a nice visualization. A tad simplified, but correct. Both Part 1 and Part 2 are embedded below (together they run for approx. 10 min.). If you want a more detailed explanation, Chris Martenson has a Crash Course (a few hours). If you want a rigorous and academic version, Prof. Steve Keen's blog (of Uni. Western Sydney) is well worth reading, and his research is also worth supporting.
Part 1:
Part 2:
Friday, February 20, 2009
Mobile phones will have a Universal Charger...
Most mobile phone companies have signed up to have a universal charger by 2012 (ZDNet article). It seems to be a mini-USB jack.
The question is - will plug-in type chargers be the preference in 3 years time? There is a company called Powermat that offers a mat that will charge a phone (or mobile device) without any wires. You simply place it on the mat, and it gives it the juice (see their site for details on the technology).
There are a number of companies that are also experimenting with mini fuel cells and other newer batteries that will run on cigarette lighter fuel for about 1 month with no top-up (a very small quantity is used -- so it should not ignite and blow up, unless it is used in a Hollywood movie, in which case it will be used to level a small city).
The most interesting aspect of this news is that Apple and Palm did not sign up. My bet is that they are planning a charger with no wires soon. And in 3 years time, most others will just follow down this path. The Palm Pre, which is out soon, has a wireless dock (called the Palm Touchstone). Apple most likely is close to having their version out soon.
So what will happen? We are likely to have many different wire-free charging stations, and a universal cellphone charger that is DOA. Let's hope they standardize the wire-free technology a little faster.
-- rv
Thursday, February 19, 2009
Chompr - Hamburger grasper....
My mind just cannot compute this. More at - Gizmodo. See the comments for some serious entertainment.
I wonder if they have conducted any Usability trials on this device?
So does this have any practical use?
These are in the class of devices that one can consider "cool". Kinda like those fancy user interfaces in sci-fi movies, where the objective is to design a user interface that looks nice on a super large screen [ridiculously large fonts, insanely rich colors, useless animation effects, exotic sound effects, animated backgrounds etc. etc.]. It would be hell if one were forced to actually use these user interfaces every day. Personally, I classify the animation effects of Windows Vista (screen flipping) and the new UI themes in Linux into the same bucket. They look fantastic in the demos, but have no real value beyond that.
Will the burger holder work? All I can see is that anyone using this will now have to clean their hands as well as the device. Not to mention all that accumulated build-up of sauce on this contraption over time.
Will it sell? Such devices will move in some volume around Christmas and potentially Father's Day. To be avoided at all costs for Valentine's Day or Mother's Day.
-- rv
Friday, February 13, 2009
Monkey see ... Monkey do...
Microsoft will soon open retail outlets (just like Apple).
If that news alone is not weird enough, given the current economic climate ... it gets worse. They actually hired a Wal*Mart veteran to help them with their retail store. So, I guess they are planning on selling Windows 7 right out of shipping containers in dimly lit stores at $2.00 per box.
Oh yes, they are also going to provide an App Store for Windows Mobile just like the iPhone has.
I just hope Windows 7 sells well .... they are going to need the $$ to plug a few more holes.
Thursday, February 12, 2009
Interesting test questions (from mid-term exam)
This is a link I picked up on Reddit:
http://econpage.com/201/exams/mt1/index.html
See questions 19, 20 and 21.
I'm now going to spend valuable time thinking of viable ways to translate this into Programming-related (or Project Management-related) questions. Wondering if the University will invest some $$ into research material acquisition .... [[it will all make sense once you look at the actual questions]]
Wednesday, February 11, 2009
ANZ Money Manager again...
I recently wrote a post about ANZ Money Manager. My review was not exactly kind, esp. regarding their choice of Flash animation. As far as I can see, the default page has changed (it was probably something they planned to do). The new Flash animation is a little more interesting, and most certainly different. I still think a simple static page with images of the actual screens would be more effective -- and it may happen as they build it out.
The really interesting part was that ANZ actually responded to the blog (you can see their response at the end of my blog post; it was polite and asked me to send an email to their support staff raising my browser compatibility issue). It was really good to see a solid and pro-active team responding to feedback. I will also send in my browser rendering issue directly (with a screen-grab, so they can see and correct as needed).
Now, the real question is -- "How did ANZ actually find my blog post?". I was chatting with Andrew this morning about that, and he suggested doing a quick search to see if the blog was picked up by Google.
So was Andrew right? Well, he was spot on -- if you search for "ANZ Money Manager blog", Google shows the post I made as the top hit.
-- RV
What should a plan contain?
I was just reading some of the 'new plan' from the US Govt. about how they will get the economy fixed. I have not been formally training in Economics, so to some extent I'm not quite sure I understand their mental model and vocabulary.
But from any sensible perspective, a plan should consist of the following parts (sketched as a data structure right after the list):
- A statement of the current problem (where are we?)
- What are the objectives? (Where do we want to go, and why?) Objectives need to be S.M.A.R.T.
- Assumptions made in order to determine the problem as well as the objectives
- What resources are available, and how will they be put to use to achieve the objectives?
- What is likely to go wrong? (Risks!)
- Sanity Check -- Is the plan internally consistent? Is the plan likely to work in the external context?
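To make the shape of this concrete, here is the checklist above as a tiny data structure (a minimal sketch in Python; the field names are my own, not from any standard):

from dataclasses import dataclass

@dataclass
class Plan:
    problem: str             # where are we?
    objectives: list[str]    # where do we want to go, and why? (S.M.A.R.T.)
    assumptions: list[str]   # what must hold for the problem and objectives to make sense
    resources: list[str]     # what is available to us
    actions: list[str]       # how the resources will be put to use
    risks: list[str]         # what is likely to go wrong
    sanity_check: str        # internally consistent? viable in the external context?

The satirical 'plan' articulated below is simply an instance of this structure with the fields filled in.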
To some extent, the common item that comes to mind for most people when the word 'plan' is used is a 'Gantt Chart' or something similar, like a Work Breakdown Structure. These explain what work will get done and by which resource. Unfortunately, in many cases the problem statement and assumptions matter a lot more. The human brain is wired to look for solutions based on how a problem is framed.
The actions from the US govt. suggest that they see 'lack of consumption' as the problem. But, what if we state the problem as debt driven consumption? Would the current solutions be still valid?
Now, back to the 'new plan' by the US govt. As far as I can see, neither the problem, the objectives, the assumptions, nor the resources available have been stated clearly. As far as I can tell, no validity checks have been made.
So, here is my attempt at articulating it...
Problem: People are not spending money, causing the economy to shrink (as it is currently measured).
Goal: Expand the economy by getting people to spend money again.
Assumptions:
1. The economy can expand forever -- no questions.
2. Resources available are infinite.
3. Where resources are finite (as evident to a 4 year old), it is possible to increase productivity infinitely, because humans are creative, inventive etc. This will imply resources are infinite.
4. When we do not have resources today, we can borrow from the future or other people. Paying back these people is optional.
5. Energy is cheap, will continue to be cheap forever.
6. People around the world will continue to work for low wages, face hard choices, pollute their local environments and send their savings/manufactured goods to USA forever.
7. Most of the people in the USA will focus their efforts increasingly towards inventing/'designing' new gadgets. People from the rest of the planet (particularly Asia) will take these designs and convert them into gadgets for the consumption/enjoyment of the Americans. This will continue forever, because people around the planet can think of nothing better to do.
8. Climate 2.0 will never be released.
9. People from around the world will continue to give precious resources and energy to the USA in exchange for Hollywood movies.
10. People are happy to be employed. The salary they take home is not as important as a job.
Resources Available: US dollars, good-will, a super-large military, land, buildings, civil servants.
Actions to solve problem: "Spend money". Where money does not exist, borrow it. If borrowing is not possible, print it.
Risks: The money printing press may run out of ink as we add more zeros. Mitigation: Force everyone to use digital currency.
Sanity Check: All assumptions are valid. External environment/Global political and social context will be static.
----
So, do I think it will work? .... I'll let you arrive at your own conclusion. But I'm looking towards the day when hard currency will be completely banned in favor of digital currencies and transactions, due to the risk outlined above. There is going to be a lot of work for IT professionals :)
Friday, February 06, 2009
ANZ Money Manager....
ANZ Bank recently launched a new product called 'Money Manager'. The aim is that it will take a 'read-only' snapshot of all your bank transactions and then display them visually -- summarize the data -- and potentially scare you half-to-death once you know the real state of affairs. It potentially will have links to Doctors and other medical practitioners.
The idea is not new; this has been done quite well by a product called 'Mint' in the US. That service gets just about every award there is for online financial management tools.
---
So, what can I tell you about ANZ Money Manager? The site is pretty much a work in progress (it even sports the Beta logo) -- but it has some very interesting design choices.
1. When you attempt to learn about the service -- it spends about a couple of geological eras actually loading a massive Flash animation. It has a cute robot (not the best metaphor for financial matters, btw). They pretty much broke the first rule of web design -- making it rather hard and painful for anyone remotely interested in their product.
2. The Flash animation sadly now attempts to actually lay out and render 'text'. Yup -- they decided that HTML was not good enough, and the entire product information is actually presented from inside the Flash applet. (Thankfully, I found a tiny link at the bottom of the page that says 'Plain HTML'.) My recommendation if anyone from ANZ is reading this blog -- please, please make the 'Plain HTML' version the default page that anyone sees.
3. You want to learn about a product -- so one would expect to see some nice images, screen shots -- anything worth actually looking at. ANZ has decided that they shall provide all content in 'text' -- embedded inside a Flash applet. Yup -- the only animation/image is a silly robot that makes weird noises at random intervals. I'm certain that the designer was inspired by the 'MS Office Robot' (which was an option that you could use to switch from the paper-clip help dude). Sadly, it just does not work -- the whole thing is a woeful mess in need of some serious adult supervision.
4. Thankfully, the actual site is much better laid out. No more Flash applets or robots. The form that they display for registering your account seems to have been tested only on IE 6, so all other browsers beware -- the page does not render properly.
5. I have not been brave enough as yet to provide them with my Netbank login details. They assure me that it is safe! The technically interesting aspect is that they are 'scraping' the websites of other online banks in order to get the information that they need. This would mean that they did not get back-end read-only access from the external parties. It would be quite a challenge to keep updating the scraping software as those external parties update their online banking systems. I would actually go as far as saying that this is going to be quite messy to maintain in the long term. I hope they can reach read-only data sharing agreements soon -- they should have most of the underlying infrastructure already in place, since their ATMs talk to each other and they can transfer funds between each other.
---
So will it work? ... My take is that it would be fine for ANZ customers, but scraping financial information from 100 different websites is not a viable long-term solution. It may be possible, and something you can push along for a period of time -- but certainly not a long-term solution that you can rely on. We are *not* scraping weather information, sports scores or real-estate listings (I have written simple software to do these myself) -- we are talking about financial data. Any minor errors can cause a lot of heart-ache for far too many people -- not to mention the support nightmare.
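To see why scraping is so fragile, here is roughly what such software looks like (a toy sketch in Python; the URL, the markup and the class names are invented for illustration -- I have no idea how ANZ actually does it):

import requests
from bs4 import BeautifulSoup

def fetch_transactions(url: str) -> list[tuple[str, str, float]]:
    # Fetch the (hypothetical) transactions page and parse it.
    page = requests.get(url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    rows = []
    # Brittle assumption: each transaction is a <tr class="txn"> with
    # date, description and amount in a fixed column order. One site
    # redesign and this loop silently returns garbage, or nothing.
    for tr in soup.select("tr.txn"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        date, desc, amount = cells[0], cells[1], cells[2]
        rows.append((date, desc, float(amount.replace("$", "").replace(",", ""))))
    return rows

Multiply that fragility by 100 banks, each with its own login flow and page layout, and the maintenance burden becomes obvious.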
I'm happy that this option is starting to become available -- maybe the 'Which Bank' will notice :)
-- rv
Wednesday, January 21, 2009
Economic crisis - yet another post....
This is a post to explain the current economic mess from my perspective.
I've been following this economic crisis quite closely (not quite an obsession, though) and in a nutshell the whole mess can be described as an issue of "solvency -- not liquidity".
Solvency crisis = Business model is broken, unsustainable -- i.e. you cannot make a profit no matter what, with the current approach and strategy. Example: US car manufacturers (even when they make a small profit on cars, they give that all up funding pension/health plans). This crisis can only be solved with a change in culture/habit.
Liquidity crisis = This is more to do with cash-flow. The business is solid, but some payments are a little late (or) a sudden unexpected expense has cropped up. Example: fire gutted the warehouse; an infusion of $100k is needed to get over the next month before the insurance claim comes through. This can be solved with a short-term loan.
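In programmer's terms, the distinction is roughly the following (a toy sketch with made-up numbers, nothing more):

def is_solvent(annual_revenue: float, annual_expenses: float) -> bool:
    # Solvency: does the business model produce a surplus at all?
    return annual_revenue > annual_expenses

def is_liquid(cash_on_hand: float, bills_due_now: float) -> bool:
    # Liquidity: can we pay the bills that are due right now?
    return cash_on_hand >= bills_due_now

# Solvent but illiquid: profitable on paper, but the cash arrives late.
print(is_solvent(120, 100), is_liquid(5, 20))    # True False -> a loan fixes this
# Liquid but insolvent: cash in the bank today, business model broken.
print(is_liquid(50, 20), is_solvent(100, 120))   # True False -> only a change of habits fixes this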
The current economic crisis is mostly about 'solvency': a lot of companies and entire sovereign states (Iceland; potentially the UK; most certainly the US) are spending way, way more than they earn. They are covering the shortfall with 'credit' (borrowing from the future or from other countries).
Unfortunately, the response from almost all governments so far has been to increase the credit limits -- as in, they are acting as if it is a liquidity problem (for a whole lot of reasons which they argue are valid). They hold to the belief that all companies are solvent and are just going through a tough period: if we give them a life-line, they will come out in good shape. A good example of this is the US car companies requesting money from the govt.
Making a company/state solvent would require deeper cultural changes -- which does take a longer period of time (potentially decades at the state level). Private enterprises that cannot get a govt. bail-out are pretty much doing this by cutting their expenses (salaries, new investments etc).
So what next?
- The insolvent/inefficient/poorly-managed companies will die (if they were not hugely profitable in good times -- imagine their position now)
- The companies that face sudden cash-flow issues are also in trouble because credit is harder to get now.
- Companies that have been built/expanded with debt are in trouble. They have to re-pay the debt with reduced revenue.
- Many people are being cautious and behaving differently -- they actually are moving towards saving before spending -- they are just questioning if they need yet another piece of plastic -- they are planning to keep their cars beyond 5 years -- they are looking at doing other activities beyond consumption of things.
Sadly governments will try to solve this problem by .....
- Cutting interest rates -- will not help if you are struggling to pay the principal
- Printing money -- nice in the short-term, but it comes back via inflation in the long term
- Extending credit to companies/banks -- will only work if you can actually make them profitable
- Cutting taxes - May help if you cut govt. expenses (i.e. cut jobs, spending). It will just move the pain around.
- Spending money - again, great in the short-term to keep some people employed, but to date no entity has become wealthy by consuming and spending. And the worst part is that most governments get the money they spend via taxes, now or in the future.
Is there a solution?
Free market economies are bottom-up organized systems. They are not directed and planned -- billions of micro-decisions made at the local level created our current economic system. Being a complex non-linear system, simple acts like 'cutting taxes' may or may not work; there is no way of knowing what the impact really will be. One thing is for certain -- the current mess took many, many years to create, and it will take us time to change the habits that caused it.
Now for the crux -- there is almost always a solution approach, but it depends on what has been framed as the problem. Almost all current solutions are being positioned to take us back a few years -- so we can live off credit and perpetual virtual growth. But what if the existing model was the problem, and the current crisis is the start of a solution?
-- rv
Saturday, January 17, 2009
Bird and Fortune - A skit regarding the current financial mess...
Follow the link:
http://www.calculatedriskblog.com/2009/01/bird-and-fortune-silly-money.html
(2 YouTube video links). They also did a (IMO much better) skit last year. This is a follow-up to that one.
-- rv
PS: A little over 150k layoffs have been announced just this year (i.e. in 3 weeks) by large companies in USA. The small/medium enterprises + self-employed must be going through a far tougher time.
Friday, January 16, 2009
Interesting links...
If you have ever wondered how they inspect high-voltage cables .... even if you did not care (like me) .. check out the video on this link -> http://www.neatorama.com/2009/01/14/high-voltage-cable-inspection/ (from Reddit)
The whole story and video are quite mind bending stuff.
---
Our world might be a giant hologram (New Scientist Article)
This is an area of research that has always piqued my interest, driven by an innate curiosity regarding the nature of the world and self. The article talks about some new unexpected findings that suggest the holographic universe theory may have some experimental evidence (certainly not conclusive, but it gives hope to the 'may be' argument. It also starts providing some additional data for the string theorists that have been wandering the mathematical abstraction land for far too many years).
Another area that I study is eastern philosophy (esp. Vedic philosophy and Advaita), which holds certain interesting fundamental beliefs; findings like this would kinda-sorta give modern scientific validation to some of these age-old ideas:
1. The whole world as we see it is nothing but 'maya' (a near translation would be illusion). In effect, they state that nothing we see/perceive is real; it is a projection from a singularity (brahman -- though it is tempting to match this up with a god, that is not quite right).
2. The key term is singularity. The belief is that there exists only one. The fact that we see multiplicity (or duality) is purely due to maya, and it is not the fundamental truth. The word 'advaita' in Sanskrit quite literally means 'not dual'. The whole philosophy is built on a negation argument (rather than describe its properties, they go down the path of negation and state what it is not).
I've oversimplified here ... but it is quite fascinating to see this 'may be' from the physicists.
-- rv
Thursday, January 15, 2009
Project plans in software projects....
Some key concepts to get out of the way first ...
Project: Has an objective, a clear start date and a specific end date. If these are missing, a different term may be more suitable (undertaking or a venture come to mind).
Now to the plan and a few of my issues with current generation project management tools.
A plan has the following core components:
- A break-down of the work that needs to be completed (best guess anyway -- can be determined reasonably well for the short-term, but gets harder as we move into the future)
- Resources that will undertake the work (Can be allocated with some confidence at the 2-4 weeks scale)
- The order in which work will take place -- a schedule of sorts with a time-line
------
Now to the real interesting part - each of these components from a 'problem framing' perspective require very very different thinking models.
1. Work breakdown is a 'decomposition problem'. We need to consider the level of detail/abstraction. But it is generally a good idea to have work expressed and communicated as a set of outcomes (much easier to know if you have actually completed the task this way)
2. Allocating resources is, well .... an 'optimization problem'. We have a fixed pool of resources with certain skills and knowledge, and we need to allocate these for the most optimal outcome. A first pass of this can be done without taking the time-line into consideration (though it is tempting). Again, a good practice, because you are not preemptively thinking too far ahead.
3. Scheduling is yet another 'optimization problem', only now you have to take all aspects into the equation: the overall strategy, the actual work as decomposed, people, and time/cost.
Planning is a complex problem solving activity, with some distinct problem types, each of which requires a slightly different hat and frame of thinking (see the sketch below).
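If a tool actually enforced this separation, even the data model could be staged, with each phase exposing only its own fields. A rough sketch of the idea (the names and structure are mine, not from any real tool):

from dataclasses import dataclass, field

@dataclass
class Task:
    # Phase 1 (decomposition): only the outcome-oriented description exists.
    name: str
    outcome: str
    # Phase 2 (allocation): an assignee is chosen; the time-line stays hidden.
    assignee: str | None = None
    # Phase 3 (scheduling): dates and dependencies are filled in last.
    start_week: int | None = None
    duration_weeks: int | None = None
    depends_on: list[str] = field(default_factory=list)  # names of prerequisite tasks

A 'reference frame' in the UI would then simply be a view that shows only the fields belonging to the current phase.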
----
So far so good. Now for the mess-up by the tool vendors. Almost all traditional project management tools provide a user-interface model that requires the user to think about all of the above activities pretty much at the same time. So, we create a task, allocate resources, set start/end dates and include dependencies. Good planners innately understand the above process and tend not to get too carried away by the tool's focus, but when we teach project planning -- there is a need for a tool that essentially forces a certain structure (or at least allows the user to choose a 'reference frame' which hides all non-relevant information).
Some of the work done by David Allen in GTD (Getting Things Done) essentially takes this perspective, where people are made to focus on tasks from a context.
I do not want to directly blame the tool vendors for producing the current generation of planning tools, just a request to see some additional features that allow the user to 'hide' information. I also strongly believe that this would most certainly improve the quality and effectiveness of the plans generated via this process, even if it takes a little longer due to the iterative nature of the process described above.
-- rv
Wednesday, January 14, 2009
Lectures ... boring?
I saw this article on NYTimes (via Slashdot) about the changes being made at MIT to the lecturing method. In a nutshell -- move away from the 50 min. lecture in large halls towards smaller classes that are more activity focused.
Having been lecturing for way too long (I think I just crossed the decade mark recently - *sigh*), I actually wanted to add a few of my own observations and opinions/reflections regarding lectures and the educational model in general.
1. It is very very hard to focus, listen and actually absorb effectively if the lecture is longer than 40-50 minutes.
2. My own experience (from listening to lectures, guest speakers and sessions at conferences) has been that after about the first 20-30 min. the mind wanders off and one has to work to stay aware and attentive. Not impossible, just a tad tedious. Your mind starts to put in more effort just to stay focused, rather than absorb and understand.
3. The content on the lecture slide is more effective during review and revision (typically undertaken in the last weeks of the semester in order to prepare for the examination or to solve an assignment). In order for this to actually work, it is really important to take notes and put down annotations on concepts .. esp. note stuff that one is not quite sure about.
4. More than 50% of the students stop attending lectures after the first few weeks of the semester (good to see that they are having the same problem at MIT. So in a way, students are similar in some aspects in other parts of the world).
5. Real learning tends to take place when students actually *do and reflect* in a structured approach on their work, rather than just listen and guess.
The strangest part of IT education is that we expect students to actually learn by 'listening' and creating mental abstractions and connecting them all in their heads, when the content is presented as bullet points in a long 2-hour continuous session. It would be like teaching someone to play the guitar in a 'lecture hall', concept by concept, for 2 hours (then setting an assignment to compose new music and play it) ... and, of course, wondering why students did not quite get it. The real wonder for me is how much students actually learn, given the teaching method.
-----
So why is it all like this and can it be fixed?
The strange aspect is that I never realized all this when I first started teaching. I joined the Faculty, essentially mimicked existing senior staff and added a few of my own spices to the mix. This reflection and understanding only started to form after the first 3-4 years of following the existing model. The odd part is that I never thought, at that point in time, that the whole approach was not the most effective. The long lecture, a set of assignments and a fixed examination were the model that I was used to for most of my University life. Simply put, I did not have a frame of mind or the vocabulary to think outside of the box. I never even thought to question it, except after seeing unfortunately high fail rates (the jolt is when students you know are capable start failing).
Can this be fixed? Will I make changes?
The current models evolved and are to some extent focused on 'mass education'. If the system has to take 300-500 students and teach them all Introductory Programming at a reasonable price and in a fixed time-frame, then it is very challenging to shift from the current model and still be effective. Alternative methods like the one being touted at MIT are nice and possible, but if each classroom costs US$2.5 million, it does not come cheap (the material development costs go on top of that -- very few Universities can consistently invest $10 million developing a single course).
Will I make changes? I have over time made adjustments to the overall teaching method and in the selection of the assignment problems, with varying degrees of success. I will continue this process. The one really hard constraint that makes it challenging is the time-boxed model of education. All students are expected to learn in 15 weeks, and move to the next stage with the full knowledge of that module. The reality is that students learn at different rates and a more flexible learning model is needed (something for me to consider when I start my own University, I guess).
A closing thought ... the existing system, for all its flaws, is still surprisingly effective and substantially better than random meandering. Almost all of the teaching staff are quite aware of the challenges (I am not alone).
I am confident that it will continue to improve (some of the efforts of my colleagues make me certain of this). We as a Faculty are much better now than we were many years ago, and we will continuously improve (this, despite a substantial increase in student diversity and numbers).
I am looking forward to mentoring the capstone projects (Professional Software Development) and developing a couple of new subjects this year.