Wednesday, September 08, 2010

Programming mobile phones (Part 2) .. Android wins

I have made more progress on the mobile phone subject. We are going with Android as the platform, but the bulk of the subject will be platform-agnostic.

The initial plan was to use the iPhone as the teaching device (driven by its popularity). However, after carefully considering the practical aspects (the need for dedicated Mac labs, teaching Objective-C and the learning curve involved), we are going with Android.

Broad topics that we will cover are:

1. Mobile devices -- Hardware and Operating Systems (we will cover general principles with Android & iPhone as case studies)
2. Interaction Design and UX (mainly usability-related aspects and applying a design approach that works well -- navigation and state charts)
3. Development Tools, Libraries and Frameworks (Android and iPhone. Will add Windows Mobile 7 if sufficient detail is available)
4. Design Patterns for User Interface Development (Splash screens, Status updates, MVC, Passing information around, modality, state handling, event handling etc. -- see the sketch after this list)
5. Data Handling (File, Network I/O, Local DBMS like SQLite, Resource bundles)
6. Mobile Web Applications (Concepts, Principles, Design patterns, HTML5)

7. Programming mobile devices (Concept-Design-Program-Deploy. Android as the dev. platform)
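
To make topic 4 a little more concrete, here is a minimal sketch of "passing information around" between two screens using Android Intent extras. The activity names and the extra key are invented purely for illustration -- this is one way the pattern could be shown in class, not a prescribed solution.

```java
// Minimal sketch of passing data between two Android screens (activities).
// Class names (NoteListActivity, NoteDetailActivity) and the extra key are illustrative only.

// --- NoteListActivity.java ---
import android.app.Activity;
import android.content.Intent;

public class NoteListActivity extends Activity {
    // Called when the user taps a note in the list (list wiring omitted).
    private void openNote(long noteId) {
        Intent intent = new Intent(this, NoteDetailActivity.class);
        intent.putExtra("note_id", noteId);   // pass state forward explicitly
        startActivity(intent);
    }
}

// --- NoteDetailActivity.java ---
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class NoteDetailActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Read the value back out; supply a default in case it is missing.
        long noteId = getIntent().getLongExtra("note_id", -1L);
        TextView view = new TextView(this);
        view.setText("Showing note " + noteId);
        setContentView(view);
    }
}
```

In class this could be contrasted with stuffing everything into shared globals, which leads naturally into the modality and state-handling discussion in the same topic.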

The teaching method will be based on lectures with demos, spikes, two short tests and a portfolio.

I am currently working with a one-hour lecture plus two-hour lab session model.

We are also planning to develop 3- to 5-day short courses that will focus on training iPhone and Android programmers.
========

One of the interesting things I find about touch devices is the complete lack of the concept of 'tool-tips' that appear as you hover over a button or a link. This essentially means that the icons have to be really well designed to convey the meaning of the action, or we need labels -- especially when you first start playing with an application. But given the small screen size, labels are a challenge -- so what if the user presses a button just to learn what the icon does?
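
One possible workaround -- sketched here purely as an assumption, using the Android APIs we are adopting, with a made-up helper class -- is to show the label on a long press, so a normal tap still triggers the action:

```java
// Sketch: surface an icon's meaning on long-press, since touch UIs have no hover tool-tips.
// The helper class name and label text are illustrative only.
import android.view.View;
import android.widget.ImageButton;
import android.widget.Toast;

public class IconHintHelper {
    // Attach to any icon-only button: a tap performs the action, a long-press explains it.
    public static void attachHint(final ImageButton button, final String label) {
        button.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                Toast.makeText(v.getContext(), label, Toast.LENGTH_SHORT).show();
                return true; // consume the event so the normal click does not also fire
            }
        });
    }
}
```

A long press is not particularly discoverable either, so this only softens the problem -- well-designed icons and first-run hints still matter.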

There are a number of other subtle aspects that need to be carefully considered when designing applications, especially those related to algorithm efficiency, data capture/handling/representation and storage.

-- rv

Thursday, September 02, 2010

Programming mobile phones ...

I am currently designing/developing a new subject that will teach students how to build software for mobile devices. It is proving to be a real challenge, especially finding a suitable balance between showing how to use the APIs and teaching the higher-level design/engineering concepts.

Why is this hard? For a number of reasons, universities want to be seen as places of 'education' rather than 'training'. That is, they want the students to gain a greater depth rather than just knowledge of the API. Personally, I prefer the training-first, principles-later approach. I find that the best way to get students moving forward is to get them started and let them learn by example (doing stuff, even if it is a bit sloppy) -- and build deeper understanding at a later stage.

Why not start with concepts first? The simple answer is that, by starting with doing, the brain has a reference point to work from and the conversations are a lot more meaningful. A weak analogy is that we do not start two-year-old kids on language semantics/grammar and spelling first -- we just talk to them and worry about the semantics of the language later. That is, learning is via a process of incremental refinement -- and most of the time we do not even bother correcting poor grammar.

Will they pick up bad habits if we do not give them deeper concepts first? Well, that depends on the teacher. If you use sloppy and poorly thought-out examples -- then sure, they will pick up bad habits and potentially do a lot of damage to their learning.

Back to the mobile phones ...

I have not programmed a mobile phone at any serious level, except for a small piece of JavaME software that convinced me that JavaME was a dead technology. Currently, I am systematically learning how to program an iPhone, taking a lot of notes to reflect on my own learning experience.

Based on progress so far ...

  • The baseline knowledge required to build a reasonable piece of software for the iPhone is quite high. This is not a platform on which to start students who are new to programming. One needs skills in design patterns, event handling, user-interface construction/APIs and OOP.
  • The Xcode IDE does require some level of formal 'tool training' to speed up the learning process (and, more importantly, to ensure that students do not feel dumb/lost -- quite important because the motivation to learn drops rather fast once that feeling of being lost sets in).
  • Designing each screen and the various iterations on paper is proving to be very helpful. I am not sufficiently familiar with the API and libraries, so I do not yet know what is available out of the box and what requires custom engineering. As part of this exercise, I am looking at different software designed and built for mobile devices and writing down how it may have been assembled -- mostly as an experiment to see if this helps me learn to develop for these devices faster. So far the results are promising; I'll now try this technique with others and refine it as needed.
  • I am planning to use a few systems as case studies and get students to decompose them in order to learn how they are built. Essentially a critical review process to understand the design patterns, navigation strategies, UX issues etc.
  • I am allocating quite a bit of time to mobile operating system related issues -- how processes/apps are scheduled and allocated resources, file I/O, network I/O, etc.
  • Data handling is slightly different, and things like SQLite need to be discussed within the context of mobile applications (a minimal sketch follows this list).
  • Mobile web applications (HTML5/CSS/JavaScript) -- mostly the considerations are to do with design rather than the actual technologies, especially touch-interface considerations.
  • These devices have very different usability issues associated with them. The simulator is just not enough.
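
To make the data-handling point a bit more concrete, here is a minimal sketch of local storage on Android using SQLiteOpenHelper. The database/table names and the single addNote() helper are invented for illustration; this is not taken from any actual course material.

```java
// Minimal sketch of local data storage on a mobile device (Android + SQLite).
// Database and table names are invented for illustration.
import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class NotesDbHelper extends SQLiteOpenHelper {
    public NotesDbHelper(Context context) {
        super(context, "notes.db", null, 1);  // name, cursor factory, schema version
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE notes (_id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // A real app would migrate data; a teaching example can simply rebuild.
        db.execSQL("DROP TABLE IF EXISTS notes");
        onCreate(db);
    }

    public long addNote(String body) {
        ContentValues values = new ContentValues();
        values.put("body", body);
        return getWritableDatabase().insert("notes", null, values);
    }
}
```

The teaching point is that the "database" here is just a file private to the application, and that schema versioning (onUpgrade) is something desktop-centric students rarely meet early.
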
Well, in a few months I have to make the harder choices -- identifying and defining a final set of learning outcomes that can be achieved in a single semester -- and then start building out the material.

-- rv

Monday, May 24, 2010

Coders at work...

I finally finished reading Coders at Work, a book that captures conversations with 15 computer scientists and programmers. In a nutshell -- if you enjoy programming, read this book; there is a lot of wisdom captured within these conversations. The downside is that almost all of these conversations start to sound similar by about the mid-way point.

As I was reading it, there was a consistent theme -- most of these people started their adventures in IT during their teenage years and slowly picked up a number of different skills over time. They essentially influenced/invented/shaped and evolved many key technologies slowly over time (JavaScript, Java, UNIX, LaTeX, to name a few). The really interesting part is that all 15 started their life in IT when it was a relatively young and comparatively simple field. They (like me) could actually learn and appreciate many of the foundations upon which modern-day software systems are built .. slowly and gradually.

Ok .. now to my point....

What is striking is that anyone new to IT today (i.e. the current generation of teenagers) will be unable to gain the same exposure to this field. In essence, specialisation is the price we pay for the ever-increasing complexity. Most senior software developers today will have grown with the technology -- learning new skills and techniques slowly as the field matured. But the next generation of senior developers will never have had the opportunity to get into a simpler field -- they are entering a fairly mature and certainly very complex field.

Personally, I do think that this generation will miss out on many of the joys and excitement of technology -- it is hard to get a thrill out of something that you have grown up with as a normal facet of life. My 6yo son got more excited about an old typewriter (stunned that you manually feed paper into it) than a new iPod.

All this raises some interesting questions (most of which I do not want to answer in this post -- feel free to leave a comment though):

Can we actually manage the complexity and pass on the knowledge to the next generation (which has not built it -- and hence is unlikely to have any emotional/personal attachment to it)? Can this technology then be maintained? There was a recent Slashdot article essentially stating that Linux is having some difficulty attracting younger talent. Is it really a problem?

The current crop of engineers builds technology assuming that others will have the same perspective/background and knowledge. Will the educators and the current generation (under 20) put in the 7-10 years of effort needed to learn the skills and knowledge? Unfortunately, as they learn, the state of the art keeps moving, making it harder and harder to catch up.

Will abstraction/componentization and APIs around these services solve the problem? It buys time, and there are trade-offs. There is a big problem with layers and layers of abstraction -- they leak. That is, if something breaks and you do not know how the abstraction works underneath, you may not be able to debug the source of the problem easily. The common counter-argument is that few people understand operating systems -- yet we can use them. This argument has some merit -- but most of the layers and abstractions that we use in software development have not had the same level of attention paid to them.

If you build an application using 20 different third-party components (some open source, some commercial) and two of them break intermittently, are you able to isolate the problem? Such problems confound people with serious experience; how will someone new to the field cope?

The current expectation is that we will weave software using hundreds of services and abstractions (some of which may be in the cloud). I know from experience as an academic that we are not even getting through the basics -- let alone educating an undergraduate in three years to face this type of situation.

I for one am looking forward to how the field evolves and deals with the issues that I raised here. I am deeply curious and keen to see how far we can push the boundaries of complexity before it starts to cause serious problems.

-- rv


Sunday, May 09, 2010

Was the stock market crash a conspiracy?

Last week the New York stock market fell by 1000 points in about 30 minutes, only to rebound right back up. A lot of words were written in the blogosphere. Many posts are essentially saying that it was a massive conspiracy, with some powerful group (typically the US Federal Reserve, secret society XYZ, the Bilderberg Group) intentionally causing the market to crash. The sillier explanation is that someone mistyped "B" for "M", causing billions of dollars' worth of trades.

My position is that this was more likely caused by systemic complexity and interconnections rather than by some all-seeing and all-knowing group causing the event. I think it was just a cascading ripple that went way too far. But what triggered it? The best explanation so far is that the trading computers did not come to a proper halt when the NYSE gave them the "time out" instruction. A synchronisation issue that should not have taken place -- and this specific error will not happen again, because a fix will be put in place fairly soon.

====
I also do not buy the fat-finger theory that someone typed "b" instead of "m". This explanation is the most ludicrous of all. The fact that major newspapers even bothered to report it tells you how desperate they are to create "news" and report "opinion" rather than think about the headline for a few minutes. I know a few people who work as traders. In a nutshell, no single trader is allowed to spend billions of dollars in a single transaction or a few transactions (and certainly not within 30 minutes). Billion-dollar trades do take place, but certainly not by a low-level trader making a mistake, and such large transactions are undertaken over slightly longer time frames (days to weeks).

Now to the conspiracy theory. This is also a bit silly. Unless some powerful group has full knowledge of all the rules (and the bugs in the systems) that different traders have set up in their trading systems, there is no realistic way to predict how a trigger will play out. We are talking about people who did not foresee the massive mortgage bubble, the technology stock bubble, the banking collapse, the recession, massive fraud at all levels, or their own inability to take over and manage a small country. There is no evidence that they have any such ability, nor do they have good insight or sufficient information about what is actually going on.

It is also highly likely that they (central banks and/or other powerful control groups) do not fully understand the network that is a modern economy. They have some abstract models that may be able to estimate the situation at a very high level. These models can certainly say "it will be hot in summer and cold in winter" -- but beyond that it is nothing more than luck. Personally, I am convinced that they do not really know what their actions will do -- they are guessing and hoping for the best (read Prof. Steve Keen's work for more on the silly models used by economists and bankers).

I also do not buy the position that many of these powerful and rich people are working closely together, allied for some common goal (as in the secret-society theory). There is no correlation between wealth/power and the ability to collaborate well. In fact, history suggests that wealthy and powerful people are more likely to have problems managing their egos; they over-estimate their abilities and tend to compete aggressively.

Reality is more likely to be of the following format -- a number of these wealthy groups that are loosely allied, constantly changing their alliances, making mistakes and attacking each other's empires with misguided intentions and incomplete information. In a nutshell, they are just not organised enough to prepare, plan and pull off a stunt like this.

-- rv



Thursday, April 22, 2010

Is web usage a distraction?

I recently started a study into how people use the web at work. It got picked up by PC World and a few other outlets today. It is the first in a series of articles.
The PC World article is at http://bit.ly/cFrx2A

Do upvote it on Reddit if you find it interesting: http://www.reddit.com/tb/btyw6

The report itself is currently behind an "email address" wall, since it was sponsored by a commercial organisation. The interesting aspect is, of course, the technique used to mine the data and how we arrived at the conclusions, most of which are not discussed in the PC World report. One of the outcomes of this venture should be a browser plug-in (in time) that will measure distraction and profile monthly usage, allowing people to adjust their behaviour if they choose to.

What is the difference when compared to the many other tools out there? Rather than just attempting to dump a table of top sites with charts and timers, I am attempting to first observe the user's behaviour (over some time), then build a pattern template and watch whether this behaviour pattern changes. The approach I am taking is to see if the behaviour is changing slowly over time -- and if there are sudden changes.
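
To sketch the idea (this is not the plug-in itself -- just an assumed baseline-plus-deviation check with arbitrary thresholds), something like the following could flag both a sudden spike and a slow drift in daily browsing time:

```java
// Sketch of the baseline-and-deviation idea: keep a slowly-updated baseline of daily
// browsing minutes, and flag days that deviate sharply (spike) or a baseline that has
// drifted well away from its starting point (slow change). Thresholds are arbitrary.
public class UsagePatternWatcher {
    private double baseline;                  // exponentially weighted average of daily minutes
    private final double initialBaseline;
    private static final double ALPHA = 0.1;         // how quickly the baseline adapts
    private static final double SPIKE_FACTOR = 2.0;  // "sudden change" threshold
    private static final double DRIFT_FACTOR = 1.5;  // "slow change" threshold

    public UsagePatternWatcher(double typicalDailyMinutes) {
        this.baseline = typicalDailyMinutes;
        this.initialBaseline = typicalDailyMinutes;
    }

    /** Feed in one day's total browsing minutes; returns a human-readable flag, or null. */
    public String recordDay(double minutes) {
        String flag = null;
        if (minutes > baseline * SPIKE_FACTOR) {
            flag = "Sudden spike: " + minutes + " min vs baseline " + Math.round(baseline);
        }
        baseline = ALPHA * minutes + (1 - ALPHA) * baseline;  // slow update
        if (baseline > initialBaseline * DRIFT_FACTOR) {
            flag = (flag == null ? "" : flag + "; ") + "Gradual upward drift in daily usage";
        }
        return flag;
    }
}
```

The real work is, of course, in choosing the baseline model and the thresholds from observed data rather than guessing them.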

Obviously, most web users may not directly care about this information, but employers are keen to find effective methods to integrate web use into the workplace. This is the area where I am hoping to make some impact. The web will be around -- it will be used at work -- so let's hope that sensible policies are developed for using the web, rather than strict regulations being put in based on a few exceptional events. To provide some balance, we need good data to inform policy.

Strange as it may be, my experience has been that policies are often developed to compensate for some exception (that is, policy is developed for that "one" person out of 1000 who spends a bit too much time on Facebook). If policies are not informed and regulated by good empirical data, over time they will become outdated and inflexible, and get to a point where they are silly and even dangerous.

The web will be around for a while -- it is going to be used at work wisely by the majority, and poorly by some. This is to be expected. I am hoping that we can improve the quality of the feedback loop so that people can make adjustments to their web usage themselves.

-- rv

Friday, April 09, 2010

On Integrated Development Environments...

I was asked an interesting question recently -- "What is (for me) the minimum expected functionality in an IDE?". This was part of a broader discussion about what functionality belongs in the "core" of a product.
----
Here is my answer -- but let me define a bare-bones IDE first: a program that allows the user to type code, build/run programs, jump to a line from compiler output if needed, and jump to a line from a stack trace (if one gets generated).

Now for the minimum set of features/functions (in no particular order):
  1. Syntax highlighting (bold keywords would be the minimum; colour highlighting preferred)
  2. Jump to a method body from a method call -- including methods in another class (F3 in Eclipse)
  3. Code completion
  4. Multiple tabs (not many -- 4-5 buffers is sufficient)
  5. Shortcuts for opening/closing files/tabs, compiling, running, navigation and code completion.
I have intentionally left out a whole bunch of things that many others would consider critical, so let me explain my rationale:

  • Project resource organisation (src, lib, dependencies etc.): I prefer to keep these in an external build script. As long as the IDE allows one to invoke this external build script via a shortcut, that is sufficient. Build scripts are also handy in large-team development.
  • Code folding: Never really used it in Java/C#, will not miss it if they yanked it out. I have used it a bit when digging around some large XML files (which I am certain was required due to some serious bad karma in my previous life -- I have learnt my lesson now, so all will be well by my next birth). Probably handy for HTML developers, but I can live without it.
  • Project file tree view: Handy, but the OS provides file explorers. When a good command interface is used properly, I can get to the code file pretty quickly (i.e. a Norton Commander-like UI -- muCommander these days).
  • SVN/Git integration: Happy to switch to another application that is specifically designed to do this well, rather than load a slow, buggy plug-in inside the IDE. It is nice to have an intelligent "diff" inside the IDE, but to me it is not a killer feature -- and I can use it quite effectively outside of the IDE.
  • Defect repository integration: Again, prefer to use an application that is specifically designed for this task.
  • Debugger: Necessary tool, but not "core". You see, I am an optimist (and far too arrogant to admit that I would ever need a debugger). Well they did invent printf so that developers can avoid using the debugger.
  • Models/UML/Program visualization: Models are better on a whiteboard. We can get UML reverse-engineered if pretty pictures are needed in a document (or) for more rigorous communication (i.e. forced on me). Program visualisation? Handy if I am new to a code base -- but they are not my type. Forward engineering from UML? Tried that relationship too; it did not work out, so we broke up. My brain seems to work better and faster in code than in UML.
As you have probably guessed by now .. my preference is to have a bunch of small applications that are built to increase productivity in a specific task. I do acknowledge that there are many tasks that developers do as part of their workflow. But the issue is how often we do those tasks. Compared to cutting code, the rest of the tasks, in general, take up a lot less time -- hence I prefer to keep them out of the IDE.

I am a part-time developer these days -- so the question is: am I missing something that full-time developers consider critical?

Saturday, April 03, 2010

Scheduling in software projects...

Next week, I am giving a guest lecture on "Scheduling in Software Projects" -- hence this blog post, reflecting on what I find hard when managing real-world projects (and even student projects).

I will start by stating the obvious -- "scheduling in software projects is a hard problem". Why hard? Simple ... almost all (simple) scheduling techniques are based on two weak assumptions:

1. We can estimate how long a task will take (minimum time -- maximum time).

2. These estimates are reasonably stable (say for at least 2-4 weeks -- this implies the task specification is sufficiently clear, and the developers have the knowledge+skill and technology to complete the task)

These assumptions are the crux of the problem because we cannot estimate well and task specifications are often fluid. But, there is some good news -- developers often can improve the accuracy of their estimate once they start working on a task (even when it changes a bit), and this estimate gets better the longer they work on a task uninterrupted.

Here is an analogy for why estimation in software development is hard:
"You are asked to translate a sci-fi fantasy book from French to English. But, the author of the French edition does not have a complete story nor has she fleshed out all of the characters fully. She has completed about 50% of the book in French and wants the English translation to start soon, so the publisher can release both books at the same time". Can we estimate how long it will take to complete the English edition? Well, how about if we got 5 English writers to work at the same time? Would you like the job of managing these 5 writers -- and produce a project plan against which you have to give a weekly status update?

The analogy is a tad crude -- but it is about as close to software development reality as it gets.
-----

There is another side to the scheduling story. The vast majority of software development work involves extending and maintaining a software system, which makes scheduling hard since we never know who will be needed for other, higher-priority tasks. What can cause a higher-priority task?

Well ... the short answer is: bugs! + meetings (sad, but true) + emotions (random interrupts that will pre-empt all other functions in the brain -- especially strong in meetings).

We cannot easily estimate how long it will take to fix a bug -- why?

1. Well, the developers have to replicate the bug and find out whether it is indeed a bug --> It takes some unknown quantum of time > 10 minutes (why 10? Glad you asked. Over the years, I found that it took me, at a minimum, 10 minutes to read a defect report -- complain about it being very poorly written -- replicate it -- and write up a brief note either acquiring it for investigation, assigning it or closing it).

2. If it is indeed a bug, identify the root cause. --> Will take over 10 minutes (why 10 again? Well... one has to check out the appropriate version from the repository, compile and pray that all goes well, reluctantly run it in the debugger, find the culprit and court-martial them, go through a mini localised-merge nightmare, check in the code with an appropriate comment, update the defect tracking tool, update the time tracking tool, check Facebook, write a comment on Reddit, read Twitter updates, and complain to the world on Twitter about the sad state of the codebase).

3. Integration + Regression testing. --> Pick a number > 1 (trust me -- it will be at least 1 minute).

In a perfect world, it will take a minimum of 21 minutes per defect (the maximum, or a realistic time to fix and evaluate ripple impacts, is completely unknown).

Bugs are relatively easy to work on ... but "new features" and "requested enhancements" are a completely different ball game when it comes to estimation. Why? Because we are now entering the world of "serial meetings" and "wonderful emotions". It is sufficient to say that even the most trivial enhancement is likely to translate into a time quantum > 1 person-hour per feature (really? A minimum of 1 hour? Well, the tasks will involve, at a minimum -- thinking/spec + analysis/high-level design + programming + unit test + merge + integrate + regression test + emails + release notes + discussions + arguing why the feature is a good idea + arguing why the feature is a dumb idea + complaining to the sales/marketing team that they should ask before selling a dumb idea + staring blankly at the walls).
----------

Alright .. so estimation is hard, which makes scheduling difficult. So, why bother? Why should one even attempt this? If you are still reading .. thank you for hanging in there. Here is the reason....

We create a plan in order to "prepare the development team" and only hopefully to follow it (i.e. sticking to the plan is a bonus and can happen if the stars are all aligned in the right quadrant of the sky). The entire exercise helps the team consider the issues involved and you are very likely to get the "minimum time it will take" reasonably correct. My experience is that the "minimum time" gives the clients a sufficiently large heart condition.

I find planning exercises useful if the team collaboratively develops the plan and reflects on it regularly (every 3-4 weeks works well in my experience). It improves productivity (indirectly), helps team communication (at least you know what is going on), allows the team to get a big-picture overview and provides some sense of the business drivers/pressures.

However, the fastest way to kill morale and productivity in the development team is to actually force them to stick to some "schedule" that was developed early in the project life cycle by the "all seeing management".



Saturday, March 27, 2010

Shaving blades....

This post is way off topic for me and certainly about something I do not look forward to.

I saw an article (http://consumerist.com/2010/03/make-your-disposable-razor-blade-last-for-20-months.html) that claims to make a disposable razor blade last 20 months.

If you have tried something like it -- please do leave a comment. I will attempt it over this month anyway, but currently I am very sceptical about getting 20 months out of a razor.

The real question I have is why techniques such as these are not passed down through the generations or via popular culture. Looking at the economic state that the US and much of the EU have ended up in, it will not be long before we see these techniques pushed by even the mainstream press, in the next big TV show most likely called "The Bigger Miser", "Master Home Garden", etc.

I would love to know of well-tested and viable techniques like this, as they would allow me to reduce the amount of "stuff" that we buy and eventually throw away.

Warning: The article contains a YouTube video of a guy demonstrating the technique. He is no model and sadly does not wear a shirt (he should) -- it will look a tad strange if you view this video in a cubicle farm (and is certain to attract the attention of nearby farm residents). Horrible sound as well.

-- rv

Friday, March 12, 2010

Project planning - Problem framing approach

One of the components of project management is planning a project. However, there is a lot more to this than meets the eye. Further, the tools that are widely used do not make it easy to plan because of the way they are designed. But, by using a slightly different frame of reference and understanding these limitations, I believe that we can plan projects a bit better.

Before I get to the gist of the message, I want to define the vocabulary used.

Project: Has an objective, a clear start date and a specific end date. If these are missing, a different term may be more suitable (undertaking or a venture come to mind).

Now to the plan and where many project management tools struggle a bit.

A plan has the following core components:

  1. A break-down of the work that needs to be completed (often can be determined reasonably well for the short-term, but gets harder as we move into the future)
  2. Resources that will undertake the work (Can be allocated with some confidence at the 2-4 weeks scale, but harder beyond that)
  3. The order in which work will take place -- a schedule of sorts with a time-line

A simpler way to put it: what we want to get done, who will do it, and how/when they will go about it should all be apparent in a proper plan.

Now to the really interesting part -- from a "problem framing" perspective, each of these components requires a very different thinking model, and different skills to solve the problem as well.

Work breakdown is a 'decomposition problem'. We need to consider the level of detail/abstraction, but it is generally a good idea to have work expressed and communicated as a set of outcomes rather than prescribed granular tasks. Outcomes make it easier to check whether you have actually completed the task and give the worker a lot more autonomy in how to execute.

Allocating resources is, well ... an 'optimization problem'. We have a fixed pool of resources with certain skills and knowledge, and we need to allocate these for the most optimal outcome. A first pass of this can be done without taking the time-line into consideration. Allocating resources without the time constraints seems odd initially, but it is a proven good practice because you are not pre-emptively thinking too far ahead.
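
As a toy illustration of that first pass (skills only, no timeline), here is a sketch with invented task/skill names; a real allocation would weigh far more factors than skill match and current load:

```java
// Toy sketch of a first-pass resource allocation that ignores the timeline:
// match each piece of work to a person whose skills cover it and who has the
// least work assigned so far. Names and the skill model are invented.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class FirstPassAllocator {

    /** workItems: outcome -> required skill; people: name -> skills they have. */
    public static Map<String, List<String>> allocate(Map<String, String> workItems,
                                                     Map<String, Set<String>> people) {
        Map<String, List<String>> allocation = new HashMap<String, List<String>>();
        for (String person : people.keySet()) {
            allocation.put(person, new ArrayList<String>());
        }
        for (Map.Entry<String, String> item : workItems.entrySet()) {
            String bestPerson = null;
            for (Map.Entry<String, Set<String>> person : people.entrySet()) {
                if (!person.getValue().contains(item.getValue())) {
                    continue;  // this person lacks the required skill
                }
                // Prefer the least-loaded person among those who can do the work.
                if (bestPerson == null
                        || allocation.get(person.getKey()).size() < allocation.get(bestPerson).size()) {
                    bestPerson = person.getKey();
                }
            }
            if (bestPerson != null) {
                allocation.get(bestPerson).add(item.getKey());
            }
        }
        return allocation;
    }
}
```

Even this crude version makes the point that allocation can be reasoned about before any dates are attached.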

Scheduling is yet another 'optimization problem', only now you have to bring all aspects into the equation, especially time -- specifically, the overall strategy, the actual work, people and time/cost.

Planning is a complex problem-solving activity, with some distinct problem types, each of which requires a slightly different hat and frame of thinking.

So far so good. Now for the mess-up by the tool vendors. Traditional project management tools (those that follow and mimic the M$ Project paradigm) provide a user-interface model that requires the user to think about all of the above activities pretty much at the same time.

So, we create a task, allocate resources, set start/end dates and include dependencies. Good planners innately understand the above process and tend not to get too carried away by the tool's focus, but this learning is gained over time. However, by understanding that a different frame of reference is needed, planners can overcome the way these tools focus our minds.

I am particularly impressed by David Allen's methods in GTD (Getting Things Done), as he takes a similar perspective and gets people to focus on tasks from a specific context.

-- R. Vasa

Tuesday, March 09, 2010

Economic growth - what happens when it stops?

I have been studying the 'nature of growth', mainly in software systems but also in the general economy, over the last 4-5 years. My interest in how economies grow has been more of a side-effect, since I wanted to learn how growth is measured and understood in other fields.

[[Warning -- long post]]

This blog entry is a personal reflection on what might happen if there is no economic growth. But what is economic growth? Simply put -- economies grow when more resources flow through the system. That is, we use more energy/people/natural resources and transform them. To keep growing, we need to consume more and more energy.

Now for the PREDICAMENT -- our current economic system relies on 'fossil fuels' for almost all of its energy requirements, and we need to keep getting "more and more" energy to maintain our growth. However, we are running out of cheap and easily available energy sources (as can be seen in the relatively high oil price, even in a global recession). We also currently do not have a way around this predicament -- a detailed explanation of this bold claim would take far too many pages, so I will leave it for now.

Given that we are unable to increase the energy supply, the economic system is likely to stop growing as well. But what will happen? Will the stock market collapse? Will we all slowly starve to death? Will it be a chaotic society? The short answers: ... the stock market as we currently know it will end (slowly) -- we are unlikely to starve to death, but few people will be considered obese. Economies will change (slowly) to be highly localized, but history (and my own experience) suggests that most humans will live quite well with each other -- that is, we are not likely to start killing each other at the local level (global wars are a different matter).

The real changes will, however, be in how we use material resources and in the focus of work + life. The focus will shift to "quality", rather than having many, many things. We will have less, but "better quality" stuff. Companies that manufacture "cheap trinkets" will go bust -- even better, no sane person will start such a company in the near future.

In terms of work + life -- this is where the biggest changes are likely to be. If companies have to produce "good" quality stuff that lasts a long, long time, then you will be expected to produce really high-quality stuff using the least amount of resources and energy. Efficiency and quality will be valued. Things like "first to market" and "growth of customer base" will be irrelevant. The aim of a company will be to maintain a stable equilibrium. There is going to be some growth at times -- but overall, the aim is to be at a stable equilibrium.

In this "stable" economic system, there are going to be a number of benefits:

  • You can have a very fulfilling work life, since the focus shifts to actually building "better" products, "caring" for your customers and producing stuff that adds value. The odd aspect is that you do this with the full knowledge that it will make no difference to your end pay.
  • You can develop skills slowly and carefully over a lifetime, rather than live in the "fad" of the day. Systematic cultivation of skill will be useful and rewarded by society (in terms of respect -- rather than material reward).
  • In equilibrium economies -- skills are valued. In fact, the only way to survive is to gain a good level of skill. The education system will adapt to provide these skills, the work culture will adapt to ensure that people have the time and space to grow and refine these skills.
  • There will still be greed and the profit motive -- but, it will be at a different scale. Companies cannot concentrate and gain a large amount of wealth easily.

The downside? Well .. the process of "change" is going to be painful, slow, erratic, messy and highly stressful because of the uncertainty of how it will all play out. Of course, the govt. and those in power will try to stop it -- control it -- slow it -- only to prolong the uncertainty.

Will there be rich people? Absolutely, but they will only be 0.01% of the population; the rest will have more or less the same amount of stuff. In this world, the rich will have to live in palaces and castles. If they are smart, they will make sure that their lifestyle is not public knowledge; otherwise they will have to "re-educate" people to accept their role as a "divine" appointment. Again, this is not likely to happen overnight.

What will happen to the current rich people? Well ... without strong and well-organized governments, the wealthy just cannot protect most of their assets. In a world starved of cheap energy, large and complex entities like big trans-national corporations and large governments are the first to go (slowly -- nothing dies in an instant like in the movies). Employees of a company do not swear personal allegiance to work there and protect the boss; the day the pay checks stop is the day the company collapses.

What type of management model will survive and thrive? Answer: Mafia-like organizations -- where the big boss has full control because all key members are family, and those who are not have a mental attitude similar to the clan's. If you do not have workers that will stick with you in hard times (i.e. work for nothing more than food/shelter for months at a time), then you are not likely to be able to build and hold anything substantial.

In this world of expensive energy, you need people with good leadership skills to control people -- not administrative managers (who cannot inspire most workers). The alternative is to embed a mythology into the culture and society whereby people are inclined to accept certain families as 'divinely' appointed leaders. This is a possibility, but this level of change requires a good 70-80 years in a well-established democracy, since you have to brain-wash and re-educate everyone from birth.

Though I am convinced that the world will change, I am used to the current way of things. So the change, though anticipated, is likely to be hard for me. However, my kids will probably adapt far better, and the generation after that will most likely thrive in it.

-- R. Vasa

Monday, March 08, 2010

Programming...

I just spent some time reading two interesting articles on programming -- both of them by Mike Taylor.

The first one is essentially an opinion on how programming has changed to become a task that mostly involves assembling different components/libraries, rather than (complex) algorithm development (+ testing/debugging). It explains why this copy-paste method of building software ends up being a horrible experience. I agree with some of these points -- though life without libraries would be equally horrible.


Mike has been kind enough to compile the key comments, and his response to them is posted at:


This second article was far more informative ... and enjoyable. I hope more authors who get long and detailed comments do this. It would be great if someone "thoughtfully" summarised the key points in the comment flood on good Slashdot/Reddit articles.

The key points that I tend to agree with:

1. Frameworks can be dangerous beasts that over-promise and under-deliver at a great cost to flexibility.

2. The explosion of libraries and technologies to be used even on simple applications -- best summed up by the comment at http://news.ycombinator.com/item?id=1166107 (I may be getting old and my rusty brain is no longer able to cope as well).

---
R. Vasa

Saturday, February 20, 2010

Is the web ruining everything? I do not think so.


"When printed books first became popular, thanks to Gutenberg's press, you saw this great expansion of eloquence and experimentation.

All of which came out of the fact that here was a technology that encouraged people to read deeply, with great concentration and focus.

And as we move to the new technology of the screen ... it has a very different effect, an almost opposite effect, and you will see a retreat from the sophistication and eloquence that characterized the printed page."

-- Nicholas Carr http://www.theatlantic.com/doc/200807/google (Is Google Making us Stupid?).

Well written quote -- almost convincing. But wrong!

My own feeling is that there is a clear and perceptible change in how we access, use and process information. Over time, good communicators will learn to use (and abuse) the new medium of the web to ensure that the message gets across.

Sadly, I keep hearing far too many people rant that the younger generation is being ruined by the web, multi-tasking, mobile phones, [[insert random item here]] -- especially in the ivory tower where I currently work.

If anything, we are still evolving and learning to make effective use of the web. There is still a lot of experimentation taking place -- new ideas are being generated and tried -- some live, some die. Over time, some level of stability will emerge and we will learn to make effective use of the technology.

The web and Google are not going to make us stupid. Humankind is capable of stupidity without any external input. The real stupidity is in the assumption that the web generation is going to be permanently distracted and ruined by the web. Making statements like "the web is going to make us stupid" or "it is an ineffective communication medium" is like watching a baby learning to walk and concluding that the baby has no hope of ever running.

It is far more likely that the generation that grew up with the web and mobile technology will adapt to the new environment and figure out the most efficient and effective way to use it.