Wednesday, October 29, 2003

Cover Pages: JSR 168 Portlet API Specification 1.0 Released for Public Review.

The above is a very good introduction to portals, portlets and the JSR 168 spec. Among other things, I came across a good definition of a container in general: it provides an entity with a runtime environment, manages the entity's lifecycle, and performs basic support functions like deployment, state management, persistence and so on. JSR 168 standardizes how components of portal servers are developed, so one can now develop one set of 168-compliant portlets and host them on any compliant portal server.
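To make that concrete, here is a minimal sketch of what a 168-compliant portlet can look like (the class name and markup are my own; the javax.portlet API is the one the spec defines). Notice that the portlet container drives everything: it instantiates the class, manages its lifecycle, and calls doView() when the portal page is rendered in VIEW mode.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    // A minimal JSR 168 portlet: the container, not the portlet, decides
    // when this code runs and which fragment of the portal page it fills.
    public class HelloPortlet extends GenericPortlet {
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<p>Hello from a 168-compliant portlet.</p>");
        }
    }

Packaged in a WAR along with the portlet.xml deployment descriptor the spec defines, the same portlet should deploy on any compliant portal server.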

XForms - the next generation of Web forms

What is XForms? Another means of separating data from presentation. Traditional form syntax deals with both data handling and presentation, which creates complexity and limits its use. XForms separates these two functions into the XForms Model and the XForms User Interface. In fact, the latter can be replaced with any other form of UI, such as XHTML, a printer, a fax machine, or a thin client. The data that the forms accept is called XML instance data.

XForms also provides interesting features like the ability to suspend and resume the filling-in of a form.
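As a rough sketch of what that separation looks like in markup (the payment fields are my own invented example; the element names follow XForms 1.0, with the xforms prefix assumed to be bound to the XForms namespace), the model holds the instance data and submission details, while the UI controls merely bind to them by reference:

    <xforms:model>
      <xforms:instance>
        <payment xmlns="">
          <amount/>
          <card/>
        </payment>
      </xforms:instance>
      <xforms:submission id="submit-payment"
                         action="http://example.com/submit" method="post"/>
    </xforms:model>

    <!-- elsewhere in the document, the user interface, bound by ref -->
    <xforms:input ref="amount">
      <xforms:label>Amount</xforms:label>
    </xforms:input>
    <xforms:input ref="card">
      <xforms:label>Card number</xforms:label>
    </xforms:input>
    <xforms:submit submission="submit-payment">
      <xforms:label>Pay</xforms:label>
    </xforms:submit>

Since the controls only reference the instance data, the same model could just as well be rendered by a voice browser, a thin client, or some other UI.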

Tuesday, October 21, 2003

BBC News | SCI/TECH | Mobiles go clockwork

This story on the BBC website describes a manually operated device for charging mobile phones: BBC News | SCI/TECH | Mobiles go clockwork. The claim is that this makes mobiles "truly mobile", since one is no longer constrained by the need to charge from an electrical outlet. It comes from a company called Freeplay (www.freeplay.net), which also gave us the wind-up radio. The website describes their competence as follows:

"wind-up, solar and rechargeable power into unique, portable, consumer electronic products replacing conventional battery-powered systems that are wasteful and costly."

I have two interesting perspectives on this:

1. First, such self-sufficient technologies are very attractive for developing countries like India, where the rural populace doesn't mind cranking a lever for half an hour to get power for the night. The vision of ubiquitous transmitted power is far from being realized; there are semi-urban areas where such power is available for only a few hours a day in summer, and the rural interior is undoubtedly far worse off. So this is one need. The second need is for small-scale "battery" power, for devices like cellphones, battery-operated radios, and so on. Realizing the benefits of powered electronic appliances at a small cost and without a power infrastructure is a need that can be fulfilled using this technology.

2. This form of power is a fundamental paradigm shift. It is not very apparent on the face of it, but when you think about it, it is a thought-provoking change. The change is this: the artificial energy infrastructure we have developed over the last 100 years is a centralized one. It largely uses centralized generation units (power plants), and then an enormously large Transmission and Distribution (T&D) network to deliver that power to consumption centers. The idea is that centralized power generation is much cheaper, even with the additional cost of T&D. We could theoretically have small, localised generation units as well, but they would be much less efficient and would cost more per unit of power generated. This is true of thermal, hydro and nuclear power. At the same time, business needs have necessitated some amount of distributed generation too, through diesel generators or micro-hydel projects. But these are most often the exception, developed in situations where the centralized system doesn't work well or needs a backup.

Now compare this predominantly centralized energy generation system developed by us with the one built by nature. Yes, there are energy consumption units all around us, and they have been here long before we built our first power drives. The most conspicuous of them walk on two legs. Humans are a consumption center, and they generate their usable energy on their own (actually, generate is an inaccurate term; we merely transform energy, but then that is all our power infrastructure does too). We have the equivalent of batteries in the form of our digestive and respiratory systems. And there are hundreds of such battery systems in the form of the various plants and animals in the ecosystem. All these organisms consume energy, but they don't have to plug into a socket; they are truly mobile in that sense.

This distributed form of power generation has an advantage, one that in the end might prove to be the panacea for our power infrastructure problems: the problems we face today, and the much more severe ones we are bound to face in the future. That advantage is robustness. We run a tremendous risk of failure when we centralize our power generation capability. All that needs to be disrupted is the centralized plant, and there is widespread chaos. We are literally putting all our eggs, vegetables and groceries in one giant basket, and hoping it will hold. The cause of such a disruption could be natural (a calamity, an earthquake, a meteorite) or man-made (sabotage, terrorism), but the result is the same. One broken 11 kV transmission line would leave thousands, maybe hundreds of thousands, without the lifeline called electricity. This nightmarish scenario has been previewed in recent times in developed countries like the UK and the US, as brownouts and blackouts of large proportions.

Distributing our energy generation capability makes it more reliable. In the ideal case of "every system for itself", reliability is at its peak: one failure only means one affected system. And such a mechanism will eventually have to be developed, because in the long run calamities do happen; if we are to survive the evolutionary process, which tests all systems against all kinds of adverse scenarios, we have to place a premium on reliability. The trade-off against reliability in this case is cost. We don't make a battery for every consumption unit because it is a hundred times more expensive to do so. However, as we have found in the past, cost is something that a widely adopted technology overcomes. The cost at which we create microchips today is a thousandth of what it was when the technology was nascent. The cost of accessing the internet ten years ago was many times what it is today. Mobile phone connectivity today costs a fraction of what it did a few years ago. We will finally have to make the switch, and we will overcome the cost factor as well when we have to.

What is a realistic scenario? People will continue with the centralized system because it works fine for now. However, they can be sensitized to the utility of decentralized power generation, and partial adoption can happen. It still depends on business rules, not technology rules, so people will have to find viable business models, as Freeplay has done, to accelerate this adoption. In the long run, more and more decentralization will happen, until eventually every system produces for itself.

Wednesday, October 15, 2003

johnhagel.com: Where Business meets IT

In this interesting commentary, johnhagel.com: Where Business meets IT, the 16-year McKinsey veteran and now Web Services (WS) strategy expert John Hagel explains how:

1. There are two paths through which an enterprise adopts web services today. One is the IT-department-driven path, typified by prototyping, experimenting, and short-term projects with tangible benefits to show. The second is the path driven by non-techie line executives who initiate the effort with business objectives in mind. It is this second category of efforts that has resulted in most production-level adoptions of the technology; the technology-driven initiatives generally result only in an affirmation of the business value of WS.

2. There is an inherent trade-off between going for a quick, self-contained WS project that gives immediate benefits, and a long-term, well-thought-out set of initiatives based on a solid IT architecture. If a company focuses exclusively on starting island projects in an ad hoc manner, it will face the problem of too much heterogeneity in its IT infrastructure and systems. It might face serious bottlenecks at a later stage, and in the long term end up losing out on the business value of WS. If, on the other hand, it tries to develop a full-scale IT blueprint for the next 5-10 years, and then drive all IT system development in a top-down manner from that, there is the danger of over-specifying and not being able to achieve tangible results in a reasonable time frame. There is also the danger of the IT architecture becoming too inflexible, making later corrections impossible or very expensive.

3. So, Hagel recommends, we should follow what he calls the Fast Approach, and do a bit of both. Have a high-level architecture that keeps evolving, and keep the waves of initiatives going meanwhile, resulting in real benefits.

Well, this points out a valid dilemma, of course. I'll have to think about it and form a point of view.

Tuesday, October 14, 2003

Philip Greenspun's Weblog:

This is another interesting blog by Philip Greenspun. Philip Greenspun (I write the name again so I remember it) teaches web applications at MIT, and founded a company called ArsDigita which grew pretty big (I'm not sure of the figure; USD 20 million is the number mentioned). Your typical IT guru.

The blog is interesting. Will follow up on it.