IT today - technical, social and organizational aspects

The 6th IBM day at the Computer Science and Media faculty at HDM Stuttgart brought four interesting talks by managing consultants and senior architects of IBM Global Business Services. Looking back at the talks I noticed that IT has never before been so complex with respect to the technical, social and organizational environment in which it is applied. I blame this on the fact that IT has never before been so ubiquitous and mission critical, penetrating the lives of people at work and at home so deeply. And IT consultants are certainly at the front of this development as the bringers of good or bad news to people.

The technical side of IT - and therefore also of computer science - shows its limitations rather drastically: The introduction of new projects fails due to outright rejection by employees (or silent sabotage). Companies miss critical adjustments to their infrastructure. Some oscillate helplessly between outsourcing and insourcing of their IT. Companies want to install more social software and have to learn that many departments are not using the services at all.

The IT specialists might even have some suspicions about the reason for the failures: social or organizational conflicts. But they lack the psychological or sociological know-how to tackle these problems. Many specialists working in IT have never been educated in the areas they are now working in. But recently more and more seem to recognize the deficiencies and start to learn from psychologists or sociologists. The first and second talk, on change management and realistic architectures, are a good example of this trend.

IT related problems show the same social qualities as other - admittedly more critical - phenomena like global warming, the oil crisis and perhaps the current financial crisis as well: nobody wants to deal with them voluntarily. Sometimes everybody seems to wait for everybody else - an effect which gets reinforced by knowing about the others (see Shirky, Here Comes Everybody). A good example of this is IPv6 - the necessary, unavoidable and at the same time largely ignored new base of the internet.

IT today seems to have two heads: one head talks about the possibilities of new digital and mobile technologies. iPhones, MP3 players and other gadgets are materialized evidence of this side of IT. The other head talks about total control via IT supported processes. The world as a complex program, made possible by the unmatched planning and control powers of IT. But who controls those powers and their consequences? ITIL - the IT Infrastructure Library, a set of best practices for running a large IT infrastructure - is an excellent example of how deeply IT penetrates our business life today, with both its positive and negative consequences.

I'd like to structure this article as follows:

Change Management - social manipulation or necessary risk management strategy?
Enterprise Architecture - between theory and tons of constraints
IPv6 - preaching the gospel like global warming, oil crisis and ecological disaster
ITIL - Maxwell's demon for enterprise IT?

Change Management - social manipulation or necessary risk management strategy?

A few basic statements about change are in order: Change is wanted and hated at the same time. People seem to have a tendency to abhor change initially. I suspect that this behavior is also tightly related to the company structure, especially to how strongly hierarchies are developed. People working in strictly hierarchical organizations have every reason to be careful about things pushed at them from above.

Acceptance problems need to be expected and dealt with especially in IT projects, as they deeply penetrate our work and how it is done. IT can have a massive influence on how we perform our work and how we feel about it every day. Trying to establish something new without treating the management of change as a separate and important issue is very dangerous and can potentially kill your project.

Resistance to change is not only an end-user quality. Do not overestimate the willingness to accept change in your own development team. In practice IT specialists tend to be much less flexible and open-minded once their own specialties become a target for change...

It would be very wrong to see change management simply as clever tactics to convince people of the good side of the proposed change - a way to nudge them towards the paradise brought by the new IT processes you are going to establish. IT processes are materialized forms of social and economic control and as such represent e.g. the economic power of capital over employees. Change through IT processes is not established to improve the life of employees, it is established to improve the way capital gets proliferated or to improve control over society in general (surveillance by the state etc.).

The management of change inevitably contains both: nudging people to overcome their initial resistance to necessary change (e.g. by challenging existing but unproductive use of programming languages, tools etc.), and social engineering tactics - let's call it social manipulation - to enforce the interests of top management and capital in further rationalization of processes, at the price of fewer degrees of freedom for the employees. And if you are in charge of a project and start to manage change you will inevitably be in a very difficult position with respect to the people who will see their ways changed by your actions.

So how is change management done successfully? (By now you understand that "successfully" depends on your specific point of view and the interests involved.) Communication is surely rather important. There are many ways to inform your employees about changes:

The newspaper or TV (The xyz department has been sold to..)
The big announcement to your employees herded together in some large auditorium ("dear fellow employees, as the CEO of ...")
Via e-mail (with reorg-charts etc. attached)
Via one-to-one talks or talks to teams

Obviously there is already a big difference in the way change is communicated, and the rule is simple: the more personal the better. But the channel only sets the tone, it does not change the goal.

A typical IT problem with related change management issues is the rollout of new IT applications. New travel expense reporting, new ways to report time or overtime etc. Today companies operate globally and tend to roll out solutions on a worldwide scale. Frequently a "one size fits all" pattern is used across vastly different areas.

Change management in this case starts with carefully listening to the employees: there might be a reason for resistance and it might not be only the human tendency to resist change initially. There might be real reasons (technological, organizational, social) behind the resistance, and listening closely can save you tons of money and trouble later.

Watch the FULL effects of a proposed change. Markus Samarajiwa brought a nice example in his talk. Employees were supposed to be equipped with mobile phones. The alternatives were an Apple iPhone or an old-style device with a big antenna and lots of weight. The first-order economic view concentrates on price and cost. The first-order technical view concentrates on fitness for the intended purpose. Bad change management stops here. Good change management takes into account the first-order social view as well: what kind of message is delivered with the decision? About company hierarchies? About the value of departments and employees? Second-order views on economics, technological fitness and social consequences might then come to a very different result: handing out iPhones might be more costly and the devices less robust. But the lack of technical robustness might be compensated by the care people take with their beloved gadgets, and motivation might be strengthened considerably.

The same reasoning applies to the different channels of communication presented above: newspaper, town hall, e-mail and personal talk all have different economic, organizational and last but not least social qualities attached to them. Sometimes the social message created by the use of a certain channel completely dominates the content of the communication or can give it a certain reinforcement or twist. But again, it does not change the intended goal.

Finally, be open with the subjects of your change management strategy. There is no denying that change management is about control and that the decisions by top management will be enforced and that you are part of this.

Enterprise Architecture - between theory and tons of constraints

There used to be a time when the specific technology used in a project was determined by IT reasons alone. E.g. the interest of developers in using the latest technology. Or by enforcing certain IT standards on a company-wide level and at any cost. Or the familiarity of developers with certain tools or languages. Some people familiar with consulting companies claimed that wherever a customer's phone call ends up within the company, he will get a different technical solution: an application based on stored procedures within a DB, a C++ fat client, a downloadable Java application, a client/server solution with a thin browser client only, a set of Perl scripts, an Open Source tool, and so on.

All these reasons for a certain solution have two things in common: the first is that they are completely driven by IT interests. They have nothing to do with the situation at the customer site. The second is that they are arbitrary. Any one might do - or none.

Over the years we have learned something here and successful consulting work better respects the lessons learned. Peter Kutschera mentioned the core lesson in his talk on practical enterprise architecture: Architecture is never right or wrong, it is only applicable and useful or not.

Always make sure that the application and tools fit the problem. Don't use full-blown J2EE and EJB when a small, isolated Ruby on Rails application will do.
Remember that it takes years before a technology matures into supporting scalable and available solutions.
Be wary of "best practices" of a certain technology announced seconds after the technology itself has been published.
The more your solution needs to scale, the better a conservative approach towards the technology used might be, but:
Always verify your architecture first with tests for scalability, availability and performance (a minimal sketch of such a check follows this list).
Make sure your architecture complies with the situation at the customer. The customer will have standards, rules etc.
Make sure your architecture complies with the skills available at the customer site. Do not voluntarily disdain existing knowledge because:
Remember that you can use almost any technology to achieve a certain solution.
Use Open Source but make sure the customer understands and agrees. Get backing for it within your company.
Justify your architectural decisions and have them discussed and reviewed.
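
To make the verification point a bit more concrete, here is a minimal sketch (my own, not from the talk) of the kind of smoke test meant above: fire a batch of concurrent requests at a prototype endpoint and look at the latency percentiles before committing to the architecture. The URL and the numbers are placeholders.

    # Minimal latency smoke test against a prototype endpoint (URL is a placeholder).
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"   # hypothetical prototype service
    REQUESTS = 200
    CONCURRENCY = 20

    def timed_request(_: int) -> float:
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=5) as response:
            response.read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))

    print(f"median: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95   : {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
    print(f"max   : {latencies[-1] * 1000:.1f} ms")

Such a test says nothing final about scalability, but it is cheap enough to run against every architectural candidate before the real constraints are even discussed.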

It boils down to the fact that architectural decisions are far from being only technically motivated. Don't fall into the technology trap and use common sense instead.

IPv6 - preaching the gospel like global warming, oil crisis and ecological disaster

I have to admit that the talk by Peter Demharter on IPv6 contained quite a few surprises for me. The biggest one was certainly to learn that IPv6 will seriously affect us software people. It is far from being transparent. It is far from being only something the router people need to worry about. It is far from being contained in hardware boxes. The speaker made it clear that because IPv6 is also an end-to-end communication technology, it will require changes to APIs on several layers.
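
To illustrate what this means in practice, here is a minimal sketch (my own example, not from the talk) of address-family agnostic client code. Code that hard-codes AF_INET and 32-bit dotted-quad addresses has to be rewritten along these lines before it works on IPv6 or dual-stack hosts; the host name below is just a placeholder.

    # Address-family agnostic connect: works for IPv4, IPv6 or dual-stack hosts.
    import socket

    def connect(host: str, port: int) -> socket.socket:
        last_error = None
        # getaddrinfo() returns both A (IPv4) and AAAA (IPv6) results.
        for family, socktype, proto, _name, addr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
            except OSError as err:
                last_error = err
                continue
            try:
                sock.connect(addr)   # addr is a 2-tuple for IPv4, a 4-tuple for IPv6
                return sock
            except OSError as err:
                sock.close()
                last_error = err
        raise last_error or OSError("no usable address")

    # connect("example.com", 80) keeps working whether the host resolves to v4, v6 or both.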

What is IPv6? To me it was simply one thing: a bigger range of IP numbers available. And a vague memory of tunnels being mentioned which would connect and convert between the different worlds. No problem, therefore.

It looks like the truth is quite different. The numbers are bigger, yes. But IPv6 brings a truckload of new features as well, including end-to-end security, mobile support and many more.

Why do we need IPv6 at all? The answer was another surprise for me. I did remember that one day there will be no more available IP addresses from IPv4, but I did not know that e.g. tools like iTunes, Google Earth/Maps etc. use hundreds of IP addresses between client and servers. But that is not all: the internet of things (sensors, car technology, home equipment etc.) will need many more addresses, because every device that needs to be addressed needs an IP address of course. Those demands completely overwhelm IPv4 and make an easy and convincing case for moving toward IPv6.
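
For a sense of proportion, here is a back-of-envelope comparison (my own numbers, not from the talk) using nothing but the address lengths: 32 bits for IPv4 versus 128 bits for IPv6.

    # Rough size comparison of the two address spaces.
    import ipaddress

    ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses      # 2**32
    ipv6_total = ipaddress.ip_network("::/0").num_addresses           # 2**128
    one_lan    = ipaddress.ip_network("2001:db8::/64").num_addresses  # a single IPv6 subnet

    print(f"IPv4 total : {ipv4_total:,}")     # 4,294,967,296
    print(f"IPv6 total : {ipv6_total:.2e}")   # about 3.40e+38
    print(f"One /64 LAN: {one_lan:.2e}")      # about 1.84e+19 - more than all of IPv4

A single /64 subnet already contains more addresses than the entire IPv4 internet, which is why the internet of things scenario is not an addressing problem for IPv6.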

Then came the next surprise: Acceptance of IPv6 is very slow. The reasons for the lack of acceptance lie partly in the differing levels of need (or pain) regarding IP addresses. Some countries still have plenty of addresses available (especially the US). Some will have problems in the coming years (Europe e.g.) and some have problems right now (Asia). Asia was late with the deployment of IP technology and did not receive large network address blocks, and at the same time Asians like to use game software, consoles etc. which use IP addresses excessively.

Another reason for the slow acceptance might be the overload of features and the many consequences these have for existing hardware and software. Many providers, hardware makers and large corporations shy away from the costs involved in a conversion - or rather, the addition of an IPv6 stack. The fact that many IPv6 stacks seem to be incompatible with most firewall filters does not help here (my Linux reports during boot that iptables does not work with IPv6 yet).

So the situation is not much different from others. Like the wait for global warming to turn into the predicted disaster, the oil reserves finally to run dry or the financial blow-up to happen: it seems to be an economic disadvantage to make the first move. Or the other way round: it is unclear what kind of advantage a head start with IPv6 could bring. Adding an additional IPv6 stack is not really necessary when you have a working IPv4 stack. It might only cause severe security problems (see the firewall incompatibility above).

The first very critical moment for IPv4 will come once Asia - for lack of IPv4 addresses - starts to deploy pure IPv6 networks. Those networks will be unable to interoperate with legacy IPv4 networks because they have no dual stack. They are just IPv6. It will be interesting to see whether this will create enough economic pressure to make more players move towards IPv6. Currently it looks like the proponents of IPv6 hope for political pressure towards the conversion.

ITIL - Maxwell's demon for enterprise IT?

Ever since I started working in the IT industry I have seen the trouble business seems to have with IT, whether its own or external IT services. IT seemed to be unable to provide the services or products requested and/or exceeded cost limits. In general IT looked like a rather unreliable and independently acting agent with a mind of its own. And this was probably the case until the end of the last century - a period where IT departments seemed to be able to define their own goals independently of the business, supported by independent budgets.

This has changed dramatically, symbolized by CIOs being demoted and IT being turned into a dependent service organization. Process methodologies like ITIL have played a major role in this. It has been fascinating to watch the spread of process methodologies within large corporations during the last 8 years. Of course process methodologies are not restricted to IT at all. All parts of a corporation have been affected by process approaches. As I am only familiar with IT I will concentrate on the effects of such a process technology in IT.

To get a better handle on the problem I am using concepts from James R. Beniger. In his book "The Control Revolution - Technological and Economic Origins of the Information Society" he describes information processing and the modern information society as the result of a control crisis brought on by industrialization. Control, expressed as processing or programming, is a core concept here, and it can easily be seen that the problems between business and IT can be described in terms of control and processing.

The tension between a sponsor and his agent (acting as a proxy) is as old as mankind, and strategies to prevent or solve conflicts in interests (or abilities) are just as old. Honor codes, kinship based trust relations, friendship, law and later more rationalized, externalized means of control through further intermediaries (banks, brokers) have been used. But the development of information processing has again increased the means of control considerably by developing automated processors. Before, only human processors were able to act as proxies, which created the "purpose and interest" problem in the first place. Process technologies do not depend on non-human machines, because they can create program-like instructions for human processors as well and find ways to control them. But they can certainly integrate information processing machines. It is fascinating to see the control technologies created by IT being applied to IT.

Shaped by IBM, Microsoft, Oracle and others, ITIL - the IT Infrastructure Library - has become quite popular during the last years. ITIL covers best practices for running IT infrastructures. This is a common approach which is used in other areas as well, like IT security (BSI Grundschutzhandbuch or Common Criteria) or methodology (Vorgehensmodell, Rational Modeling Framework etc.). These standards operate on different levels of detail and all need to be tailored to the specific case. Dr. Lubenow gave an excellent introduction to the concepts of ITIL, to which I am adding some critical comments now.

What is the main philosophy behind ITIL? I guess it is best understood when you look at IT simply as services provided. And these services are best designed and implemented in a way that allows easy outsourcing. ITIL both couples and de-couples the IT services with a company. IT services are de-coupled through the use of interface specifications which possibly hide implementation details of the services. And they are coupled to the enterprise by the use of legal agreements, so-called Service Level Agreements (SLAs). Business and IT create contracts with each other which regulate how much certain services will cost and what exactly those services will provide. Implicitly contained in ITIL is the rule that all activity starts in business and that there is no IT activity at all without an existing SLA. Business drives and controls IT.

In the end there is a control problem between business and IT - actually a distributed control problem (in the sense described by James R. Beniger), because the service could be rendered remotely. The main instrument of control here is the SLA, which - according to ITIL - needs to be very specific and detailed, with an emphasis on requirements and service definition. In my opinion there are contradictory forces behind the SLA approach as a detailed specification: on the one hand the sponsor wants to de-couple himself from the work needed to provide the service. On the other hand the sponsor gives detailed instructions on what exactly has to be done, thus reducing the flexibility of the service provider. Sometimes sponsors even describe not only what has to be done but also how - thereby reducing the abstraction level of the requirements even further. Unnecessary levels of detail can create a cost explosion at the service provider because he is unable to use existing and optimal processes to provide the service.

A lot depends on how the requirements for an SLA are created and on what level of abstraction they are specified. Common IT wisdom claims: as detailed and low level as possible, to prevent the development of services or products which are useless to the sponsor. This approach turns the service provider into a mechanically executing agent. But this wisdom is probably false: instead of putting a lot of emphasis on the command part of control, it might be beneficial to use the feedback part of control much more, especially when modern communication equipment allows frequent and easy communication. The "program" for the service provider could be specified in a declarative way with the service provider already involved, so that the requirements have a chance to match the real abilities of the service provider. This approach brings some additional costs of its own, due to errors in the interpretation or execution of the requirements.

After the bursting of the dotcom bubble many large corporations removed the head of IT from the company board and IT was no longer seen as being as strategic as before. More emphasis was put on another aspect of ITIL: providing (standard) services consistently, with a certain quality of service and at defined and agreed costs. To achieve this high level of planned quality ITIL uses strictly defined processes and lots of measurements along the way. IT is turned into a machine or factory this way, where resources (human and technical) are bound into defined processes and create results in accordance with the SLAs which have been agreed on.
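
To make the measurement side a little more tangible, here is a toy illustration (mine, not from the talk) of what checking an achieved service level against an agreed SLA target boils down to. All figures are made up.

    # Toy SLA check: achieved availability over a 30-day period vs. the agreed target.
    SLA_TARGET = 99.9   # percent availability agreed in the (hypothetical) SLA

    def availability(outage_minutes: float, days: int = 30) -> float:
        total_minutes = days * 24 * 60
        return 100.0 * (total_minutes - outage_minutes) / total_minutes

    # 99.9% over 30 days allows roughly 43 minutes of downtime.
    for outage in (10, 40, 120):
        achieved = availability(outage)
        status = "met" if achieved >= SLA_TARGET else "MISSED"
        print(f"{outage:5.1f} min outage -> {achieved:.3f}% ({status})")

The interesting part is of course not the arithmetic but everything around it: who measures, what counts as an outage, and what happens when the target is missed.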

ITIL cannot deny its heritage as a child of huge corporations that are involved in software production and IT services. The standard puts a lot of emphasis on requirements capturing in SLAs (a pre-condition for outsourcing of services) and on configuration and asset management (e.g. software licenses). Not really a surprise considering the parents of ITIL. There is another connection to the big parents as well: The costs of introducing ITIL processes are quite substantial and it might be the case that those costs are in turn only justifiable for huge corporations.

But before we talk about costs and other effects let's discuss the common sense behind ITIL. Like many other standards ITIL is a set of observations turned into experience. This experience controls the what and perhaps, in some standards, also the how. ITIL is on a rather high level with a focus on the "what": you need to organize storage, you need to monitor systems etc. As this experience comes from the observation of many different services and processes it is clear that the result will be a set of rules that governs all aspects of IT - many of them totally irrelevant to your specific case (or simply a bad fit, as described further down). For that reason ITIL, like all methodologies of that kind, requires tailoring the standard to your needs.

Tailoring a methodology is the exact opposite of creating one. During creation a serious problem appears: how do you define the borders of your methodology? One essential border is the level of detail required by a methodology. During development we frequently tend to acquire way too much detail. Try to build an ontology or a topic map for a certain area. The classic example is a topic map for a cake recipe. Do you need to include a description of the crystal structure of sugar? At what level of detail? Without tailoring the methodology you will experience a bad case of diminishing returns: you don't get value for all the effort put into acquiring details and defining things.

What is the proper level of detail for a specific case? Let's take a step back and look once again at what ITIL is and does. It defines processes to achieve a smoothly running IT infrastructure. It plans for eventual problems by introducing redundancies in the resources needed by those processes. So in the end an application of ITIL within a corporation means programming the IT infrastructure. The question about the necessary level of detail is answered quite easily now: the level needed to create programs which operate as processes of the IT infrastructure. You are programming the organization. (Again I am using terminology from James Beniger, The Control Revolution - Technological and Economic Origins of the Information Society.) But the programs themselves depend on their processors. Frequently those processors are human beings, sometimes they are machines. The level of detail needed for the implementation (not specification) of ITIL programs (processes) finally depends on the kind of processors used. With human beings as processors you might skip some level of detail and rely on the common sense and context information of your processors - at the risk of a mismatch in purpose and interests. If you think about machines executing your programs you must be rather specific, I guess. So which level of detail do you finally choose? This is in the end also a question of cost, our next topic. But if you want to retain the possibility to outsource your services, you will have to choose the level of detail needed for processing by machines, because theoretically you have no influence on the implementation of the service by an external partner.

The title of this section mentions Maxwell's demon and asks whether ITIL plays a similar role in the information society.

A picture of the setup can be found on the physics homepage of the University of California, Berkeley, where a few interesting links on the topic can be found as well. Another excellent source, in German, is Joerg Resag's chapter "Entropie und Information" (chapter 7 of "Die Symmetrie der Naturgesetze"). While still debated, Maxwell's thought experiment turned up some very interesting connections between thermodynamics and information theory.

The original thought experiment had two chambers connected through a hole guarded by a slider. A demon was supposed to watch all the molecules of gas in those two chambers and - by operating the slider - let the fast (hot) molecules pass in one direction and the slow (cold) molecules in the opposite direction, thereby creating a temperature gradient which could be used to drive a steam engine. At the same time this setup would seriously violate the second law of thermodynamics, which says that the entropy of a closed system cannot decrease - you cannot sort the molecules into hot and cold for free. If it were possible we would have a perpetuum mobile.

The discussion of Maxwell's demon soon focused on the measurements needed by the demon to decide which molecules to let pass. The energy spent on interacting with the molecules would force the demon to spend at least as much energy as is gained by creating the gradient. But then measurement schemes which did not involve spending energy were found, and the focus shifted to the information processing and storage the demon needs in general. This established the physical side of information. Later consequences for quantum computing etc. were found (see the discussion on the Berkeley physics site and perhaps Short et al.: The Connection between Logical and Thermodynamical Irreversibility, July 2005).
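
As a side note that is not from the talk but is the resolution most often cited today: erasing the demon's records is what costs energy, and Landauer's principle puts a hard lower bound on it of k*T*ln(2) per erased bit.

    # Landauer bound: minimum energy dissipated per erased bit at room temperature.
    import math

    k_B = 1.380649e-23   # Boltzmann constant in J/K
    T   = 300            # room temperature in kelvin

    print(f"minimum cost per erased bit: {k_B * T * math.log(2):.2e} J")   # ~2.87e-21 J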

The ITIL demon seems to need features similar to those of Maxwell's energy creating demon. It certainly needs a lot of definitional work: observing existing practices, counting assets, creating and negotiating SLAs, observing and measuring processes and their results. This - especially if it involves external companies via outsourcing, but also in the case of internal use - is energy intensive and economically costly work.

Still, sometimes it is made to look like a no-energy process using accounting tricks (external contracts are booked differently from internal employees) or organizational tricks (showing how the number of employees is reduced - read: cost savings are achieved - while hiding the numbers and costs of the outsourced work). OK, so it is costly but it works, doesn't it? Well - sometimes perhaps. Let's take a look at second order consequences (using the terminology of Dietrich Doerner: the consequences of the consequences). Frequently we see a high failure rate with off-shoring, e.g. to India. But the so-called near-shoring to companies close by also leads to problems in many cases. Sometimes external SLA partners show a frightening turnover rate among employees who had just begun to become productive. Of course, the sponsoring company is theoretically protected by the signed and legally enforceable SLA, but that does not really help much in practice. Lately security scandals around call centers and bank or telecom data etc. have raised attention to the potential loss of secrecy in the process. There are surely documents in ITIL which describe processes to fight data abuse, but some threats cannot be completely eradicated. The same goes for cloud computing and security.

Requirements handling is an important part of SLA creation and at the same time a never ending source of disputes. Today IT systems exist that do nothing else than capture requirements and create a detailed archive. Sponsors and suppliers of services can then check who asked for what and when. This becomes an issue usually when the sponsor realizes a need for changes later. And how can you truly calculate the costs of ad-hoc changes? Then we are back to the old tension between sponsor and agent and who is following which interests. Of course the internal service implementation of a supplier can be made transparent with enough pressure from the sponsor, and this is done today, again made easier by modern information processing technology. But that should not be necessary according to economic theory, because it finally creates a centrally planned economy - something that is not supposed to work, if I remember correctly.

As we are talking about second order consequences I have noticed an interesting trend: the more large companies live the service idea, the more they tend to outsource even high-level services like vision and strategy creation. It is quite common e.g. in automotive corporations to sponsor even the development of future products externally, together with the necessary requirements etc. While I do not know how well this works in large corporations, I saw small and medium size businesses lose the ability for product planning because they had lost the specialists to do this (and the chances to gain experience as well). Some of these companies have started insourcing those capabilities, e.g. by creating a development department for prototyping new products. In the large corporations, on the other hand, it might be a successful strategy to reduce the corporation to a shell that provides financial backing and information integration and processing.

To close the discussion on ITIL for outsourcing services: if you plan to use external services it is certainly a good idea to take a look at the ITIL standards to learn what is needed for successful cooperation in IT. But again, you need to realize the hidden costs associated with it and also the risks which cannot really be covered completely by SLAs. Think about second order consequences, like what it does to your own company.

At the turn of the century more and more companies started to use ITIL to organize their internal IT as well. They took IT budgets away, demoted the CIOs and turned IT into a business controlled activity. Strictly requirements-bound development forced IT to justify every development with a business case. Cost control was enforced much more strictly and permanent measurements were installed.

The consequences were also quite substantial. Previously informal relations were turned into legal ones. Creation and negotiation of SLAs is now common practice and creates lots of overhead. Requirements creation is a prerequisite for an SLA and it is a task for the business side of the company - which hardly ever understands what IT can really do in a certain case and whose requirements frequently turn out to be pure fantasy - driven by the desperate need to come up with something because "the process" demands it.

While IT used to define its own requirements and have them sanctioned by business, it is now confronted with detailed requirements specifications which have been developed without input from IT. The specifications prescribe "how" things have to be done and in many cases fit the existing infrastructure badly, which raises costs enormously. In many cases early matching of requirements with abilities could cut down on costs while providing the same level of service quality.

Cost control certainly is important in IT, but you will also notice the effect of diminishing returns here: in many cases the loss in flexibility is not worth the money saved. And it might have strange consequences: it forces project managers in development to behave just as in an external product development shop: all costs that do not involve direct business functionality (debugging, refactoring, release upgrades of third party products etc.) need to be hidden behind business functions sold via an SLA. Yes, there are maintenance SLAs, but they do not cover the refactoring needed etc., because ITIL does not allow hiding such costs e.g. behind maintenance activities to run the IT infrastructure (as is still frequently done). And not to forget: cost control is today an information processing activity based on information technology and as such - due to the progress in this field - able to work at ever increasing levels of detail, causing perhaps ever increasing levels of overhead.

But I see the biggest consequences in the way programmed processes influence and control the way people work. To get a grasp on those issues let's ask some questions first:

The activities you are trying to program - are they open (changing) or closed?
How many degrees of freedom do "resources" have once processes are defined and programmed?
How well do requirements fit to the available technical infrastructure and skills?
Do you as a member of IT have influence on the requirements?
Do you think that process technology is going to affect motivation and morale?
How much time and money is spent on process definition?
Has process definition become a goal in itself, perhaps with a deciding influence on careers?
Are employees hiding behind processes (or missing processes)?
How did reaction times change? Are they now longer than before (of course as stated in the SLA)? In other words: did the codification of services in SLAs change the way they are now provided?
Did you talk to employees on the lower rungs and how they perceive the changes?
Do you require excellence in your employees' work or will a constant average do?
Are you trying to solve communication and personal problems with processes and legal (SLA) documents?
Is the final specification of services or products done via command or feedback?
When did working in your company stop being fun?

The "programming" of IT or other services in your company will have technical, economic and very much also social effects which will change the way your company operates in the long run. Of course development activities are not a typical area of ITIL and other process methodologies. The ratio of hard to plan for exceptions etc. is rather high and motivation of employees is key. Still, companies are about to program more and more of their activities. Many of those have nothing to do with IT services (or little). That makes it a rather ironic point that IT departments now complain about this trend which has been created and extended exactly by the ever increasing use of information processing technology. Seen like this ITIL is just one of many process and control technologies that are based on programmed activities.

It is interesting to contrast agile methodologies (eXtreme programming, Scrum) with process based methodologies. Agile methodologies also use the concept of a process. They describe phases and steps. But they usually do so at a lower level of detail. They specify the what and not the how of things to be done, thus leaving degrees of freedom for the individual. And they cut down on the costs of creating enormously detailed "programs" of control by using feedback as the main control technology. Agile methodologies are much more than just a technology to create IT products and services. I wonder whether they could be a replacement for process methodologies in non-IT areas as well. Is Scrum getting used outside of IT?

I guess the basic question is whether the approach of "total control" through process methodologies based on powerful information processing technology is not only dangerous at the level of society (the 1984 question) but also less efficient as a means to control production and services.