Welcome to the kriha.org weblog

What's New

8th Games Day at HdM

We just had our 8th Games Day at HdM. We heard a very interesting talk on the new business models behind so-called casual games. It is hard to believe that micropayments pay off so well when combined with social forces like besting one's friends - an effect similar to what sometimes happens between neighbours. And an honest statement on how hard working in the games industry can be: on time and on budget is a must. The second talk was by Stefan Radicke - one of the founders of game development at HdM and now technical director of a company that produces Wii games. Excellent. And I could not attend the last one, by our own Andreas Stiegler, on artificial intelligence in games due to other duties.

Luckily the Games Day is pretty much organized by our game-interested students. A big thanks to Stefan Soller, Andreas Stiegler and the current topics team.

8th IBM Day at HdM - Next Generation Internet, Clouds and Enterprise Architecture

The next generation of the Internet could look much different from today's Internet: static IP addresses for everybody and everything (IPv6) and good integration with powerful compute clouds might be a good thing, but there are also rather problematic things looming over the next Internet: the days where all bits are equal could be over, with providers trying to regain profits lost to Google and co. And don't think the rage around Wikileaks will have no profound effect on politicians and lobbyists trying to control the "egalitarian" Internet of today.

And who would be better equipped to answer our questions on the next Internet than Peter Demharter of IBM? He has been giving talks at HdM for a couple of years now, and each and every time it was something very special. And because networks alone aren't everything, he brings with him Güther Triep, a specialist in enterprise architectures. We will ask them both, for example, how the new enterprise architectures include cloud computing.

For a lively Friday afternoon join us on Friday, 17th December 2010, at 13:15 at HdM.

Note

As always the talks are free of charge and open to the interested public. For directions see the HdM homepage. Location: room 056, Nobelstrasse 10, Stuttgart-Vaihingen. Live stream under events at HdM.

5th Web Day at HdM - Know your tools

The 5th Web Day at HdM offers a wide variety of topics around web publishing and web apps: from Apache Wicket to TYPO3 extensions, PHP and CSS secrets and web-to-print technologies.

Agenda:
14:00 - 14:10 Welcome
14:10 - 15:10 Know your tools. Keep it simple. Rocking with PHP, CSS, HTML5 - Stephan Soller, Medieninformatik, HdM Stuttgart
15:15 - 16:10 Web to Print - web application "Designer" for personalized products - Sebastian Freytag and Daniel Dihardja, Weitclick GmbH
16:15 - 17:10 Extensions with PHP, Extbase and Fluid - Benjamin Mack, TYPO3 Core Team
17:15 - 18:10 Java Web Applications with Apache Wicket - Michael Gerlinger, WidasConcepts GmbH

Note

As always the conference is free of charge and open to the interested public. For directions see the HdM homepage. Location: room 056, Nobelstrasse 10, Stuttgart-Vaihingen.

Privacy and Piracy in digital media - 2nd Digital Rights Day at HdM

The world of digital media has become a legal battleground between rights holders and users. And this is just a small part of the fight for or against intellectual property rights in general, which is fought between developing countries and the Western nations. Strange things happen during this fight: those who usually want to control citizens without end by storing all kinds of data for a long time start a weird form of Google Streetview bashing and want to make us believe they are doing this for our protection.

Initiatives like ACTA try to - secretly - increase the rights of rights holders without public discussion. Internet providers will be made responsible for content and censorship. File sharing is being controlled, which raises the question of a neutral Internet. If something goes wrong and user data are exposed, the usual answer involves outsourcing and, in the future, perhaps cloud computing problems.

Wikileaks - the living proof that secrecy no longer has a role in democratic states. But is it legal to make secret documents public? Think about the secret contracts between Berlin and water supply companies guaranteeing them huge profits at high costs for citizens.

I have some special projects this term: I am trying to understand "the power of crowds". How can we learn about the lifetime of equipment? The quality of services provided? Are we all getting the same prices in web shops, or are we offered individual deals? Usually we make such experiences as individuals facing a company, so we cannot learn about aggregate data this way. Can we build sites which collect this information? Is this legal?

It's about time to discuss these developments, and we have invited specialists from law enforcement, lawyers and the new political party "Piratenpartei" to shed some light on piracy and privacy issues.

Agenda:
13:15 Opening of the proceedings - Prof. Walter Kriha, Björn von Prollius, Studiengang Medieninformatik, HdM Stuttgart
13:30 Social Media and Privacy - Prof. Dr. Hendrik Speck, Studiengang Informatik und Mikrosystemtechnik, FH Kaiserslautern
14:30 New digital rights for the press? - Prof. Dr. Michael Veddern, Studiengang Mediapublishing, HdM Stuttgart
15:30 Digital rights, copyright and software patents - Dipl.-Kfm. Jan Lüdtke-Reißmann, Piratenpartei Baden-Württemberg
16:30 Cloud computing and compliance - a contradiction? - Joachim Dorschel (attorney), Bartsch und Partner, Karlsruhe
17:30 The net - a law-free space? Criminal prosecution online - Thomas Hochstein (public prosecutor), Stuttgart
18:30 Verdict and discussion - Prof. Walter Kriha, Björn von Prollius, Studiengang Medieninformatik, HdM Stuttgart

Note

05.11.2010, 13:15-18:30, at HdM, Nobelstrasse 10, Stuttgart, room 056. Free of charge and open to the interested public.

Beyond fear tour 2010 - visiting two excellent companies

Last week we did our "beyond fear" motorbike tour for the 6th time (if I'm counting right). It is amazing what a couple of days on a bike - together with a group of alumni, students and colleagues - does for your brain (:-). We visited the Technorama in Winterthur, an excellent place for young and old to learn about physics and have lots of fun too. You should take your children there one day.

The first night we spent at the CAP Rotach campground in Friedrichshafen. They have group tents and a huge barbecue which we shared with another group. The second night we pitched our tents at the Zurich Seebucht campground and met several of our alumni who work in Zurich. We went to the "Rote Fabrik", which is close to the campground, and had a good time there.

On our first day we visited Innovations in Immenstaad. They have belonged to the Bosch Group for a couple of years now. We got an excellent introduction to their product and technology chain and realized that they are a perfect fit for our technology-driven faculty. They produce business rule engines and have established high-performance development based on open source software and excellent people. They treat their employees well (no micro cubicles, good food, flexible times and processes, long-term employment etc.) and are living proof that high-tech is not only still possible in Germany but can also be very successful. Their growth rate is staggering, and we invited them to our Media Night in July to meet our students and faculty staff. No outsourcing of core technology at this place!

A core idea I took away for our current strategy discussion in MI concerns testing as a core part of development and as a management skill. Innovations uses extensive unit tests for their rule engines and, like the company described below, follows a "designed for testability" approach. This needs to be reflected in our computer science master (especially in the business strategy and management area).

The second company we visited is Sauter AG, located in Basle and specializing in building automation (an area in which some of our assistants want to do their doctoral theses). We had already visited Sauter AG on our first beyond fear tour and this time noticed quite a number of differences: they have grown a lot too. A new building serves as a Minergy demonstration object, combining a product assembly section, a huge robot-driven storage area and other things which I have forgotten.

Man, do they have a deep production chain: from creating their own cases using advanced plastic engineering to creating their own electronic boards for sensors, actuators and control computers, there is almost nothing they can't do. And if they find something to be inefficient they just build a machine for it. It goes almost without saying that this approach requires excellent people (e.g. 160 in production, which is not really a lot given the depth of the processes). The Swiss fondness for quality does help too, I guess. Using their own technology, Sauter AG (privately owned) is living proof of the business possibilities once you control the technology behind your product. Of course we have also invited them to the next Media Night to meet our developers, e.g. the group around Kai Aras that built OpenAMI - a low-cost building automation system including an iPhone front-end, sensor controllers etc., all based on a low-cost router running Python apps.

And of course Sauter is considering offering automation services based on their experience with building efficient machines, and they can offer excellent "on time/on budget" quality because they control most parts of their products. They have a demand for advanced visualization techniques for their control stations and could probably profit from our development skills in other areas as well.

As on every tour we had quite some fun. Only the weather turned out rather cold and rainy. But it looks like we still picked the best three days of a very rainy week. Let's see what we are going to do next year.

Games Day Reflections

Cheating in real-time graphics; value transport in games - a disturbing thought or a useful perspective on "killer games"?; free-to-play approaches and community management; game engine design as a simulation exercise; the current state of the games industry in BW. Guests from Reutlingen, Wolframstrasse (project ideas, DAAD), Ludwigsburg, summer university, MfG study use?

The effects of buffer removal on interconnected systems

Queuing theory, the buffer problem, disruption and consequences, lean companies and fragility, capital and profit ratio, the financial crisis, the credit system as a buffer. Leanness and latency trade-offs. Latency not modelled in queuing theory.
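To make the leanness/latency trade-off in these notes a bit more concrete, here is a small numeric illustration of my own (a sketch, not part of the talk notes), using the textbook M/M/1 result W = 1/(mu - lambda): as buffers and spare capacity are squeezed out and utilization approaches 1, the time a job spends in the system explodes.

```python
# Illustrative M/M/1 numbers: time in system W = 1 / (mu - lambda).
service_rate = 10.0                      # jobs the system can complete per hour (mu)

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilization * service_rate               # lambda
    time_in_system = 1.0 / (service_rate - arrival_rate)    # hours per job
    print(f"utilization {utilization:.2f} -> time in system {time_in_system:6.2f} h")
```

A "lean" organization running close to 100% utilization therefore pays for its efficiency with extreme sensitivity to any disruption.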

Testing as an agile discipline

Last week we hosted a BWTest event on explorative testing. Stefan Vogel gave a talk on explorative methods and Kai Leppler added a short presentation on practical experiences with this approach. It was lots of fun because I have known Stefan (and Claus Gittinger, who was also present) for many years - actually from the very first day I started working on Unix operating systems. This was in the middle of '86 in Munich and we were all working/consulting for Siemens at that time. And Stefan used our common past to add many stories to his presentation on agile testing. In other words, it was like a ride into the past.

In '86 Siemens used a rather formal and organized approach to testing our kernels, and I remember that all testers were in a different department. This caused some friction at the beginning, but over time I was able to change my relation to "my" tester. He became a friend and we started something that, in hindsight, can only be called agile testing. It ended with him sitting next to me writing test code while I wrote operating system functions. He knew me very well - my strengths and weaknesses, the problems of the architecture etc. - and used this to write test code focusing on the core problems.

It turned out during the talk that this is nowadays called explorative testing. Testers use their intuition and cleverness to write test code with a focus on the real problems. They try to uncover the "black swans" - relatively rare bugs with severe consequences. This goes along with the expectation of a rather low bug count in code leaving development: it is assumed that development does automated testing for the defined use cases as part of regular development practice.

It looks like testing follows the schism between formal, requirements-driven process technology and agile, human-focused development methods: the formal testing approach, consisting of automated functional testing against defined use cases, stands opposed to a "gut-driven", explorative testing approach which relies on the intuition and cleverness of the testers.

The resulting objections from the more business-oriented participants were not really a surprise. They pointed out the advantages of a formal, automated testing process: the expected functionality is tested rather exhaustively. But that's exactly its weakness as well: unexpected problems are usually not found. Here the intuition of testers comes into play with the explorative testing approach. By knowing components and developers they can follow their hunches for rare and unexpected events.

This means that both approaches are not only compatible but necessary for good coverage. But does it end here? The pattern of strength turning into weakness, shown in the case of formal use case testing (only expected problems will be found), repeats itself: while the intuition behind explorative testing is its strength, concentrating the effort on probable problem areas, it is the same intuition that might prevent some serious bugs from being uncovered, simply because it pointed the tester in another direction.

But if we replace the intuition with randomness we have a chance to catch even those bugs. The approach of random testing is called "fuzzing" and has proven extremely successful with operating systems and browsers. By shooting randomly at all kinds of interfaces and input channels of an application, truly unexpected problems can be found. The disadvantage lies in the fact that bugs in deeper layers of an architecture are not easily reachable from the top-layer interfaces available to random service calls (without a successful login, e.g., the test code won't reach most functions).
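A minimal fuzzing loop is easy to sketch. The following Python snippet is purely illustrative (parse_request is a hypothetical stand-in for whatever interface is under test): it throws random byte strings at the interface and records every input that triggers an unhandled exception.

```python
import random

def parse_request(data: bytes):
    # hypothetical stand-in for the real interface under test
    text = data.decode("utf-8")          # may blow up on malformed input
    method, path = text.split(" ", 1)    # may blow up if the separator is missing
    return method, path

random.seed(0)
crashes = []
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    try:
        parse_request(blob)
    except Exception as exc:             # any unhandled exception is a finding
        crashes.append((blob, repr(exc)))

print(f"{len(crashes)} random inputs triggered unhandled exceptions")
```

Real fuzzers are of course much smarter about generating inputs, but even this naive loop shows why random testing finds problems no defined use case ever anticipated.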

And how does automation fit into this picture? Claus gave us the right idea: automation is orthogonal to the axis made of formal, explorative and random testing. This axis could be called "degree of bug expectation". I have tried to come up with the following diagram showing the testing dimensions and giving some examples.

But the evening brought more pleasant surprises. Kai Leppler showed that his team became much more successful at finding rare bugs using explorative testing. And he offered to hold a workshop on testing methodologies and techniques for our students, which I gladly accepted. Expect a testing event in the near future at HdM...

Finally, we are planning a major test-related event in September and we are hoping to get a very interesting speaker who is in charge of testing for the world's most popular search engine. Keep your fingers crossed that we will be successful...

GTUG Hackathon at HDM

The Google Technology User Group (GTUG) and the computer science and media faculty at HdM are offering a special "hackathon" on the Chrome browser and extensions.

Agenda:
9:00 - Welcome & arrival
9:30 - Introductory session on Chrome extensions and code examples
10:00 - GTUG battle explained / collect ideas / build teams
10:30 - Start of hackathon
12:00 - Lunch
15:00 - Coffee break
Location: Hochschule der Medien, Nobelstrasse 10, Stuttgart, seminar room U32 (next to the S-Bar in the basement)
Time: Friday, 5.3.2010, 9:00-17:00. Admission is free.

Securitization

I would never have expected that a major human catastrophe - the devastating earthquake that hit Haiti last week - could become a prime example of what has been called "securitization": a political approach that sees everything in the world as a problem of security and that is massively powered by the military and the military industry. As a result of securitization the differences between external and internal security are disappearing, both technically and legally. Citizens become potential enemies.

Telepolis had two articles lately which describe the application of securitization in Haiti. Harald Neuber asks whether the US troops will establish permanent control of Haiti. US media are painting a picture of Haiti that has street violence at the core of its problems and which calls for military action. (This is nothing new: remember the reports of street violence in New Orleans - mostly faked - by US media, which put the mostly coloured population in a very bad light.) As a result the majority of flights now serve military purposes, and civil organizations say they cannot get their equipment and helpers into the country for that reason.

Now there is an active military-industrial complex in the EU as well which fears the US competition. J. de St. Leu and Matthias Monroy describe the use of European paramilitary forces in Haiti. Financed by Finmeccanica (the EU's largest military-industrial corporation) and other military corporations, the troops are supposed to get practice, evaluate materials etc. This gives us a glimpse of the future relations between the West and other countries: if a country does not function according to the Western logic, troops will be sent.

For Haiti the result is clear: the human catastrophe does not really matter. It is simply a battlefield for competing military-industrial corporations, justified by securitization. For more background information take a look at the NeoConOpticon paper.

Facebook Scalability as a function of memory access latency

Thomas Fankhauser pointed me to an interesting article on Facebook performance and measurements: Real-World Web Application Benchmarking by Jonathan Heiliger explains why Facebook uses a custom testbed and approach and not, e.g., SPEC. An important statement of the article is that Facebook saw a major effect from the memory architecture of their platform. This is at the same time a nice example of how careful one has to be with statements about what gives good performance and throughput.

As a social network, the Facebook architecture is far from common - even though it may look like a regular 3-tier architecture at first. They keep almost everything in RAM using huge clusters of memcached and use many cheap UDP requests to get at those data. This means that their access paths are already highly optimized and different from, e.g., Google's with its big distributed file system. And it is a reminder that all statements about performance are relative to platforms and architectures, and what fits one need not fit the other.
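The basic pattern is easy to sketch. The following cache-aside snippet is only my own illustration (it assumes a local memcached instance and the pymemcache client, and it uses plain TCP rather than Facebook's UDP-based setup): read from RAM first and only fall back to the database on a miss.

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))     # assumes memcached running locally

def load_profile_from_db(user_id: str) -> bytes:
    # hypothetical stand-in for the real database query
    return f"profile-data-for-{user_id}".encode()

def get_profile(user_id: str) -> bytes:
    key = f"profile:{user_id}"
    cached = cache.get(key)              # RAM hit avoids the database entirely
    if cached is not None:
        return cached
    value = load_profile_from_db(user_id)
    cache.set(key, value, expire=300)    # keep it hot for subsequent requests
    return value

print(get_profile("42"))
```

With almost all reads served out of RAM like this, the memory architecture of the machines naturally becomes a dominant factor - which is exactly the effect Facebook measured.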

Finally the paper shows that performance/watt is a critical value for datacenter use.

StudiVZ, XING and Co. - Architecture and Operational Aspects of large Social Networks

System engineering is rarely a topic at universities - the usual research projects are too small to demand such drastic measures as performance testing, monitoring and alerting and so on. Last summer I started a course on ultra-large-scale systems, and this winter term I am extending it with practical advice on system engineering tasks. Students learn monitoring tools and how to use them.

It turned out to be far from easy to create an environment which allows the application of performance and load-test tools, of monitoring, alerting, caching, parallel processing and other techniques. Only now are we able to run the tools and to start integrated testing with the goal of further optimizing the architecture of our test platform based on empirical data.
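To give an idea of the tooling involved: even the most basic load measurement already needs concurrency, timing and percentile reporting. The snippet below is a deliberately minimal sketch of my own (the target URL is a hypothetical local Mediawiki page; the actual course work uses dedicated load-test and monitoring tools):

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost/mediawiki/index.php/Main_Page"   # assumed test target
REQUESTS = 200
CONCURRENCY = 10

def timed_get(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(REQUESTS)))

print(f"median  : {statistics.median(latencies) * 1000:.1f} ms")
print(f"95th pct: {latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")
```

Everything beyond this - warm-up, realistic request mixes, server-side monitoring and alerting - is what makes real system engineering hard.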

Another big problem was the lack of a model of our runtime environment. For this purpose we went to Prof. Reussner and his team at KIT Karlsruhe and took a close look at the Palladio simulation environment. This will become a topic in future terms here at HdM.

But perhaps the biggest surprise was to learn that there is almost no literature on this topic that would allow beginners to enter the world of large-scale system engineering. The most current book we found was from 1991. We will collect the papers written in this course and attach them to the draft on ultra-large-scale systems.

Finally, to learn how the "big ones" do it, we have invited the makers of several large social networks to join us for an event on performance and scalability and other topics.

We will have Dennis Bemmann, founder of StudiVZ, with us. Dr. Johannes Mainusch, Vice President of Operations at XING, will show us why "the slow ones will be left", and Heiko Specht of Gomez will demonstrate the need for worldwide external monitoring of large sites to ensure quality and usability. Members of my master class on system engineering will demonstrate some of the tools used to measure a Mediawiki installation based on a LAMP stack.

Note

22.1.2010 at HdM Stuttgart, Nobelstrasse 10, room 056. The event is free and open to the public. Directions can be found on the HdM homepage. And the live stream URL.

Agenda: 

12:30 Welcome and short introduction (Prof. Walter Kriha)

12:45 Wikimedia - measuring the LAMP stack with tools (students of HdM, Computer Science and Media)

13:35 External monitoring (Heiko Specht, Account Manager, Gomez Deutschland)

14:45 The slow ones will be left - performance of large websites and a look behind the scenes at XING (Dr. Johannes Mainusch, Vice President Operations, XING)

16:00 Client-side Optimizations (Jakob Schröter, HdM, Computer Science and Media)

16:35 VZ networks: origins, technology, scaling - technology and discussion, e.g. protection of personal data (Dennis Bemmann, StudiVZ founder and ex-CTO)

18:00 End of the event

Proof and Causality in Computer Science

Recently I had some students asking me why we seem to be unable to come to a final decision on certain technologies and their merits. Is OO better than procedural? What is the best software production technology? Agile or RUP? And so on. There are several reasons for this situation. The first one is fundamental: complex theories are very hard to falsify, as Thomas Kuhn has shown. It is logically undecidable whether to improve a theory or simply replace it.

Another reason is the fact that we don't even try to empirically falsify certain statements. This might be caused by the fact that most computer science people have some math background, and mathematics is not an empirical science: we do not gain mathematical statements by empirical observation. Prof. Tichy of KIT once gave an interesting talk at SPIW in Freiburg about the empirical validation of software development approaches. He mentioned, e.g., that the famous multi-version approach (have different programmers with different types of hardware and languages work on the same problem, assuming that not all of them will fail at the same moment) turned out to be much less effective than expected: most versions crashed at the same problem location and turned out to be far from independent. Prof. Tichy's talk included basics on the philosophy of science and mentioned the book by Alan Chalmers on "how science really works". You can find the talk on Die Rolle der Empirie in der Softwaretechnik (in German) in the SPIQ archive.

Finally, the third reason is that proving causal relations is very hard. It is not by chance that Judea Pearl's latest book is in the side-bar. There are two very nice slide sets by the author which give a gentle introduction to causality and its problems. The first one is The Art and Science of Cause and Effect and explains, e.g., the adjustment problem - the problem of choosing the right set of variables for causal analysis. The second one, "Reasoning with Cause and Effect", also discusses the differences between logical and causal explanations (the death squad problem). This gave me some ideas on the differences between "permissions" (logic) and "authority" (causal ability) in access control mechanisms.

Android Security - UID based isolation and better semantics

Markus Schlichting wrote a nice paper on Android security (in German). I used it to compare the approach taken by Google with a systematic damage reduction technology based on the Principle of Least Authority (POLA). The core features of Android security, from my point of view, are the isolation of applications via different UIDs (they run with different identities and different associated rights), a modular interface to applications as services which allows rights reduction when calling applications, the use of signatures to create application families with a common UID, and finally better semantics for user intentions. The installation process is also based on descriptive information.

The isolation of applications via different UIDs works a bit like encapsulation via objects: there is no uncontrolled external access to application-internal data. Different UIDs also map nicely to different partitions in storage and keep application data separated. The problem of course is Inter-Component Communication (ICC), when one application needs to use another application. This can be controlled at installation time via the AndroidManifest.xml description, and users need to give permissions for such uses. Due to the service interfaces the permissions can be rather granular and on a higher semantic level which is understandable to users.
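Under the hood this is ordinary Linux UID isolation: every installed application gets its own UID and a private data directory that only this UID may enter. Here is a small Python sketch of that underlying mechanism (my own illustration, not Android code: it assumes a Linux machine and root privileges so that chown/setuid are allowed, and it uses /tmp instead of Android's real per-app data location):

```python
import os

BASE = "/tmp/android-uid-demo"           # Android keeps per-app data elsewhere
APP_A_UID = 10001                        # hypothetical per-app UIDs assigned at install
APP_B_UID = 10002

def install_app(package: str, uid: int) -> str:
    path = os.path.join(BASE, package)
    os.makedirs(path, exist_ok=True)
    os.chown(path, uid, uid)
    os.chmod(path, 0o700)                # owner-only: no other app UID may enter
    return path

path_a = install_app("com.example.appA", APP_A_UID)
install_app("com.example.appB", APP_B_UID)

pid = os.fork()
if pid == 0:                             # child process plays the role of "app B"
    os.setuid(APP_B_UID)
    try:
        os.listdir(path_a)               # try to read app A's private data
    except PermissionError:
        print("app B cannot read app A's data directory")
    os._exit(0)
os.waitpid(pid, 0)
```

Everything Android adds on top - manifest permissions, service interfaces, signature-based application families - is a way of selectively punching controlled holes into exactly this isolation.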

But ICC is not unproblematic: there is a mode which allows an application to call another application which then runs with the rights of the original application - kind of a reverse setuid feature (I do not understand how the called application would then be able to access its own resources). The mechanism described works like a function call in a functional language, where only the parameters are available to the called function.

Using a signature to identify the creator of code is quite common now. Android does not use this as a guarantee of quality. It is simply a mechanism to identify code and to allow automatic updates. An interesting feature is the combination of applications under one signature: they will then run with one UID and are able to access each other's resources. An important part of this feature is the tracking of a product's quality via reputation on a public site: over time the signature of a code creator becomes a sign of quality, and misbehaving applications or malware are shunned.

Does this create a system which minimizes damage in case of malfunction or attack? It certainly looks like an improvement over regular operating systems, e.g. by requiring applications to be installed before they can run. But in general there is still lots of ambient authority around: applications can use many system calls which they do not need. Users need to associate, e.g., address books with applications which need to use them, but there is no mechanism to restrict this to one address only. All in all the mechanisms used by Android are not much different from what is done in, e.g., Polaris, and this shows the limits of authority reduction in a system that does not use capabilities at its heart.