Thursday, May 21, 2009

Overunderdoing: patterns in Microsoft's interoperability stance


To achieve good things to the extent that they can be loudly announced, but to a slightly lesser extent than would actually make them useful.

It looks like Microsoft is reinventing itself. After years and years of isolation in its self-ruled monopoly, the company now desires to interoperate with everyone else. Or does it? We will never know. What we know is simply what can be inferred from the information that reaches us. And our evaluation depends on whether we focus on the titles or examine the fine print instead. Why does every announcement leave a déjà vu sensation in the air?

It's currently very uncool to be anti-openness. Open is nice, open is 2.0. Wikipedia, Firefox, Creative Commons, Google, Linux, you name it. But openness can kill a monopoly, as it imposes merit-based competition. Unfortunately, unless a way is found to bring Quantum Mechanics to the IT market, it's physically impossible to be Open and Closed simultaneously. But can one remain closed while appearing to be open?

Well, if we're talking about interoperability it has been done. Or tried. We could call it overunderdoing. Here's the algorithm:

  1. pick one interoperability concern

  2. solve it up to 85% (percentage may vary)

  3. wisely choose the 15% to leave out so interoperability can be said to work but with constant annoyances

  4. make a loud announcement to the press

  5. profit: everyone will read the announcement, but only a small fraction of those readers will notice the missing bits; despite the claimable advances over the previous situation, it still won't be practical to make any use of the announced interoperability

While this is obviously not a perfect plan, the less informed the audience, the more likely it is to work. Often, the left-out 15% is subtle, sometimes only noticeable when testing or implementing. Most announcement readers will read and move on with their daily work. That's how it works.

Let's look at some examples.

By the end of 2006 Microsoft decided to submit OOXML to ISO. The specification was indeed mostly open but:

  • it was underspecified in the details (attributes like AutoSpaceLikeWord95 were impossible to implement)

  • it was redundant as there was already another standard for documents (ODF)

  • it was neither patent free nor royalty free (this is still somewhat unclear now)

  • there was no multi-platform implementation (interoperability was unproven)

Still, at first glance it sounded like a good move. People said: wow! Dozens of them, some with very important social roles, wrote support letters for OOXML. The trust levels were so high that people from public institutes without any interoperability experience applied to the local Technical Committees just to vote for Microsoft. Some better-informed people wondered: why not support the already existing multi-platform ODF? Why do we need several standards? Everyone knows by now how it went from there.

Some months later, in 2007, Microsoft announced Silverlight, a cross-browser, multi-platform “solution for video and interactivity”. But guess which platform was left out? For the Silverlight 2 announcement they claimed they would support Linux. However, as of May 2009 the Linux support, which is developed by Novell, not Microsoft, is still in beta. This means double profit: announcing that Linux support will be taken care of sounds good, while the “relatively few” users they expect on Linux are left without a working solution for Silverlight-based websites. The more Silverlight, the less Linux. But if pressure mounts they can still point people to the Moonlight website, which will eventually have a stable implementation, one day.

Another example was the interoperability announcement that became very popular in the press. It came after the company was fined by the EU and sounded like an opportunity to save face. Again, it was received with quite some praise by the IT crowd in general, but the Open Source community, one claimed target of the initiative, quickly read it as a useless PR stunt. In fact, the interoperability terms introduced a distinction between free and commercial uses, something that is definitely not how Open Source works. No Open Source project will sacrifice freedom of use for any patent-encumbered Microsoft specification.

Finally we arrive at the major announcement of 2009: Microsoft Office 2007 SP2 supports ODF. This is huge. A great headline! Despite all the OOXML mess, Microsoft supports ODF first. But you can't exchange even basic spreadsheets between Office 2007 SP2 and any other ODF producer, because not even the most basic 2+2 SUM() is compatible. Unbelievable? Maybe. But it's true, and they don't even deny it. Regardless of all the existing ODF implementations (OpenOffice.org, Symphony, KOffice, odf-converter, Sun ODF Plugin), Microsoft handled formulas in their own incompatible way. There's no possible excuse for this. Even odf-converter, which is co-developed by Microsoft, managed to play nicely with the others.

It's clear that all these examples match the same pattern. Now the question is: how long will they manage to fool everyone? I have to admit that in many countries this strategy works well enough. Many important people in Portugal were convinced that OOXML and the interoperability announcement were honest and transparent initiatives. What will they think in the face of this ODF implementation nonsense? Haven't things gone too far?

We should feel tired of being fooled by now, even if we're being so smartly fooled.

Monday, May 11, 2009

Guest clock runs too fast - guest / host clock synchronization

It seems that millions of VMware users are having clock drift problems across different OS versions. A Google search returns too many results with too many different solutions.

Here's what to do if your guest clock runs too fast under VMWare Server.

Host Operating System

1. Find your maximum CPU frequency:
cat /proc/cpuinfo | grep -i mhz

2. Add the following lines to /etc/vmware/config:
host.cpukHz = 2800000 (replace with your CPU MHz * 1000)
host.noTSC = TRUE
ptsc.noTSC = TRUE
3. Restart VMWare.
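The host-side steps above can be sketched as a small helper script. This is a sketch assuming a Linux host where /proc/cpuinfo reports "cpu MHz" lines; it prints the three lines to append to /etc/vmware/config:

```shell
#!/bin/sh
# Sketch: derive the host.cpukHz value for /etc/vmware/config.
# Assumes /proc/cpuinfo lists "cpu MHz"; on hosts with frequency
# scaling we take the highest reported value as the maximum.
max_mhz=$(grep -i '^cpu mhz' /proc/cpuinfo | awk '{print $4}' | sort -rn | head -n1)

# host.cpukHz expects kHz, i.e. MHz * 1000, as an integer.
cpu_khz=$(awk -v mhz="$max_mhz" 'BEGIN { printf "%d", mhz * 1000 }')

echo "host.cpukHz = $cpu_khz"
echo "host.noTSC = TRUE"
echo "ptsc.noTSC = TRUE"
```

Redirect the output into the config file (e.g. `sh derive-cpukhz.sh >> /etc/vmware/config`) and then restart VMware as in step 3.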

Guest Operating System

4. Change your kernel boot parameters according to the table found here.
5. Configure ntpd according to this document.
6. Ensure ntpd starts on boot.
7. Reboot.

This was tested under VMware Server 1.0.9 running on a CentOS 5.3 64-bit host with CentOS 5.3 64-bit guests. All kernels are SMP.


Apparently, even with the above procedure, the guest clock still gains something like half a minute per hour. If the applications running on the VM are not time-sensitive, one can run an hourly ntpdate cron job instead of ntpd (ntpd doesn't work if the guest clock drifts too much).
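A minimal sketch of that cron-based workaround. The script path and the use of pool.ntp.org are assumptions; substitute your site's NTP server:

```shell
#!/bin/sh
# Hypothetical /etc/cron.hourly/sync-clock (remember to chmod +x it).
# Steps the guest clock once an hour instead of running ntpd, which
# gives up when the offset grows too large.
# -u makes ntpdate use an unprivileged source port; pool.ntp.org is
# an assumed server, not one mandated by VMware or CentOS.
/usr/sbin/ntpdate -u pool.ntp.org
```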

It may be that disabling power management on the host helps, but I wouldn't go that route given a "good enough" solution as described above, which works well for the scenarios the guest is being used in.