Tuesday, September 6, 2011

KVM on Illumos

Illumos, for those who don't know, is the community fork of OpenSolaris, and OpenIndiana is a robust operating system distribution based on illumos.  Now that we've got the introductions out of the way, what's the deal with KVM?

Engineers at Joyent have delivered what appears to be one of the most significant additions to the platform since the fork - the port of the Linux KVM capability to the illumos kernel, and a substantial effort it was.

It's a great achievement, but it's not a complete port at the moment.  The focus is currently only on systems with VT-x extensions; AMD SVM isn't on the current roadmap, but no design choices were made during the port that exclude that possibility.   From a performance perspective, the port is on par with Linux KVM on some of the simpler, more abstract benchmarks.  No formal testing, such as SPECvirt, has been performed yet to get a more realistic understanding of the performance.

Notably, and deliberately, there is no guest memory overcommit, no KSM providing memory de-duplication and no nested virtualisation - the latter being a feature I really enjoy in my test lab.

It's not all about missing features though: the illumos KVM port has added CPU performance counter virtualisation and implements all KVM timers using the cyclic subsystem.   Of course, illumos has ZFS as its native filesystem, and for a hypervisor platform ZFS brings some great features.

From a security perspective, the QEMU guest process runs inside a local zone, which gives further isolation of the guest from the underlying system as well as providing resource management, I/O throttling, billing and instrumentation hooks, etc.   This is a nice approach; Linux tackles the same problem differently, with SELinux and cgroups for example.   The container capability is exploited further for network virtualisation: a VNIC is created for each KVM guest and inherits the zone's VNIC protections, such as anti-spoofing and resource management.  Further enhancements were made in the area of kernel stats and, of course, DTrace.

This is a great addition to the KVM community; it's a different approach, and I'm certain the injection of fresh ideas and concepts will only enrich the KVM capability.

Sunday, September 4, 2011

Message-based frameworks and 'Cloud Computing'

Several years ago I thought to myself that, for many if not all enterprise environments, a messaging architecture or framework was the ideal way of performing enterprise-wide administration of heterogeneous server environments in a standardised manner.   Now, of course, we have things like the ESB (Enterprise Service Bus).  The problem is that the people who generally look after server farms don't really understand the power of an ESB, and maintaining an ESB has its own unique challenges :-)

Consider the enterprise server management problem at the moment.  We have thousands of servers and applications, including a variety of monitoring and measuring equipment and applications.  They all typically use a client-server style model to report or exchange information with a central server.  They all have their own built-in security mechanisms (if you're lucky :-) ).  They all have their own ports, requiring firewall holes punched in a variety of different ways.   All this just lends itself to complexity.

The idea many years ago was to stop doing that sort of individual rubbish and instead have everyone basically place messages into a messaging framework that would in turn handle security, priority and delivery.  Imagine how neat and tidy that would make the managed environment, not to mention the flexibility you could have.
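As a minimal sketch of what "placing a message into the framework" might look like, here is a hypothetical Python producer using the pika AMQP client; the broker location, exchange name, routing key and message fields are all my own illustrative choices, not from any particular product:

```python
import json
import pika

# Connect to the message broker (assumed to be listening on localhost).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A topic exchange lets the framework route by subject rather than by host/port.
channel.exchange_declare(exchange="ops.events", exchange_type="topic", durable=True)

# The management "message" - the framework, not the tool, worries about delivery.
event = {
    "source": "webserver01",
    "type": "disk.usage",
    "value": 92,
    "unit": "percent",
}

channel.basic_publish(
    exchange="ops.events",
    routing_key="monitoring.webserver01.disk",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),  # persist until delivered
)
connection.close()
```

The point is that the tool only knows how to describe its event; routing, security and delivery guarantees all belong to the framework.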

This doesn't just apply to monitoring and reporting tools, of course.  A messaging framework has messages travelling in multiple directions, so an authorised conversation could be used to trigger an action on a target server - e.g. start processes, create user IDs, reboot a host, and so on.  The list is limitless.
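Continuing the sketch above, a small agent on each managed host could subscribe to action messages addressed to it and carry them out.  Again the queue name, the "action" field, the allow-list and the restart command are illustrative assumptions, not part of any real product:

```python
import json
import subprocess
import pika

ALLOWED_ACTIONS = {"restart_service", "create_user"}  # only pre-authorised actions

def handle(channel, method, properties, body):
    msg = json.loads(body)
    if msg.get("action") not in ALLOWED_ACTIONS:
        # Reject anything not explicitly authorised; don't requeue it.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
        return
    if msg["action"] == "restart_service":
        # Illustrative only: the real command depends on the platform.
        subprocess.run(["systemctl", "restart", msg["service"]], check=False)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="actions.webserver01", durable=True)
channel.basic_consume(queue="actions.webserver01", on_message_callback=handle)
channel.start_consuming()
```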

So what is the link to virtualisation and cloud computing?   Imagine the power of a messaging framework in such an environment.  An application could create a message to spawn new server instances, and it could be routed through to the application that can actually clone systems from templates (yes, of course there will be business logic wrapped around this for a variety of reasons).  A business-aware event management system might see that a VM is unresponsive from an application perspective and choose to reboot it.    You do not want each application interfacing directly with your cloud management infrastructure; you want an event/action message created, appropriately approved and subsequently delivered to an application that will perform the desired action correctly and reliably.

Sounds good, so what?   Well, it's beginning to appear.  While not technically what I would consider an open system, VMware now has AMQP plugins for their orchestration tool (vCO).   This allows you to achieve the scenario I mentioned above: an application doesn't need to know how to provision new servers, it simply posts a message that triggers the workflow and receives a message back when it is complete.  The same could be done as a result of a monitoring event.
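That "post a message, get a message back when it's done" interaction is essentially the classic AMQP request/reply pattern.  A rough sketch of the requesting side, again using pika with made-up queue names and template values (not the actual vCO plugin API), might look like this:

```python
import json
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Queue the provisioning/orchestration workflow is assumed to consume from.
channel.queue_declare(queue="provisioning.requests", durable=True)

# Private, auto-named queue where the workflow will post its result.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue
correlation_id = str(uuid.uuid4())

request = {"template": "web-small", "count": 2, "requested_by": "billing-app"}
channel.basic_publish(
    exchange="",
    routing_key="provisioning.requests",
    body=json.dumps(request),
    properties=pika.BasicProperties(reply_to=reply_queue, correlation_id=correlation_id),
)

# Block until the workflow posts a completion message back to our reply queue.
for method, properties, body in channel.consume(queue=reply_queue):
    if properties.correlation_id == correlation_id:
        print("provisioning finished:", json.loads(body))
        channel.basic_ack(method.delivery_tag)
        break
```

The requesting application never touches the cloud management infrastructure directly; it only ever sees messages.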

The oVirt node (upstream of RHEV-H) has Apache Qpid as part of the stack.  In this case the goal is to implement the Qpid Management Framework (QMF), QMFv2 to be more precise.   The architecture behind QMFv2 is basically what I was describing above.

In the case of oVirt, the intent is that the Matahari upstream project will provide a set of management agents that utilise QMF and the underlying Qpid (AMQP) messaging system - neat stuff!

If you're designing cloud APIs and management tools - think messaging, think messaging frameworks, and let someone else do the heavy lifting for you.