To a purist or a scientist, this is a bare-metal hypervisor, but its implementation model dictates that it will not create partitions that share cores, I/O ports or I/O slots. That is our way of providing containment, so there is no longer a single point of failure.
What was the impetus behind the creation of this technology?
We had conducted a survey in 2005-2006 to find a virtualization technology that would give us the characteristics of partitioning, so that we could deliver it against mission-critical workloads. Not finding one led us to build the technology ourselves. It was built first for our proprietary mainframe technology, the ClearPath suite, and it has been shipping on the ClearPath suite since 2010. It was built for enterprise-level, mission-critical, highly sensitive workloads and it has proven itself in this space.
Nowadays, it's about what applications you want to run. For these, let's just create containers of the right size and drop them in there. If they happen to be Linux, that's fine; if it's Windows, that's fine; if it's one of our proprietary OSes, that's fine. The beauty of this is that any system assets you free up can be re-provisioned as anything. You could take something that is running our ClearPath MCP OS, decommission it, and re-commission it as a Windows environment or whatever you need it to be. If you look at some of the converged infrastructure players with their mix of Power, Xeon and Itanium, consider that Itanium blades will run HP-UX, Power blades will run AIX and Xeon blades will run everything else. So if you decommission some Power workload, those blades aren't doing anything else. This goes back to (why we are providing) the uniform infrastructure concept, where customers are willing to buy into a uniform infrastructure that gives them great flexibility in how they deploy it.
This ability to repurpose must be the main USP of the solution.
Yes, I absolutely think so. I don't see anything like this in the marketplace. Again, we are not doing this on a blade-based architecture either, so it can be as big or as small as you need it to be. Sometimes just scaling things up changes the performance dynamic of the system to the point where it is better than it was before.
Our hallmark has always been "change the tech without impacting the customer's investment in software". There is no better example of this than our proprietary environment systems. Take the MCP, which came from Burroughs, and the OS 2200, which came from Sperry Univac - both very old (environments). We can run the same object code that runs on proprietary processors on Intel Xeon processors with no recompilation, no reformatting of the data, and you (still) get performance. It's no mean feat to change the processing architecture out from underneath the software stack and have it continue to run in a very transparent way. But this goes back to protecting the customer's investments. I can run his code for as long as he wants to run that code in that form. If he wants to recompile, that's wonderful and we can give him better performance. If he wants to migrate pieces of it around the complex and to different environments, so be it. We can go ahead and do that, as we have the infrastructure in place to support it.