
The real reason Microsoft open sourced .NET

Mary Branscombe | Jan. 28, 2016
DevOps, microservices, and the shift to containers and lightweight computing environments explain a lot about Microsoft’s position on .NET, open source and Nano Server.

The reason customers like Verizon give Microsoft for wanting .NET running in containers isn’t a desire to move to Linux for its own sake, according to Snover, and that leaves a definite opportunity for Windows Server.

“When you pull on that thread, what really motivated them is the desire to have a really lightweight compute environment, and the ability to stand up and restart and scale things very, very agilely,” says Snover. “This was something they were not able to achieve with a full Windows Server stack and the full .NET stack.  They will be able to do that now, with Windows Server, thanks to Nano Server and our container work.”

Moving to microservices

.NET itself is changing, as the recent name change for the open source version (from .NET Core 5 and ASP.NET 5 to .NET Core 1.0 and ASP.NET Core 1.0) underlines. .NET Core doesn’t cover as much as the full .NET 4.6 framework (it doesn’t have the server-side graphics libraries, for instance). The same goes for ASP.NET Core compared with ASP.NET 4.6: it has the Web API but not SignalR, VB or F# support yet. The newer versions don’t completely replace the current versions, although they’ll get the missing pieces in the future. They’re also built in a new way, with faster releases and more emphasis on moving forward than on avoiding breaking changes.
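To give a sense of what that slimmed-down stack looks like in practice, here is a minimal sketch of an ASP.NET Core-style Web API service. It is illustrative only: the exact package names and namespaces shifted between the prereleases and the 1.0 release, and the StatusController shown here is a hypothetical example, not anything from Microsoft or Verizon.

```csharp
// Minimal sketch of an ASP.NET Core 1.0-style Web API service.
// Assumes the Microsoft.AspNetCore.* packages; namespaces differed in earlier prereleases.
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Web API and MVC share a single framework in ASP.NET Core
    public void ConfigureServices(IServiceCollection services) => services.AddMvc();

    // Route requests to attribute-decorated controllers
    public void Configure(IApplicationBuilder app) => app.UseMvc();
}

[Route("api/[controller]")]
public class StatusController : Controller
{
    // GET api/status
    [HttpGet]
    public object Get() => new { status = "ok" };
}

public class Program
{
    public static void Main()
    {
        new WebHostBuilder()
            .UseKestrel()              // the cross-platform web server that ships with .NET Core
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}
```

Because the whole service is just a console program hosting Kestrel, it can be packaged into a small container image and restarted or scaled quickly, which is the lightweight compute model Snover describes.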

That’s the same shift you’re seeing across Microsoft. Over the last decade, building Azure has taught the company a lot about the advantages of microservices for what would otherwise be large, monolithic applications. The original service front end managed resources like compute, storage, networking and the core infrastructure components – for the whole worldwide service – in a single app. It was a large and complicated codebase, running in a single data center, and it took up to a month to release an update – after it was finished and tested – which meant it was only updated once a quarter. Plus, the management tools for all the different components were secured by a single certificate.

Rewriting that as around 25 different microservices makes it easier to develop, test and release new features. New features can be “flighted” to a test system to see how they perform, and releasing updates takes no more than three days, even though the resource providers that manage compute, storage and networking now run in individual data centers. That improves performance because there’s far less latency when, for instance, the compute used in the Azure data center in Australia is managed by a resource provider running in that same data center rather than in Texas. Putting compute and data together isn’t just faster and easier to scale; it makes things more reliable, because you’re not relying on the network between data centers for management. Limiting each microservice to operating in its own area improves security too.


