This vendor-written piece has been edited by Executive Networks Media to eliminate product promotion, but readers should note it will likely favour the submitter's approach.
It's that time of year again, when we look ahead and ask which tech trends will make the most significant impact in 2016. Here are a few I see playing out in the year ahead:
Prediction 1: Containers will become enterprise-grade
Once the preserve of web-scale companies such as Facebook and Netflix, containers have been generating considerable interest as an agile and effective method for building next-generation mobile, web, and big data applications. In 2016, container technologies will evolve as functionality is added to make them suitable for enterprise application deployments.
Enterprise-grade containers will need to address two important challenges. First, containers need to support run-time data persistence. Stateless web-scale applications do not need this: if a container in a social networking application fails, it can simply be recreated. Persistence, however, is vital for a consistent enterprise-grade service in which all data is recoverable in the event of a fault. The solution is to build existing enterprise-grade storage products into the container specification, via both existing protocols and new container-specific abstractions.
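The difference between the two models can be sketched in a few lines of Python (a toy analogue, not a container runtime): a stateless "container" keeps its state only in memory and loses it when recreated, while one backed by a persistent volume recovers its state after a failure.

```python
import json
import tempfile
from pathlib import Path

class StatelessContainer:
    """Ephemeral web-scale style container: state lives only in memory."""
    def __init__(self):
        self.orders = []

    def record(self, order):
        self.orders.append(order)

class PersistentContainer:
    """Enterprise style container: state written to a volume that
    outlives any single container instance."""
    def __init__(self, volume: Path):
        self.volume = volume
        self.orders = json.loads(volume.read_text()) if volume.exists() else []

    def record(self, order):
        self.orders.append(order)
        self.volume.write_text(json.dumps(self.orders))

volume = Path(tempfile.mkdtemp()) / "orders.json"

# Both containers record an order, then "fail" and are recreated.
c1 = StatelessContainer()
c1.record("order-42")
c1 = StatelessContainer()          # recreated: in-memory state is gone

c2 = PersistentContainer(volume)
c2.record("order-42")
c2 = PersistentContainer(volume)   # recreated: state recovered from the volume

print(len(c1.orders))  # 0
print(len(c2.orders))  # 1
```

For the stateless social-media case, losing `c1.orders` is acceptable because the container can be rebuilt from scratch; for the enterprise case, the volume-backed state is what makes the service consistent across faults.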
Second, containers will need to mature to support enterprise security and governance requirements. At present, containerised architectures delivered as microservices are not fully enterprise-ready: there is little or no concept of audit, trust or validation, and even something as basic as an enterprise firewall does not yet exist for containers. We expect this to be remedied in 2016 as enterprise-grade security and governance features are re-envisioned and applied natively to the container specification.
Prediction 2: Big data and real-time analytics will come together
In 2016, we will see a new chapter open in the world of big data analytics as a two-tier model emerges. Tier one will comprise 'traditional' big data analytics: large-scale data analysed in non-real-time. The new second tier will comprise relatively large data sets analysed in real time, courtesy of in-memory analytics. In this new phase of big data, technologies such as DSSD, Apache Spark and GemFire will be every bit as important as Hadoop. The second tier represents a new and exciting way of using data lakes: on-the-fly analysis that influences events as they happen, giving businesses a level of control and agility simply not seen before.
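Setting the Spark and GemFire APIs aside, the contrast between the two tiers can be sketched in plain Python: tier one computes an aggregate over the complete data set after the fact, while tier two maintains a running, in-memory aggregate that is queryable as each event arrives.

```python
from typing import Iterable

def batch_average(events: Iterable[float]) -> float:
    """Tier one: non-real-time analysis over the complete data set."""
    events = list(events)
    return sum(events) / len(events)

class RunningAverage:
    """Tier two: in-memory aggregate, updated and queryable per event."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def observe(self, value: float) -> float:
        self.count += 1
        self.total += value
        return self.total / self.count   # answer available on the fly

stream = [10.0, 20.0, 30.0]

rt = RunningAverage()
live_answers = [rt.observe(v) for v in stream]   # [10.0, 15.0, 20.0]
final = batch_average(stream)                    # 20.0
```

The batch answer only exists once all the data has landed; the in-memory answer is available after every event, which is what lets the second tier influence events as they happen.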
For in-memory analytics to live up to its promise, however, two things need to happen. First, the underlying technologies need to develop so there is enough memory and space to house large-scale data sets. Second, thought needs to be given to how data can be moved efficiently between big object stores and the in-memory tier. The two operate on radically different performance curves, and IT teams will need to manage the demarcation point so that data can move back and forth at speed, transparently. Work is currently under way on new object stores, rack-scale flash storage, and technology to make them work together as a system; open source initiatives will play an important role in meeting this challenge.
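A minimal sketch of that demarcation point, assuming a simple read-through design (the class and policy here are illustrative, not any particular product's): hot objects are promoted from a capacious but slow object store into a bounded in-memory tier, and the least-recently-used object is demoted when the memory tier fills.

```python
from collections import OrderedDict

class TieredStore:
    """Toy demarcation point between a large object store (slow,
    capacious) and an in-memory tier (fast, strictly bounded)."""
    def __init__(self, object_store: dict, memory_slots: int):
        self.object_store = object_store
        self.memory = OrderedDict()      # in-memory tier, kept in LRU order
        self.memory_slots = memory_slots

    def get(self, key):
        if key in self.memory:               # hot path: served from memory
            self.memory.move_to_end(key)
            return self.memory[key]
        value = self.object_store[key]       # cold path: fetch from object store
        self.memory[key] = value             # promote across the demarcation point
        if len(self.memory) > self.memory_slots:
            self.memory.popitem(last=False)  # demote least-recently-used object
        return value

store = TieredStore({"a": 1, "b": 2, "c": 3}, memory_slots=2)
store.get("a"); store.get("b"); store.get("c")   # "a" falls back to the object store
```

Callers see one `get` interface; the movement between tiers is transparent, which is the property the IT teams above will need at far larger scale and speed.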