

Implementing an in-memory data management environment: how to start, what to know

Lee Chew Tan, Vice President & General Manager, HP Servers, Enterprise Group, Hewlett-Packard Asia Pacific and Japan | Aug. 21, 2015
Here's how businesses can turn information into business advantage in real time.

Let's look at three requirements that should be part of your strategic migration plan.

#1: Levels of Service
Can the solution provide mission-critical capabilities and end-to-end services and support?

As you move to more advanced workloads, you take on mission-critical needs that call for the capabilities found on high-end UNIX systems: fault-tolerant architecture, very high I/O bandwidth, and self-healing analytics and automation. Make sure these capabilities are built into your system to keep downtime to a minimum and disaster recovery as fast as possible.

For example, hard partitioning technology significantly increases system reliability and agility. Each hard partition has its own independent CPU, memory and I/O resources as part of the blades that make up the partition, and the partitions are tied together through a fault-tolerant crossbar, so a fault in one part of the system has no impact on the rest. In terms of agility, these partitions enable you to run CRM, ERP and BW solutions on a single system – a significant step towards realising the 'real time' enterprise.
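The fault-isolation property described above can be sketched in a few lines of illustrative Python: each hard partition owns its own CPU, memory and I/O budget, and a fault in one partition leaves the others untouched. The class and attribute names here are hypothetical, invented for the sketch, and are not part of any vendor API.

```python
from dataclasses import dataclass

@dataclass
class HardPartition:
    """One electrically isolated partition with dedicated resources."""
    name: str        # the workload it hosts, e.g. CRM, ERP or BW
    cpus: int
    memory_gb: int
    healthy: bool = True

class PartitionedSystem:
    """A single chassis hosting several hard partitions.

    A fault is confined to the partition it occurs in: the other
    partitions keep their own CPUs, memory and I/O and keep running.
    """
    def __init__(self) -> None:
        self.partitions: dict[str, HardPartition] = {}

    def add(self, part: HardPartition) -> None:
        self.partitions[part.name] = part

    def fault(self, name: str) -> None:
        # Simulate a hardware fault inside one partition only.
        self.partitions[name].healthy = False

    def running(self) -> list[str]:
        return [n for n, p in self.partitions.items() if p.healthy]

system = PartitionedSystem()
for name in ("CRM", "ERP", "BW"):
    system.add(HardPartition(name, cpus=16, memory_gb=512))

system.fault("ERP")
print(system.running())  # the CRM and BW partitions are unaffected
```

In a soft-partitioned or virtualised design, by contrast, the failed component could take shared resources down with it; the point of hard partitioning is that isolation is enforced in hardware, not modelled in software as it is here.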

Even with mission-critical capabilities built in, it's wise to augment your system with end-to-end services that enable speedy deployment, smooth migration, and business continuity. Data management requires 24/7 support so you can proactively prevent issues, maximise system performance, and accelerate problem resolution.

#2: Performance and Scale
Can the solution handle my needs?

To get the most from your in-memory data management environment, you need application-optimised systems that are purpose-built for high performance and high availability. These systems should have a scalable architecture that protects your current investment and lets you expand as needed into the future. Much of the industry scales up to 6 TB, but if you are looking to really use SAP HANA, for instance, for your largest business application environments, a ceiling of 4 or 6 TB of in-memory computing doesn't address your needs: it's not uncommon for large systems to run on databases of 20 or 30 TB. Make sure you have adequate scalability to handle data warehouse environments as you move along your big data journey.
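A rough capacity calculation makes the scalability point concrete. The sketch below is illustrative only: the compression ratio and the working-memory overhead are assumptions you would replace with figures from your own sizing exercise, not vendor-published constants.

```python
import math

def nodes_needed(source_db_tb: float,
                 compression_ratio: float = 4.0,  # assumed columnar compression
                 working_overhead: float = 2.0,   # assumed headroom for work areas
                 node_ram_tb: float = 6.0) -> int:
    """Estimate how many nodes a source database needs to fit in memory."""
    footprint_tb = source_db_tb / compression_ratio * working_overhead
    return math.ceil(footprint_tb / node_ram_tb)

# Under these assumptions, a 30 TB source database compresses to
# about 7.5 TB, needs about 15 TB with headroom, and therefore
# requires at least three 6 TB nodes.
print(nodes_needed(30))  # 3
```

The same arithmetic shows why a single 4 or 6 TB ceiling runs out quickly: once the compressed footprint plus working headroom exceeds one node's RAM, you either scale out across nodes or choose a platform with a higher per-system limit.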

And remember, not all memory is created equal. Some vendors pack their hardware with dense memory configurations that erode the performance of the memory you can actually use. Make sure systems use high-end processors that can address larger, higher-capacity memory modules, so you are productively using the increased memory size while maintaining a high level of performance.
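One simple way to compare configurations is to count how many memory modules each DIMM size needs to reach a target capacity within the slots available. The slot count and module sizes below are hypothetical examples, not any specific product's specification; note also that on some platforms, populating more DIMMs per channel reduces memory clock speed, a penalty this simple count does not capture.

```python
import math

def config_options(target_tb: int, dimm_slots: int,
                   dimm_sizes_gb=(32, 64, 128)):
    """For each DIMM size, the module count needed to reach the
    target capacity, and whether it fits in the available slots."""
    target_gb = target_tb * 1024
    options = {}
    for size_gb in dimm_sizes_gb:
        modules = math.ceil(target_gb / size_gb)
        options[size_gb] = (modules, modules <= dimm_slots)
    return options

# 6 TB in a hypothetical chassis with 96 slots: 32 GB modules don't
# fit, 64 GB modules fill every slot, 128 GB modules leave half free.
print(config_options(6, 96))
```

The configuration that leaves slots free is also the one with room to grow, which is the investment-protection argument the section makes.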

#3: Secure investment
Will the solution allow my in-memory data management environment to grow and consolidate across the entire company?

Let's face it: migrating your business applications to an in-memory data management environment will be neither easy nor inexpensive, but it will be more cost-efficient than continuing to operate sluggish, disconnected databases. And it will definitely deliver faster time to value for your business.

 

