The Hedvig Distributed Storage Platform consists of three components:
- Hedvig Storage Service: A patented distributed-systems engine that scales storage performance and capacity with off-the-shelf x86 and ARM servers. The Hedvig Storage Service can be run on-premises or on public clouds like AWS, Azure, and Google. It delivers all of the storage options and capabilities required for an enterprise deployment, including inline deduplication, inline compression, snapshots, clones, thin provisioning, autotiering, and caching.
- Hedvig Storage Proxy: A lightweight VM or container that enables access to the Hedvig Storage Service via industry-standard protocols. Hedvig currently supports NFS for file and iSCSI for block, as well as OpenStack Cinder and Docker drivers. The Hedvig Storage Proxy also enables client-side caching and deduplication with local SSD and PCIe flash resources for fast local reads and efficient data transfers.
- Hedvig APIs: REST and RPC-based APIs for both object storage and Hedvig operations. Hedvig currently supports Amazon S3 and Swift for object storage. Developers and IT operations admins can use the management APIs to enable access to all Hedvig storage features to automate provisioning and management with self-service portals, applications, and clouds.
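Hedvig does not publish the internals of its deduplication engine, but the general technique behind inline deduplication is straightforward: split incoming data into chunks, fingerprint each chunk, and write a chunk to disk only the first time its fingerprint is seen. The following is a minimal, illustrative Python sketch using fixed-size chunks and SHA-256 fingerprints; the chunk size, hash choice, and in-memory "store" are assumptions for illustration, not Hedvig's actual implementation.

```python
import hashlib

def dedup_chunks(data: bytes, chunk_size: int = 4096):
    """Split a byte stream into fixed-size chunks and keep each unique
    chunk once, keyed by its SHA-256 digest. Returns the chunk store and
    the ordered digest list ("recipe") needed to rebuild the stream."""
    store = {}   # digest -> chunk bytes (written only once per unique chunk)
    recipe = []  # ordered digests describing the original stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # skip the write if already stored
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original byte stream from the store and recipe."""
    return b"".join(store[d] for d in recipe)

# Example: 16 KB of data with repeated content dedupes to two stored chunks.
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedup_chunks(data)
assert reconstruct(store, recipe) == data
```

Because three of the four 4 KB chunks in the example are identical, only two unique chunks are actually stored; the recipe preserves the original order. Production systems layer content-defined chunking, compression, and persistent metadata on top of this basic idea.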
Hedvig supports hyperconvergence by bundling the Hedvig Storage Proxy and the Hedvig Storage Service as virtual appliances running on a commodity server with a hypervisor or container OS. For hyperscale, the Hedvig Storage Service is deployed on bare-metal servers to form a dedicated storage tier while the Hedvig Storage Proxy is deployed as a VM or container on each server at the compute tier.
Why choose hyperscale for storage
Data is growing far faster than storage budgets. The economics are crippling for enterprises that do not have the resources of Internet goliaths like Amazon, Google, and Facebook. Thus, enterprises must embrace software-defined and commodity-based storage to reduce costs and maintain the flexibility and scalability needed to keep up with business requirements.
At Hedvig, we've noticed that about 80 percent of the time, customers choose a hyperscale architecture rather than a hyperconverged one, even though we support both. What's even more interesting is that many of our customers come to us thinking the exact opposite: about 80 percent initially request a hyperconverged solution, but after they do their homework, they opt for the hyperscale approach.
Why? In a nutshell, because they favor flexibility (or agility, if you must use that term) above all else when architecting their infrastructure. Consider the following:
- A hyperconverged system offers a simplified "building block" approach to IT. For lean IT organizations looking to lower the overhead of deploying and expanding a cloudlike infrastructure, hyperconvergence provides a good solution. But it requires a relatively predictable set of workloads where "data locality" is a top priority, meaning the application or VM must sit as close to its data as possible. This is why VDI has been a poster child for hyperconvergence: users want their "virtual C: drive" local. The trade-off is flexibility, because compute and storage must scale in lockstep.
- A hyperscale system keeps storage independent of compute, enabling enterprise IT to scale capacity when the business requires. The hyperscale approach to data center and cloud infrastructure offers a high level of elasticity, helping organizations rapidly respond to changing application and data storage needs. It's also an architecture that better matches modern workloads like Hadoop and NoSQL, as well as those architected with cloud platforms like OpenStack and Docker. All of these are examples of distributed systems that benefit from independently scaled shared storage.