
Hadoop + GPU: Boost performance of your big data project by 50x-200x?

Vladimir Starostenkov | June 25, 2013
Hadoop, an open source framework that enables distributed computing, has changed the way we deal with big data.

A 2010 Intel study provides performance results for 14 representative use cases. According to Intel's figures, the often-cited 10x-1,000x increase in performance per single node is hard to achieve — something around 2.5x is more realistic. The total improvement for a cluster may be even smaller.

So, since transferring data to and from the GPU can be slow, the ideal use case is one where the amount of input/output data per GPU is small relative to the amount of computation performed. Two conditions must hold: first, the type of task should match the GPU's capabilities; second, the task must be divisible into parallel, independent sub-processes with Hadoop.

Examples of such tasks include evaluating complicated mathematical formulas (e.g., matrix multiplication), generating large sets of random values, similar scientific modeling tasks, and other general-purpose GPU applications.
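To see why matrix multiplication fits these criteria, consider a minimal CPU-only sketch in plain Java (the class and method names here are illustrative, not part of any Hadoop or GPU API). Each output row depends only on one row of the first matrix and all of the second, so the rows are exactly the kind of parallel, independent sub-processes described above — computed on CPU cores here, but equally mappable to per-thread GPU kernels.

```java
import java.util.stream.IntStream;

// Illustrative sketch: matrix multiplication split into
// independent per-row subtasks, the pattern that maps well to GPUs.
public class MatMulSketch {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        // Each result row is independent of the others, so the rows
        // can be computed in parallel without any synchronization.
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < m; j++) {
                double sum = 0;
                for (int p = 0; p < k; p++) {
                    sum += a[i][p] * b[p][j];
                }
                c[i][j] = sum;
            }
        });
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = multiply(a, b);
        System.out.println(c[0][0] + " " + c[0][1]); // 19.0 22.0
        System.out.println(c[1][0] + " " + c[1][1]); // 43.0 50.0
    }
}
```

Note the favorable compute-to-I/O ratio: for n-by-n matrices, the input and output are O(n^2) values while the work is O(n^3) operations, so the larger the matrices, the less the data-transfer cost matters.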

Tools to use
To create a prototype and accelerate your big data system using Hadoop coupled with GPUs, you have to use some libraries or bindings that allow for accessing a GPU. Today, the main tools you can use to employ the GPU's capabilities are as follows:

* JCUDA. The JCUDA project provides Java bindings for Nvidia CUDA and related libraries, such as JCublas (Java bindings for CUBLAS, the CUDA linear algebra library), JCusparse (a library for working with sparse matrices), JCufft (Java bindings for CUFFT, the CUDA fast Fourier transform library), JCurand (a library for generating random numbers on the GPU), etc. However, these bindings work only with Nvidia GPUs.

* Java Aparapi. Aparapi converts Java bytecode to OpenCL at runtime and executes it on a GPU. Among the approaches that combine GPU computation with Hadoop, Aparapi and the OpenCL route seem to have the best long-term prospects. Aparapi was developed by AMD JavaLabs, a laboratory of AMD. Released as an open-source product in 2011, the project is growing rapidly. You can take a look at some real-life use cases for this technology at the official website of the AMD Fusion Developer Summit conference.
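To give a feel for the Aparapi programming model, here is a minimal sketch, assuming the Aparapi jar (`com.amd.aparapi` package, as used in the 2011-era releases) is on the classpath; the array and class names are illustrative. You express the per-element work by overriding `Kernel.run()`, and Aparapi translates the bytecode to OpenCL at execution time.

```java
import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

// Illustrative Aparapi sketch: square each element of an array.
// Aparapi translates run() to OpenCL and executes it on the GPU;
// if no OpenCL device is available, it falls back to a Java thread pool.
public class SquareKernelSketch {
    public static void main(String[] args) {
        final float[] in = new float[1024];
        final float[] out = new float[1024];
        for (int i = 0; i < in.length; i++) {
            in[i] = i;
        }

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                // getGlobalId() identifies this work item, i.e. one
                // independent subtask in the parallel decomposition.
                int gid = getGlobalId();
                out[gid] = in[gid] * in[gid];
            }
        };
        kernel.execute(Range.create(in.length)); // one work item per element
        kernel.dispose();
    }
}
```

The key design point is that the kernel body is ordinary Java: there is no separate OpenCL C source to maintain, which makes this style easy to drop into the map phase of a Hadoop job.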

OpenCL is an open, cross-platform standard supported by a large number of hardware vendors that allows for writing a single code base targeting both the CPU and the GPU. If no GPU is installed on a particular machine, OpenCL falls back to executing the code on its CPU.

The standard is being developed by the Khronos Group, an industry consortium of around 100 companies, including AMD, Intel, Nvidia, Altera, and Xilinx. Code written with this framework can be executed on CPUs from the supported vendors (AMD and Intel), as well as on GPUs manufactured by AMD and Nvidia. New solutions compatible with OpenCL appear every year, which is a big advantage.

