
Hadoop + GPU: Boost performance of your big data project by 50x-200x?

Vladimir Starostenkov | June 25, 2013
Hadoop, an open source framework that enables distributed computing, has changed the way we deal with big data.

* Creating native code to access the GPU. Native code is a good choice for complex mathematical computations that require a powerful GPU; the resulting performance will be much higher than with solutions that rely on bindings and connectors. However, if you need to deliver a solution in the shortest time possible, you can opt for a framework like Aparapi. Then, if its performance does not satisfy you, the Aparapi code can be partially or completely replaced with native code. The resulting product will be considerably faster but also much less flexible.
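To make the trade-off concrete, here is a minimal sketch in plain Java (no GPU libraries; the class and method names are illustrative) of the kind of data-parallel loop that Aparapi or native CUDA/OpenCL code would offload. Each output element depends only on its own index, so every iteration can run as an independent GPU work-item. With Aparapi, the loop body would move into a Kernel.run() method with the index supplied by getGlobalId(); with native code, it would become a hand-written OpenCL or CUDA kernel.

```java
// Illustrative example: a data-parallel computation suited to GPU offload.
// Each iteration is independent, which is exactly what lets Aparapi (or a
// hand-written OpenCL/CUDA kernel) execute one iteration per GPU work-item.
public class SaxpyExample {

    // y[i] = a * x[i] + y[i] — the classic SAXPY kernel.
    // In Aparapi this loop body would become Kernel.run(),
    // with i obtained from getGlobalId().
    static void saxpy(float a, float[] x, float[] y) {
        for (int i = 0; i < x.length; i++) {
            y[i] = a * x[i] + y[i];
        }
    }

    public static void main(String[] args) {
        float[] x = {1f, 2f, 3f};
        float[] y = {10f, 20f, 30f};
        saxpy(2f, x, y);
        System.out.println(java.util.Arrays.toString(y)); // [12.0, 24.0, 36.0]
    }
}
```

The CPU loop above is also the baseline you would keep for correctness testing after porting the kernel to the GPU.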

You can use the C-language API (with Nvidia CUDA or OpenCL) to create native code that enables Hadoop to use the GPU via JNA (if your application is written in Java) or Hadoop Streaming (if your application is written in C).
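To illustrate the Hadoop Streaming side of this, below is a minimal word-count mapper. Streaming runs any executable: the framework pipes each input record to the program's stdin, one line at a time, and reads tab-separated "key, value" pairs back from stdout. A native C binary that offloads its per-record work to the GPU via CUDA or OpenCL would plug into exactly the same slot; the sketch is in Java only for brevity, and the class name is illustrative.

```java
// A minimal Hadoop Streaming mapper (word count). Hadoop Streaming pipes
// each input record to the program's stdin and reads "key<TAB>value" pairs
// back from stdout. The same contract applies to a native C/CUDA binary.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class StreamingMapper {

    // Turn one input line into tab-separated "word\t1" records.
    static String mapLine(String line) {
        StringBuilder out = new StringBuilder();
        for (String word : line.trim().split("\\s+")) {
            if (!word.isEmpty()) {
                out.append(word).append('\t').append('1').append('\n');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.print(mapLine(line));
        }
    }
}
```

The job would be submitted with the hadoop-streaming jar, passing this program as the -mapper; the exact invocation depends on your Hadoop version and cluster setup.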

GPU-Hadoop frameworks
You can also investigate custom GPU-Hadoop frameworks that were created after the Mars project was launched. These include Grex, Panda, C-MR, GPMR, Shredder, StreamMR, and others. However, most of them are no longer supported and were built for particular scientific projects. That means you can hardly apply, say, a Monte Carlo simulation framework to a bioinformatics project based on different algorithms.

Processor technologies are evolving as well. You can see revolutionary new architectures in the Sony PlayStation 4, Adapteva's many-core microprocessor, the Mali GPU by ARM, and others. Both the Adapteva chip and the Mali GPU will be compatible with OpenCL.

Intel has also launched the Xeon Phi co-processor, which works with OpenCL, too. It is a 60-core co-processor with an x86-compatible architecture that connects over PCI Express. It delivers 1 TFLOPS of double-precision performance while drawing just 300 watts. This co-processor is already used in Tianhe-2, the most powerful supercomputer so far.

Still, it is hard to tell which architecture will become mainstream in high-performance and distributed computing. As these architectures evolve, and some of them certainly will, they may change our understanding of how huge arrays of data should be processed.

Vladimir Starostenkov is a senior R&D engineer at Altoros Systems, a company that focuses on accelerating big data projects and platform-as-a-service enablement. He has more than five years of experience in implementing complex software architectures, including data-intensive systems and Hadoop-driven applications. With a strong background in computer science, Vladimir is interested in artificial intelligence and machine learning algorithms.

