5 Hadoop analytic tools to bring value to your big data
Posted on: Dec 16, 2015

Having lots of data is one thing, but getting value from it is the real goal, so make the most of your data with these tools.

The worlds of big data and Hadoop are so closely entwined that it is worth remembering that they are not one and the same.

However, given Hadoop's popularity, a large number of analytics tools have been developed to help businesses get value from the data stored in it.

Hadoop, the Java-based programming framework, is designed to support the processing of vast amounts of data across a distributed computing environment.

CBR identifies some of the main tools for making the most out of your data.

1. Apache Spark

Often used as a foundation on which to build analytics tools, Spark is an open source processing engine built for speed, ease of use and sophisticated analytics.

Spark enjoys a huge amount of backing, with over 750 contributors from more than 200 organisations working to develop and advance it.

Companies such as Hortonworks and IBM have been busy integrating Spark capabilities into their big data platforms, and it could be set to become the default analytics engine for Hadoop.
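
As a rough illustration of that ease of use, here is a minimal word count in Spark's Java API (Spark 1.x style, contemporary with this article); the HDFS paths are hypothetical and it assumes spark-core is on the classpath. This is a sketch, not a definitive implementation.

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SparkWordCount {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("SparkWordCount");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Read a text file from HDFS and split each line into words.
            JavaRDD<String> words = sc.textFile("hdfs:///input/articles.txt")
                    .flatMap(line -> Arrays.asList(line.split("\\s+")));

            // Count occurrences of each word in parallel across the cluster.
            JavaPairRDD<String, Integer> counts = words
                    .mapToPair(word -> new Tuple2<>(word, 1))
                    .reduceByKey((a, b) -> a + b);

            counts.saveAsTextFile("hdfs:///output/wordcounts");
            sc.stop();
        }
    }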

2. IBM BigInsights

Pitched by IBM as offering the best of open source software with enterprise-grade capabilities, BigInsights helps to both manage and analyse your big data.

It offers a Data Scientist module aimed at the highest levels of data analysis, along with visualisation and developer tools.

The company has also made it available in the cloud to add more flexibility to the offering.

3. Kudu

Kudu aims to provide fast analytical and real-time capabilities; it's a storage system for tables of structured data that is designed to enable real-time analytic applications in Hadoop.

Released by Cloudera in September 2015, it was three years in the making and was created to complement the likes of Apache HBase and HDFS.

One of its benefits is that it supports both low-latency random access and high-throughput analytics, which simplifies Hadoop architectures for real-time use cases.
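
To give a flavour of that combination, the sketch below inserts a row through Kudu's Java client so that it becomes available to analytic scans; the master address, table name and columns are all hypothetical, and the package names follow the later Apache naming rather than the original org.kududb packages.

    import org.apache.kudu.client.Insert;
    import org.apache.kudu.client.KuduClient;
    import org.apache.kudu.client.KuduSession;
    import org.apache.kudu.client.KuduTable;

    public class KuduInsertExample {
        public static void main(String[] args) throws Exception {
            // Connect to a (hypothetical) Kudu master.
            KuduClient client =
                    new KuduClient.KuduClientBuilder("kudu-master:7051").build();
            try {
                // Table and columns are illustrative; the table must already exist.
                KuduTable table = client.openTable("metrics");
                KuduSession session = client.newSession();

                // Insert one row of structured data.
                Insert insert = table.newInsert();
                insert.getRow().addString("host", "web-01");
                insert.getRow().addLong("ts", System.currentTimeMillis());
                insert.getRow().addDouble("cpu", 0.42);
                session.apply(insert);

                session.close();
            } finally {
                client.shutdown();
            }
        }
    }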

4. MapReduce

The heart of Hadoop, MapReduce is a programming model that processes and generates large data sets with a parallel, distributed algorithm on a cluster.
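
The model is easiest to see in the canonical word count example: the map phase emits (word, 1) pairs and the reduce phase sums them per word. Below is a minimal sketch of the two phases using Hadoop's org.apache.hadoop.mapreduce API; the job driver and input/output setup are omitted.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCountMR {

        // Map phase: emit (word, 1) for every word in this task's input split.
        public static class TokenMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }

        // Reduce phase: sum the counts emitted for each distinct word.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            public void reduce(Text key, Iterable<IntWritable> values,
                               Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }
    }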

Its position at the heart of Hadoop may be coming to an end as it is surpassed by Spark, with companies like Cloudera looking to make Spark the default data processing framework for Hadoop.

Google itself has stopped using the technology, creating its own framework, Dataflow, in its place.

Despite this move away from MapReduce, it still offers a huge amount of processing power that can scale to hundreds of thousands of servers across a Hadoop cluster.