The History of Apache Hadoop's Support for Amazon S3
Posted on: Oct 11, 2016

Hadoop’s ability to work with Amazon S3 storage goes back to 2006 and the issue HADOOP-574, “FileSystem implementation for Amazon S3”. This filesystem client, “s3://”, implemented an inode-style filesystem atop S3: it could store files bigger than S3 itself then supported, and some of its operations (directory rename and delete) were fast. The s3 filesystem allowed Hadoop to be run in Amazon’s EC2 infrastructure, using S3 as the persistent store of work. This piece of open source code predated Amazon’s release of EMR, “Elastic MapReduce”, by over two years. It’s also notable as the piece of work which gained Tom White, author of “Hadoop: The Definitive Guide”, committer status.

A weakness of the s3:// filesystem client was that it wasn’t compatible with any other form of data stored in S3: there was no easy way to write data into S3 for Hadoop MapReduce to read, or for the results to be written back. (Remember: at the time, Hadoop meant MapReduce only.) This was addressed in 2008 by the HADOOP-931 work and the “S3 Native Filesystem” client, “S3N”. S3N paths are URLs which begin with “s3n://”, followed by the name of the S3 bucket and the path underneath it. S3N made collecting data for Hadoop-in-EC2 clusters easier, and allowed the output of work to be published directly for other applications. Since then, s3n:// has been the ubiquitous prefix on URLs used when Apache Hadoop reads data from S3.
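As a rough sketch of how an s3n:// URL is consumed through the standard Hadoop FileSystem API: the scheme selects the S3N client, the authority names the bucket, and the remainder is the object path. The bucket name below is hypothetical, and AWS credentials are assumed to be configured separately (for example in core-site.xml).

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3NListing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // "example-bucket" is a placeholder bucket name; credentials are
            // expected to come from the Hadoop configuration, not this code.
            FileSystem fs = FileSystem.get(URI.create("s3n://example-bucket/"), conf);
            for (FileStatus status : fs.listStatus(new Path("s3n://example-bucket/logs/"))) {
                System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
            }
        }
    }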

Notably, though, Amazon’s EMR does not use it: EMR has its own “s3:” scheme, which resembles S3N but has a closed-source implementation underneath. Amazon have done some good work there, and the Apache code has lagged.

The S3N code has been relatively stable since 2008, with intermittent updates to the underlying jets3t library. It didn’t get much attention, however, and the functionality of jets3t slowly fell behind that of Amazon’s own SDK, which added better authentication, advanced upload operations, and more. There was also work going on by Hadoop-in-cloud users such as Netflix, whose S3mper code addressed S3’s eventual consistency problem, allowing S3 to be used as the direct output destination of analytics jobs.

In the meantime, other object stores have gained Hadoop support. In 2013, a joint effort between ourselves, Rackspace, and Mirantis added OpenStack Swift support. A collaboration between Microsoft and Hortonworks added Azure WASB support; this shipped with HDInsight and is now bundled with Hadoop. Meanwhile, S3N continued to work, receiving mostly bug fixes.

In late 2014, a successor to S3N was introduced: S3A (HADOOP-10400). This was a great community contribution by Jordan Mendelson of Common Crawl; S3A has since become the focus of all work on S3 integration and performance. First shipping in Hadoop 2.6, its teething problems had been stabilized by Hadoop 2.7 (HADOOP-11571), when it also arrived in HDP 2.3.

Since then, it’s gone from strength to strength, with work by the Hortonworks team, suppliers of S3-compatible filesystems, users wanting specific features (encryption, new authentication mechanisms), and others. We’ve been very busy in this code, targeting HDP-2.5 and Hadoop 2.8.

The list of what’s new in S3A is long and includes:

  • Support for IAM and environment variable authentication.
  • Encryption of AWS secrets in Hadoop credential files.
  • Metric collection for performance diagnostics.
  • Better error reporting.
  • Better scalability.

Most of all, though, there have been many, many performance improvements for applications using S3A to work with data in S3. That includes Apache Hive, Apache Spark, and any other application depending on the Hadoop libraries for its S3 integration.
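As a minimal sketch of what using S3A looks like from application code, assuming a hypothetical bucket and inline fs.s3a.access.key / fs.s3a.secret.key settings (in a real deployment, IAM roles, environment variables, or an encrypted Hadoop credential file would be used instead, as noted in the list above):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3AReadExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder credentials: prefer IAM roles, environment variables,
            // or a JCEKS credential file over plain-text keys in real use.
            conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY");
            conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");

            // The s3a:// scheme selects the S3A client; the bucket is hypothetical.
            FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
            try (FSDataInputStream in = fs.open(new Path("s3a://example-bucket/data/sample.csv"))) {
                byte[] buffer = new byte[4096];
                int read = in.read(buffer);
                System.out.println("Read " + read + " bytes from S3A");
            }
        }
    }

The same FileSystem calls work unchanged against s3n:// paths; only the URL scheme and the credential property names differ, which is a large part of why applications could move to S3A with so little effort.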

Where next? We’ve done a lot on the read pipeline, but there’s more to do, including profiling Hive and Spark to identify small improvements which eventually add up to large ones. Azure storage is improving with the Azure Data Lake connector, while a new connector is adding support for the Aliyun Object Store, popular in China. These are both things we are keeping an eye on.

We’re currently turning our attention to the S3 write pipeline, where we hope that one piece of work, S3Guard, will allow us to deliver high-performance job output. If we can do that, then we will be well on the way to making Open Source Apache Hadoop a first-class citizen on Amazon AWS, the way it is in Azure.