The Colossal Pipe Map Reduce Framework
Copyright © 2010-2014 Think Big Analytics, Inc. All Rights Reserved.
Colossal Pipe is a new framework for map/reduce programming in Java. It's designed to make it easy to write programs in an object-oriented style, working with POJOs. It comes with support for pipelines: determining file dependencies and running the phases needed to emit the corresponding results. The framework is still in alpha test, but we'd love you to try it and give us feedback.
Key concepts:

* a Pipe - defines a set of inter-related phases (i.e., Hadoop jobs) that produce final outputs. Represented by an instance of the class ColPipe.
* a Phase - defines a single map/reduce pair that transforms an input to an output; this will include phases that join multiple inputs to produce a single output. Represented by an instance of the class ColPhase.
* a File - defines a single input or output file, with a given record type. Represented by an instance of the class ColFile.
Creating a Pipe

Colossal Pipe uses a fluid/builder syntax much like jMock, in which you can apply multiple changes to a single object. You first need to create a pipe:

    ColPipe analysis = new ColPipe(getClass()).named("analysis");
You define the input and output files that are important to you.
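For example (a sketch: the LogRec and CountRec record classes, the paths, and the of/at builder methods are illustrative assumptions, not confirmed API):

    // LogRec and CountRec are hypothetical Avro-generated record classes.
    ColFile<LogRec> input = ColFile.of(LogRec.class).at("data/incoming");
    ColFile<CountRec> output = ColFile.of(CountRec.class).at("data/counts");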
You also need to declare the final outputs that your pipeline needs to produce:

    analysis.produces(output);
Finally you define the phases (jobs) that produce outputs from their inputs:

    ColPhase analyze = new ColPhase().map(AnalysisMapper.class).reduce(AnalysisReducer.class)
        .groupBy("id,str").sortBy("count desc").reads(input).writes(output);
Mappers and reducers in Colossal Pipe work on objects, both for input and output. A mapper looks like this (a sketch; the generic parameters, the map signature, and the record fields are assumptions):

    public static class AnalysisMapper extends BaseMapper<LogRec, CountRec> {
        @Override
        public void map(LogRec in, CountRec out, ColContext<CountRec> context) {
            // Fill in the grouping fields named in groupBy("id,str") and emit a unit count.
            out.id = in.id;
            out.str = in.str;
            out.count = 1;
            context.write(out); // ColContext and its write() call are assumed API
        }
    }
A reducer looks like this (again a sketch; the reduce signature is an assumption):

    public static class AnalysisReducer extends BaseReducer<CountRec, CountRec> {
        @Override
        public void reduce(Iterable<CountRec> in, CountRec out, ColContext<CountRec> context) {
            // Sum the counts across all records in a group (records sharing the same id and str).
            out.count = 0;
            for (CountRec rec : in) {
                out.id = rec.id;
                out.str = rec.str;
                out.count += rec.count;
            }
            context.write(out);
        }
    }
The default file format for Colossal Pipe is Avro, but it also supports text and JSON output, and soon we'll add pluggable formats to support a variety of input and output formats. Currently, you use Avro schemas to define your objects, then use the Avro code-generation tool to create Java classes to represent them. As soon as Avro effectively supports reflection-based map/reduce, we'll also support reflective access to your Java classes, letting them define the file schema instead.
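For example, an Avro schema for the CountRec record used above (a sketch; the namespace and field types are assumptions) could look like this:

    {
      "type": "record",
      "name": "CountRec",
      "namespace": "com.example.analysis",
      "fields": [
        {"name": "id",    "type": "long"},
        {"name": "str",   "type": "string"},
        {"name": "count", "type": "int"}
      ]
    }

Running the schema through Avro's code generator (e.g., java -jar avro-tools.jar compile schema countrec.avsc src/main/java) produces the CountRec Java class that the mapper and reducer above read and write.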