Hadoop & MapReduce Tutorial | Hadoop Data Types

Hadoop MapReduce uses typed data at all times when it interacts with user-provided Mappers and Reducers: data read from files into Mappers, emitted by Mappers to Reducers, and emitted by Reducers into output files is all stored in Java objects.
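These types appear directly in the generic signatures of the Mapper and Reducer classes. As a minimal sketch using the newer org.apache.hadoop.mapreduce API (the class names and the concrete types chosen here are illustrative, not required by Hadoop):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// The four type parameters are <input key, input value, output key, output value>;
// every key and value passed between stages is a Java object of one of these types.
public class TypedStages {
    public static class MyMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> { }

    public static class MyReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> { }
}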

Writable Types

Objects which can be marshaled to or from files and across the network must obey a particular interface, called Writable, which allows Hadoop to read and write the data in a serialized form for transmission. Hadoop provides several stock classes which implement Writable: Text (which stores String data), IntWritable, LongWritable, FloatWritable, BooleanWritable, and several others. The entire list is in the org.apache.hadoop.io package of the Hadoop source.
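As a quick illustration of how these stock classes behave (this standalone snippet is not from the Hadoop source; the class name StockWritables is made up for the example), each one is a mutable wrapper around a plain Java value:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

// Stock Writables wrap ordinary Java values and expose get/set accessors.
public class StockWritables {
    public static void main(String[] args) {
        IntWritable count = new IntWritable(42);  // wraps an int
        Text word = new Text("hadoop");           // wraps String data
        System.out.println(count.get());          // unwrap: 42
        System.out.println(word.toString());      // unwrap: hadoop
        count.set(43);                            // mutable: the same object can be reused
    }
}

The mutability is deliberate: Hadoop reuses Writable objects across records to avoid allocating a new object for every key-value pair.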

In addition to these types, you are free to define your own classes which implement Writable. You can organize a structure of virtually any layout to fit your data and have Hadoop transmit it. As a motivating example, consider a mapper which emits key-value pairs where the key is the name of an object and the value is its coordinates in some 3-dimensional space. The key is string-based data, and the value is a structure of the form:

struct point3d {
    float x;
    float y;
    float z;
};

The key can be represented as a Text object, but what about the value? How do we build a Point3D class which Hadoop can transmit? The answer is to implement the Writable interface, which requires two methods:

public interface Writable {
    void readFields(DataInput in) throws IOException;
    void write(DataOutput out) throws IOException;
}

The first of these methods initializes all of the fields of the object from data contained in the binary stream in. The latter writes all the information needed to reconstruct the object to the binary stream out. The DataInput and DataOutput interfaces (part of java.io) provide methods to serialize most basic types of data; the important contract between your readFields() and write() methods is that they read and write the data from and to the binary stream in the same order. The following code implements a Point3D class usable by Hadoop:

A Point3D class which implements Writable

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

public class Point3D implements Writable {
    public float x;
    public float y;
    public float z;

    public Point3D(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    // Hadoop requires a no-argument constructor so that it can instantiate
    // the class by reflection before calling readFields().
    public Point3D() {
        this(0.0f, 0.0f, 0.0f);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeFloat(x);
        out.writeFloat(y);
        out.writeFloat(z);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Read the fields back in exactly the order write() emitted them.
        x = in.readFloat();
        y = in.readFloat();
        z = in.readFloat();
    }

    @Override
    public String toString() {
        return Float.toString(x) + ", "
            + Float.toString(y) + ", "
            + Float.toString(z);
    }
}
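To sketch how Point3D might then be emitted as a value (the class name PointMapper, the assumed input lines of the form "name x y z", and the parsing logic are all illustrative assumptions, again using the newer org.apache.hadoop.mapreduce API):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (object name, Point3D) pairs; Hadoop calls Point3D.write() behind
// the scenes to serialize each emitted value.
public class PointMapper extends Mapper<LongWritable, Text, Text, Point3D> {
    private final Text name = new Text();
    private final Point3D point = new Point3D();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumed input: one object per line, e.g. "origin 0.0 0.0 0.0"
        String[] fields = value.toString().split("\\s+");
        if (fields.length == 4) {
            name.set(fields[0]);
            point.x = Float.parseFloat(fields[1]);
            point.y = Float.parseFloat(fields[2]);
            point.z = Float.parseFloat(fields[3]);
            context.write(name, point);
        }
    }
}

In the driver, the job would also need job.setMapOutputValueClass(Point3D.class) so that Hadoop knows which class to instantiate during deserialization. Note that Writable suffices for values; to use Point3D as a key, it would also have to implement WritableComparable, because keys are sorted between the map and reduce phases.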

