This is part 10 of the Big Data series: we implement a complete MapReduce WordCount program in Java to build a deeper understanding of the MapReduce programming model and Hadoop serialization.

Full illustrated version (with complete code): CSDN original | Juejin

Why Hadoop Doesn't Use Java Native Serialization

Java's native serialization (Serializable) embeds a large amount of class metadata in the byte stream, making it bulky and slow to transfer. Hadoop's Writable serialization mechanism:

  • Is more compact, so it transfers more efficiently
  • Is optimized specifically for RPC and MapReduce data transfer
  • Supports writing to and reading from the byte stream directly (see the sketch after this list)
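
To make the last point concrete, here is a minimal sketch of what implementing Writable looks like for a custom type. The WordCountPair class and its fields are hypothetical, for illustration only; the WordCount program below uses only built-in Writable types:

import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical custom type, not part of the WordCount program
public class WordCountPair implements Writable {
    private String word;
    private int count;

    public WordCountPair() { }  // Hadoop needs a no-arg constructor to deserialize

    public WordCountPair(String word, int count) {
        this.word = word;
        this.count = count;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Fields go straight into the byte stream with no class metadata,
        // which is why Writable output is much smaller than Serializable's
        out.writeUTF(word);
        out.writeInt(count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Fields must be read back in exactly the order they were written
        word = in.readUTF();
        count = in.readInt();
    }
}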

Core Writable Types

Java type       Hadoop Writable
String          Text
int             IntWritable
long            LongWritable
float           FloatWritable
boolean         BooleanWritable
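
Converting back and forth between the two is trivial; a quick sketch:

Text word = new Text("hadoop");          // String -> Text
String s = word.toString();              // Text -> String
IntWritable count = new IntWritable(1);  // int -> IntWritable
int n = count.get();                     // IntWritable -> int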

Maven Dependencies

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.9.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.9.0</version>
  </dependency>
</dependencies>
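
Note that when the jar is submitted with hadoop jar, the cluster already provides these classes at runtime, so both dependencies can usually be marked <scope>provided</scope> to keep the jar small.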

1. WordCountMapper

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import java.io.IOException;

/**
 * Generic parameters: <input key type, input value type, output key type, output value type>
 * Input:  line byte offset (LongWritable), line text (Text)
 * Output: word (Text), count (IntWritable)
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text word = new Text();
    private IntWritable one = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the line into words on whitespace
        String[] words = value.toString().split("\\s+");
        for (String w : words) {
            if (w.isEmpty()) continue;  // leading whitespace yields an empty token
            word.set(w);
            context.write(word, one);  // emit (word, 1)
        }
    }
}
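
For example, given the input line "hello world hello", this mapper emits (hello, 1), (world, 1), (hello, 1); no counting happens here, and aggregation is left entirely to the reducer. Note also that word and one are reused instance fields rather than per-record allocations, which is safe because context.write serializes the pair immediately.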

2. WordCountReducer

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;

/**
 * Input:  word (Text), list of counts (Iterable<IntWritable>)
 * Output: word (Text), total count (IntWritable)
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        total.set(sum);
        context.write(key, total);
    }
}

3. WordCountDriver

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "WordCount");

        // Set the main class so Hadoop can locate the jar
        job.setJarByClass(WordCountDriver.class);

        // Set the Mapper and Reducer classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // Set the map output and final output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Set the input and output paths
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Submit the job and exit with its status
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
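
One optional addition not shown above: because summing counts is associative and commutative, the same reducer class can double as a combiner, pre-aggregating map output locally to cut shuffle traffic. It is a one-line change in the driver:

job.setCombinerClass(WordCountReducer.class);  // optional local pre-aggregation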

Packaging and Running

# Package as a jar (Maven)
mvn clean package -DskipTests

# Submit to a Hadoop cluster
hadoop jar wordcount.jar WordCountDriver /test/input /wcoutput3

# Run in local mode (no cluster required)
hadoop jar wordcount.jar WordCountDriver file:///local/input file:///local/output
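
Assuming default output settings, the results are written as tab-separated word/count pairs to part-r-* files under the output directory:

# Inspect the results (the part-r-00000 name assumes default settings)
hdfs dfs -cat /wcoutput3/part-r-00000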

MapReduce Execution Flow

Input files → [InputFormat splitting] → Map (parallel) → Shuffle (sort/group) → Reduce → Output files

Each input split (by default, one per HDFS block) is handled by its own Map Task, and Map Tasks run in parallel. The Shuffle phase sorts the map output and groups together all records with the same key, and the Reduce phase aggregates each group into the final result.
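
A small worked trace of the pipeline, using two hypothetical input lines:

Map input:   "hello world"  and  "hello hadoop"
Map output:  (hello,1) (world,1)  and  (hello,1) (hadoop,1)
Shuffle:     hello → [1,1]    world → [1]    hadoop → [1]
Reduce:      hello 2,  world 1,  hadoop 1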

The complete Maven project (including pom.xml and the log4j configuration) is available in the CSDN original.

Next: Big Data 11, MapReduce JOIN operations