Developing a Hadoop 2.x Map/Reduce Project in Eclipse
This article demonstrates how to develop a Map/Reduce project in Eclipse.

1. Environment

- Hadoop 2.2.0
- Eclipse Juno SR2
- Hadoop2.x-eclipse-plugin; for how to build, install, and configure the plugin, see: http://www.micmiu.com/bigdata/hadoop/hadoop2-x-eclipse-plugin-build-install/
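Once the pseudo-distributed cluster from the plugin guide is running, an optional sanity check is a tiny Java program that lists an HDFS directory, confirming that the Hadoop client jars and the cluster address are both correct. This is a hypothetical helper, not part of the original walkthrough, and hdfs://localhost:9000 is an assumed address; substitute whatever fs.defaultFS your cluster actually uses.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sanity check: list an HDFS directory to verify that the
// client classpath and the cluster address are both correct.
public class HdfsCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:9000"); // assumption: match your cluster
    FileSystem fs = FileSystem.get(conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
    fs.close();
  }
}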
2. Create the MR project

With the plugin installed, create a new Map/Reduce Project from Eclipse's New Project wizard.

3. Create the WordCount class

package com.micmiu.mr;

/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    // Emit (word, 1) for every whitespace-separated token in the line.
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    // Sum all counts received for a word and emit the total.
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    //conf.set("fs.defaultFS", "hdfs://192.168.6.77:9000");
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // the reducer also serves as a combiner
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
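To make the mapper's behavior concrete, here is a small standalone sketch (an illustration, not part of the original post) that prints the (word, 1) pairs TokenizerMapper would emit for one sample line:

import java.util.StringTokenizer;

// Standalone illustration: prints the key/value pairs the mapper
// emits for a single input line. StringTokenizer splits on whitespace.
public class TokenizeDemo {
  public static void main(String[] args) {
    StringTokenizer itr = new StringTokenizer("Hi Michael welcome to Hadoop more see micmiu.com");
    while (itr.hasMoreTokens()) {
      System.out.println("(" + itr.nextToken() + ", 1)");
    }
  }
}

Because IntSumReducer is also registered as the combiner, these per-mapper pairs are pre-summed locally before being shuffled to the reducers.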
4. Prepare test data

micmiu-01.txt:

Hi Michael welcome to Hadoop more see micmiu.com

micmiu-02.txt:
Hi Michael welcome to BigData more see micmiu.com

micmiu-03.txt:
Hi Michael welcome to Spark more see micmiu.com

Upload the three micmiu-* files to HDFS:
micmiu-mbp:Downloads micmiu$ hdfs dfs -copyFromLocal micmiu-*.txt /user/micmiu/test/input
micmiu-mbp:Downloads micmiu$ hdfs dfs -ls /user/micmiu/test/input
Found 3 items
-rw-r--r--   1 micmiu supergroup         50 2014-04-15 14:53 /user/micmiu/test/input/micmiu-01.txt
-rw-r--r--   1 micmiu supergroup         50 2014-04-15 14:53 /user/micmiu/test/input/micmiu-02.txt
-rw-r--r--   1 micmiu supergroup         49 2014-04-15 14:53 /user/micmiu/test/input/micmiu-03.txt
micmiu-mbp:Downloads micmiu$

5. Configure the run arguments

Choose Run As → Run Configurations… and, on the Arguments tab, set the program arguments, i.e. the job's input and output paths.

6. Run

Choose Run As → Run on Hadoop. Once the job completes, you can inspect the result.
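As a concrete check, assuming the output path given in step 5 was /user/micmiu/test/output (an assumed path; use whatever you configured), the reducer's part file can be printed with hdfs dfs -cat. The counts below are derived by hand from the three input files above, not copied from the original post:

micmiu-mbp:Downloads micmiu$ hdfs dfs -cat /user/micmiu/test/output/part-r-00000
BigData	1
Hadoop	1
Hi	3
Michael	3
Spark	1
micmiu.com	3
more	3
see	3
to	3
welcome	3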
This completes the demonstration of running an MR job from Eclipse against a local pseudo-distributed Hadoop 2.x setup.

PS: Submitting the MR job from Eclipse to a full cluster environment kept failing; the cause has not been found yet.

----- EOF @Michael Sun -----