java – Hadoop MapReduce error: different data types
Two problems are occurring in my MapReduce program:

> java.io.IOException: wrong value class: class org.apache.hadoop.io.IntWritable is not class org.apache.hadoop.io.Text
> java.lang.ArrayIndexOutOfBoundsException: 4

I have already set the map output key and value classes, as suggested in other posts, but still cannot resolve either problem. For the second problem, I tested the exact code from the map method that causes it, and it works correctly in a simple file-reading program.

For reference, here is the full output for problem 1:
```
Error: java.io.IOException: wrong value class: class org.apache.hadoop.io.IntWritable is not class org.apache.hadoop.io.Text
    at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:194)
    at org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1350)
    at peoplemail.DomainGenderCount$ReduceClass.reduce(DomainGenderCount.java:52)
    at peoplemail.DomainGenderCount$ReduceClass.reduce(DomainGenderCount.java:1)
    at org.apache.hadoop.mapred.Task$OldCombinerRunner.combine(Task.java:1615)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1637)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1489)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:460)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
```
And here is the full output for problem 2:
```
Error: java.lang.ArrayIndexOutOfBoundsException: 4
    at peoplemail.DomainGenderCount$MapClass.map(DomainGenderCount.java:34)
    at peoplemail.DomainGenderCount$MapClass.map(DomainGenderCount.java:1)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
```
Data
Here are a few lines of the CSV file I am processing:
```
18,Daveen,Cupitt,dcupitth@last.fm,6288608483,Female
19,Marney,Eskell,meskelli@nifty.com,8164369834,Female
20,Teri,Yitzhak,tyitzhakj@bloglovin.com,2548784310,Female
21,Alain,Niblo,aniblok@howstuffworks.com,5195420924,Male
22,Vin,Creevy,vcreevyl@sfgate.com,8574528831,Female
23,Ermina,Pena,epenam@mediafire.com,2236545787,Female
24,Chrisy,Chue,cchuen@google.com,9455751444,Male
25,Morgen,Izakof,mizakofo@noaa.gov,8031181365,Male
```
MapClass
```java
public static class MapClass
        extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {
    @Override
    public void map(LongWritable key, Text value,
            OutputCollector<Text, Text> output, Reporter r) throws IOException {
        String fields[] = value.toString().split(",");
        String gender = fields[5];
        String domain = fields[3].split("@")[1];
        output.collect(new Text(domain), new Text(gender));
    }
}
```
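For context on problem 2: `String.split` simply returns fewer elements when a row is short, so indexing past the end of the array throws. A standalone sketch (plain Java, no Hadoop) showing how a well-formed sample row parses and how a truncated row would fail the same way:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // A well-formed row from the sample data: 6 fields, indices 0-5
        String good = "18,Daveen,Cupitt,dcupitth@last.fm,6288608483,Female";
        String[] fields = good.split(",");
        System.out.println(fields.length);            // 6
        System.out.println(fields[3].split("@")[1]);  // last.fm
        System.out.println(fields[5]);                // Female

        // A truncated row yields fewer fields; accessing fields[5] on it
        // would throw ArrayIndexOutOfBoundsException
        String bad = "19,Marney,Eskell,meskelli@nifty.com";
        System.out.println(bad.split(",").length);    // 4
    }
}
```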
ReduceClass
```java
public static class ReduceClass
        extends MapReduceBase implements Reducer<Text, Text, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterator<Text> values,
            OutputCollector<Text, IntWritable> output, Reporter r) throws IOException {
        int count = 0;
        while (values.hasNext()) {
            values.next();
            count++;
        }
        output.collect(key, new IntWritable(count));
    }
}
```
The run method
```java
public int run(String[] paths) throws Exception {
    JobConf jobConf = new JobConf(getConf(), DomainGenderCount.class);
    jobConf.setMapOutputKeyClass(Text.class);
    jobConf.setMapOutputValueClass(Text.class);
    jobConf.setJobName("Number of Users in each domain:");
    jobConf.setOutputKeyClass(Text.class);
    jobConf.setOutputValueClass(IntWritable.class);
    jobConf.setMapperClass(MapClass.class);
    jobConf.setReducerClass(ReduceClass.class);
    jobConf.setCombinerClass(ReduceClass.class);
    FileInputFormat.setInputPaths(jobConf, new Path(paths[0]));
    FileOutputFormat.setOutputPath(jobConf, new Path(paths[1]));
    JobClient.runJob(jobConf);
    return 0;
}
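One observation about this driver (my reading of the problem-1 stack trace, not something confirmed in the original post): `jobConf.setCombinerClass(ReduceClass.class);` runs `ReduceClass` as a combiner over map output, whose values are declared `Text`, but `ReduceClass` emits `IntWritable` values — exactly the mismatch named in the `wrong value class` message. A minimal sketch of the same driver with the combiner line dropped:

```java
public int run(String[] paths) throws Exception {
    JobConf jobConf = new JobConf(getConf(), DomainGenderCount.class);
    jobConf.setJobName("Number of Users in each domain:");
    jobConf.setMapOutputKeyClass(Text.class);
    jobConf.setMapOutputValueClass(Text.class);
    jobConf.setOutputKeyClass(Text.class);
    jobConf.setOutputValueClass(IntWritable.class);
    jobConf.setMapperClass(MapClass.class);
    jobConf.setReducerClass(ReduceClass.class);
    // No setCombinerClass: a combiner's output must match the map output
    // types (Text values here), and ReduceClass emits IntWritable instead.
    FileInputFormat.setInputPaths(jobConf, new Path(paths[0]));
    FileOutputFormat.setOutputPath(jobConf, new Path(paths[1]));
    JobClient.runJob(jobConf);
    return 0;
}
```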
This is how I invoke hadoop:

```
hadoop jar C:\Users\suman\Desktop\domaingendercount.jar /Data/people.csv /Data/Output/
```
And here is the small program I used to test the input file:
```java
package peoplemail;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class Test {
    public static void main(String[] args) throws IOException {
        File file = new File("C:\\Users\\suman\\Desktop\\people.csv");
        BufferedReader bufferedReader = new BufferedReader(new FileReader(file));
        String line;
        while (null != (line = bufferedReader.readLine())) {
            String fields[] = line.split(",");
            String gender = fields[5];
            String domain = fields[3].split("@")[1];
            System.out.println(domain + " " + gender);
        }
        bufferedReader.close();
    }
}
```
This code runs correctly.
These files contain all of the Hadoop code, the data, and the output.
Solution:

Each of the sample rows splits into 6 fields (indices 0 through 5), so `fields[5]` is valid for the lines shown. An `ArrayIndexOutOfBoundsException` in the mapper therefore means that some row in the full file splits into fewer fields than the code indexes — for example a blank, truncated, or malformed line — so the mapper should check `fields.length` before indexing. The `wrong value class` error has a different cause: the driver registers `ReduceClass` as the combiner, and a combiner's output types must match the map output types (`Text` values here), but `ReduceClass` emits `IntWritable` values. Remove `jobConf.setCombinerClass(ReduceClass.class);`, or use a combiner whose output types match the map output.

Here is the corrected mapper:
```java
public static class MapClass
        extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {
    @Override
    public void map(LongWritable key, Text value,
            OutputCollector<Text, Text> output, Reporter r) throws IOException {
        String fields[] = value.toString().split(",");
        // Skip blank or malformed rows instead of throwing
        if (fields.length < 6 || !fields[3].contains("@")) {
            return;
        }
        String domain = fields[3].split("@")[1];
        String gender = fields[5];
        output.collect(new Text(domain), new Text(gender));
    }
}
```
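If a combiner is wanted for this count, one alternative (my suggestion, not part of the original answer) is the classic word-count shape: have the map emit `new IntWritable(1)` for each record and have the reduce sum the values, with the driver setting `setMapOutputValueClass(IntWritable.class)`. Then map output values and reduce output values are both `IntWritable`, so the reducer can safely double as the combiner:

```java
// Map emits (domain, 1); value classes are IntWritable end to end,
// so this class is safe to register as both reducer and combiner.
public static class SumReduce
        extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter r) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();  // counts survive combining because we sum
        }
        output.collect(key, new IntWritable(sum));
    }
}
```

Summing (rather than counting values, as the posted `ReduceClass` does) is what makes combining correct: a combiner may pre-aggregate any subset of a key's values, and only a sum of partial sums gives the same total.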