linux – Combining HBase and HDFS results in an exception in makeDirOnFileSystem
Introduction
Attempting to use HBase on top of HDFS produces the following:
2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.io.IOException: Exception in makeDirOnFileSystem
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
        at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
        ... 6 more
The configuration and system setup are as follows:
[vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
Found 1 items
-rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso
[vagrant@localhost hadoop-hdfs]$
/etc/hadoop/conf/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
/etc/hbase/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
/etc/hadoop/conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-hdfs/cache</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/tmp/hellodatanode</value>
  </property>
</configuration>
NameNode directory permissions
[vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
total 8
-rwxrwxrwx. 1 hbase hdfs 15 Jun 8 23:43 in_use.lock
drwxrwxrwx. 2 hbase hdfs 4096 Jun 8 23:43 current
[vagrant@localhost hadoop-hdfs]$
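Note that the local on-disk listing above shows the NameNode's storage directory, not the permissions HDFS itself enforces: the AccessControlException refers to the HDFS namespace. As a sketch (assuming the same `hdfs://localhost:8020` endpoint as in the configs above), the HDFS root's own entry can be inspected with:

```shell
# List the HDFS root directory itself (-d lists the directory entry, not its contents).
# The owner and mode printed here are what the NameNode checks; they should match the
# inode="/":vagrant:supergroup:drwxr-xr-x reported in the exception.
hadoop fs -ls -d hdfs://localhost:8020/
```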
If the fs.defaultFS property in core-site.xml is commented out, the HMaster starts successfully.
The NameNode is listening:
[vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
tcp        0      0 0.0.0.0:50070        0.0.0.0:*            LISTEN      off (0.00/0/0)
tcp        0      0 33.33.33.33:50070    33.33.33.1:57493     ESTABLISHED off (0.00/0/0)
and it is reachable by navigating to http://33.33.33.33:50070/dfshealth.jsp.
Question
How can the makeDirOnFileSystem exception be resolved so that HBase can connect to HDFS?
Solution:
The key is this line of the stack trace:
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
The hbase user has no permission to write to the HDFS root directory (/): it is owned by vagrant, and the mode drwxr-xr-x allows only the owner to write.
Change the permissions with hadoop fs -chmod.
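For example, a minimal sketch of that fix, assuming vagrant is also the HDFS superuser (the user the NameNode runs as, which is consistent with it owning /) and that a wide-open root is acceptable on a single-node sandbox VM:

```shell
# Quick-and-dirty fix for a sandbox: let any user, including hbase, write to the HDFS root.
# Run as the HDFS superuser (vagrant here). Far too permissive for a shared cluster.
hadoop fs -chmod 777 /
```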
Edit:
Alternatively, you can create the directory /hbase and make the hbase user its owner. That way hbase does not need write access to the root directory.
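A sketch of that cleaner approach, again run as the HDFS superuser; the hbase:hbase owner/group is an assumption based on the user name in the exception:

```shell
# Pre-create the hbase.rootdir path and hand it to the hbase user, leaving / untouched.
hadoop fs -mkdir /hbase
hadoop fs -chown hbase:hbase /hbase
# Verify: /hbase should now be listed with hbase as its owner.
hadoop fs -ls /
```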