Fixing spark-submit's "There is insufficient memory for the Java Runtime Environment to continue." error (a stubborn problem)
Q: The first time I submitted the wordcount example, everything worked fine. On the second submission, the following error appeared. The full error is pasted below:
21/01/27 14:55:48 INFO spark.SecurityManager: Changing modify acls groups to:
21/01/27 14:55:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d9a80000, 66060288, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 66060288 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/local/spark/spark-2.4.7/bin/hs_err_pid8477.log
A: Solution
1. As commonly suggested online, first check memory with free -m:
hadoop@master:/usr/local/spark/spark-2.4.7/bin$ free -m
total used free shared buff/cache available
Mem: 3757 3467 201 2 87 128
Swap:
You can see that of the 3757 MB total, 3467 MB is already used and only 128 MB remains available. The job certainly cannot run in that.
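The same check can be scripted so that a submission is only attempted when enough memory is free. This is a minimal sketch that reads MemAvailable from /proc/meminfo; the 2048 MB threshold is an assumed value for illustration, not a Spark requirement:

```shell
# Read MemAvailable (in kB) from /proc/meminfo and convert to MB.
# The 2048 MB threshold below is an illustrative assumption.
avail_mb=$(( $(awk '/^MemAvailable:/ {print $2}' /proc/meminfo) / 1024 ))
if [ "$avail_mb" -lt 2048 ]; then
    echo "Only ${avail_mb} MB available; free up memory before running spark-submit."
else
    echo "${avail_mb} MB available; OK to submit."
fi
```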
2. There are three possible fixes:
2.1. Reduce the JVM memory configured for the services.
2.2. Shut down unnecessary processes that occupy large amounts of memory. (On Linux, use the top command to view all processes, see which ones are using too much memory, and kill them selectively to free memory. Note: make sure the processes you kill are genuinely not needed.)
2.3. Add more physical memory.
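If you went with option 2.1 instead, the JVM footprint of the submission itself can be lowered through spark-submit's memory flags. This hedged sketch only builds and prints the command; the example class name and jar path are assumptions for illustration, not taken from the article:

```shell
# Sketch of option 2.1: request a smaller driver heap so the JVM
# needs to commit less memory up front. The class name and jar path
# below are illustrative assumptions.
CMD="spark-submit --master local[2] --driver-memory 512m \
--class org.apache.spark.examples.JavaWordCount \
/usr/local/spark/spark-2.4.7/examples/jars/spark-examples_2.11-2.4.7.jar input.txt"
echo "$CMD"
```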
The second approach is used here.
hadoop@master:/usr/local/spark/spark-2.4.7/bin$ top
top - 15:07:42 up 1:19, 1 user, load average: 1.30, 1.11, 1.03
Tasks: 90 total, 1 running, 56 sleeping, 0 stopped, 0 zombie
%Cpu(s): 50.4 us, 0.0 sy, 0.0 ni, 49.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 3847476 total, 183948 free, 3560476 used, 103052 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 115160 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3687 hadoop 20 0 2653672 2.284g 0 S 100.0 62.2 53:00.72 kdevtmpfsi
1 root 20 0 77700 2096 0 S 0.0 0.1 0:00.93 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
The culprit is found: PID 3687 (kdevtmpfsi) is holding 2.284 GB of resident memory. Kill the memory-hogging process, and the next submission succeeds:
hadoop@master:/usr/local/spark/spark-2.4.7/bin$ kill -9 3687
hadoop@master:/usr/local/spark/spark-2.4.7/bin$ top
top - 15:08:07 up 1:20, 1 user, load average: 1.04, 1.06, 1.02
Tasks: 91 total, 1 running, 56 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 0.0 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 3847476 total, 2578636 free, 1161816 used, 107024 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 2511832 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
744 root 10 -10 146756 8344 0 S 0.3 0.2 0:15.89 AliYunDun
4113 hadoop 20 0 3540084 391160 8308 S 0.3 10.2 0:11.74 java
1 root 20 0 77700 2096 0 S 0.0 0.1 0:00.93 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
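Instead of scanning interactive top output, the largest memory consumers can also be listed non-interactively, which is easier to script. A minimal sketch:

```shell
# List the five processes with the largest resident memory (RSS),
# a scriptable alternative to eyeballing top's %MEM column.
ps aux --sort=-rss | head -n 6
```

Once the PID is identified, a plain `kill <pid>` (SIGTERM) gives the process a chance to shut down cleanly; the `kill -9` used above forces immediate termination.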
Reference link: https://blog.csdn.net/jingzi123456789/article/details/83545355