(Original) In TensorFlow, GPU memory is not released automatically after a function finishes
Please credit the source when reposting:
http://www.cnblogs.com/darkknightzh/p/7608916.html
References:
https://stackoverflow.com/questions/39758094/clearing-tensorflow-gpu-memory-after-model-execution
https://github.com/tensorflow/tensorflow/issues/1727#issuecomment-285815312
In TensorFlow, if you configure the GPU inside a function, TF allocates GPU memory there, but that memory is not released when the function returns (Torch7 appears to behave the same way). The second reference explains why:
As for the original problem, currently the Allocator in the GPUDevice belongs to the ProcessState, which is essentially a global singleton. The first session using GPU initializes it, and frees itself when the process shuts down. Even if a second session chooses a different GPUOptions, it would not take effect.
That is, once the first session has initialized the GPU, even if it frees its memory, a second session that initializes the GPU with different GPUOptions will have no effect.
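The behavior described in that comment can be pictured as a process-wide singleton. Below is a toy sketch in plain Python, not the actual TensorFlow API (the class name `ProcessState` comes from the quote; the method and field names here are illustrative), showing why a second session's options are silently dropped:

```python
class ProcessState:
    """Toy model of TF's process-global GPU allocator state."""
    _instance = None  # lives for the whole process, like TF's ProcessState

    def __init__(self, gpu_options):
        self.gpu_options = gpu_options

    @classmethod
    def get(cls, gpu_options):
        # The first "session" to arrive initializes the singleton;
        # GPUOptions passed by any later session are silently ignored.
        if cls._instance is None:
            cls._instance = cls(gpu_options)
        return cls._instance
```

A second call with different options still returns the state created by the first call, mirroring how a second tf.Session cannot reconfigure the already-initialized GPU allocator.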
In the first reference, Oli Blum points out that you have to "use processes and shut them down after the computation" in order to release the memory. The code below (from that answer; note it is Python 2, hence raw_input) shows how:
import tensorflow as tf
import multiprocessing
import numpy as np

def run_tensorflow():

    n_input = 10000
    n_classes = 1000

    # Create model
    def multilayer_perceptron(x, weight):
        # Hidden layer with RELU activation
        layer_1 = tf.matmul(x, weight)
        return layer_1

    # Store layers weight & bias
    weights = tf.Variable(tf.random_normal([n_input, n_classes]))

    x = tf.placeholder("float", [None, n_input])
    y = tf.placeholder("float", [None, n_classes])
    pred = multilayer_perceptron(x, weights)

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)

    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)

        for i in range(100):
            batch_x = np.random.rand(10, 10000)
            batch_y = np.random.rand(10, 1000)
            sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

    print("finished doing stuff with tensorflow!")


if __name__ == "__main__":

    # option 1: execute code with extra process
    p = multiprocessing.Process(target=run_tensorflow)
    p.start()
    p.join()

    # wait until user presses enter key
    raw_input()

    # option 2: just execute the function
    run_tensorflow()

    # wait until user presses enter key
    raw_input()
When run_tensorflow is executed via multiprocessing.Process, the GPU memory is released automatically afterwards; when run_tensorflow is called directly, it is not. Note that this function does very little computation, so on a fast GPU the allocate-compute-release cycle of the multiprocessing.Process run may finish so quickly that it looks as if nothing ran at all.
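The same idea can be wrapped in a small reusable helper that runs any function in a child process and passes its return value back through a queue; when the child exits, the operating system reclaims everything it held, including GPU memory. A minimal sketch (`run_in_subprocess` and `_worker` are my own names, not part of TensorFlow or the multiprocessing API):

```python
import multiprocessing

def _worker(queue, fn, args, kwargs):
    # Runs in the child process; the result travels back via the queue.
    queue.put(fn(*args, **kwargs))

def run_in_subprocess(fn, *args, **kwargs):
    # Execute fn(*args, **kwargs) in a separate process and return its result.
    # When the child exits, the OS reclaims all of its resources, so any GPU
    # memory a TensorFlow session allocated inside fn is freed as well.
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=_worker, args=(queue, fn, args, kwargs))
    p.start()
    result = queue.get()  # read before join() so a large result cannot deadlock
    p.join()
    return result

def square(x):
    # Stand-in for run_tensorflow: any picklable function works.
    return x * x
```

Note that with the spawn start method (the default on Windows), fn, its arguments, and its return value must all be picklable, so fn has to be defined at module level.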
Original post: http://www.cnblogs.com/darkknightzh/p/7608916.html