Tensorflow Memory Leak When Building Graph In A Loop
I noticed this when my grid search for selecting hyper-parameters of a TensorFlow (version 1.12.0) model crashed due to an explosion in memory consumption. Notice that, unlike similar-looking questions, the graph and session here are created inside the loop (and closed via context managers), so I am not simply adding ops to a single default graph.
Solution 1:
You need to clear the graph after each iteration of the for loop, before a new graph is instantiated. Adding tf.reset_default_graph() at the end of each iteration should resolve the memory leak:
for i in range(N_REPS):
    with tf.Graph().as_default():
        net = tf.contrib.layers.fully_connected(x_test, 200)
        ...
        mem.append(process.memory_info().rss)
    # Clear the default graph before the next iteration starts.
    tf.reset_default_graph()
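For readers who want to reproduce the fix end to end, here is a minimal self-contained sketch of that loop. The psutil-based memory measurement, N_REPS, and the random x_test input are illustrative assumptions standing in for the corresponding definitions in the original question:

import os
import psutil
import tensorflow as tf

N_REPS = 100                                    # assumed number of repetitions
process = psutil.Process(os.getpid())           # handle for memory measurements
mem = []

for i in range(N_REPS):
    with tf.Graph().as_default():
        x_test = tf.random_normal([100, 10])    # stand-in for the real test data
        net = tf.contrib.layers.fully_connected(x_test, 200)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(net)
        mem.append(process.memory_info().rss)   # resident set size in bytes
    tf.reset_default_graph()                    # drop leftover default-graph state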
Solution 2:
Try moving the loop inside the session, and don't create the graph and session on every iteration. Every time the graph is created and its variables initialized, you are not redefining the old graph but creating a new one, which leaks memory. I was facing a similar issue and was able to solve it by taking the loop inside the session, as in the sketch below.
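A minimal sketch of that pattern, assuming a placeholder input and a small fully connected layer (the names x, net, batch, and N_REPS are illustrative, not taken from the original code): the graph is built once, and only sess.run() is repeated inside the loop.

import numpy as np
import tensorflow as tf

N_REPS = 100
batch = np.random.rand(32, 10).astype(np.float32)   # illustrative input batch

# Build the graph exactly once, outside the loop.
x = tf.placeholder(tf.float32, shape=[None, 10])
net = tf.contrib.layers.fully_connected(x, 200)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(N_REPS):
        # Only execution happens here; no new ops are added, so memory stays flat.
        sess.run(net, feed_dict={x: batch})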
From How not to program TensorFlow:
- Be conscious of when you’re creating ops, and only create the ones you need. Try to keep op creation distinct from op execution.
- Especially if you’re just working with the default graph and running interactively in a regular REPL or a notebook, you can end up with a lot of abandoned ops in your graph. Every time you re-run a notebook cell that defines any graph ops, you aren’t just redefining ops; you’re creating new ones (see the sketch after this list).
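As a small illustration of keeping op creation distinct from op execution (the op names and the iteration count here are made up for the example), the snippet below creates the ops once and then only executes them, so the number of operations in the default graph stays constant across iterations:

import tensorflow as tf

# Op creation: done exactly once.
a_ph = tf.placeholder(tf.float32)
b_ph = tf.placeholder(tf.float32)
sum_op = tf.add(a_ph, b_ph)

with tf.Session() as sess:
    for _ in range(5):
        # Op execution only; calling tf.add() here instead would add a new
        # node to the default graph on every pass (the "abandoned ops" problem).
        sess.run(sum_op, feed_dict={a_ph: 1.0, b_ph: 2.0})
        print(len(tf.get_default_graph().get_operations()))  # stays constant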