Keras BatchNormalization Population Parameters Update While Training in TensorFlow
I am using Keras 2.0.8 with TensorFlow 1.3.0 on Ubuntu 16.04 with CUDA 8.0 and cuDNN 6. I am using two BatchNormalization layers (Keras layers) in my model and training it with TensorFlow directly rather than through model.fit. How do I make sure the population parameters (moving mean and variance) are updated during training?
Solution 1:
I ran into the same problem a few weeks ago. Internally, Keras layers can add additional update operations to a model (e.g. batch normalization), so you need to run these extra ops explicitly. For batch normalization, these updates are just assign ops that swap the current moving mean/variance for the new values. If you do not create a Keras model, the following can work; assuming x is a tensor you'd like to normalize:
import keras
from keras import backend as K

bn = keras.layers.BatchNormalization()
x = bn(x)
....
# Run the BN update ops together with the training op.
sess.run([minimizer_op, bn.updates], feed_dict={K.learning_phase(): 1})
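To see what these updates look like, you can inspect bn.updates after the layer has been called; with the TensorFlow backend they are the moving-average assign operations for the mean and variance (exact op names vary by version):

print(bn.updates)  # a list of assign ops for the moving mean and variance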
In my workflow, I am creating a Keras model (without compiling it) and then running the following:
model = keras.Model(inputs=inputs, outputs=prediction)
sess.run([minimizer_op, model.updates], feed_dict={K.learning_phase(): 1})
where inputs can be something like
inputs = [keras.layers.Input(tensor=input_variables)]
and outputs is a list of TensorFlow tensors. The model seems to aggregate all additional update operations between inputs and outputs automatically.
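For completeness, here is a minimal end-to-end sketch of that workflow, assuming a toy regression setup; the placeholder shapes, layer sizes, optimizer, and dummy data are illustrative stand-ins, not part of the original post:

import numpy as np
import tensorflow as tf
import keras
from keras import backend as K

# Hypothetical input pipeline: plain placeholders standing in for input_variables.
x_in = tf.placeholder(tf.float32, shape=(None, 10))
y_in = tf.placeholder(tf.float32, shape=(None, 1))

inputs = [keras.layers.Input(tensor=x_in)]
h = keras.layers.Dense(32, activation='relu')(inputs[0])
h = keras.layers.BatchNormalization()(h)
prediction = keras.layers.Dense(1)(h)

# Build the model without compiling it; it is only used here to
# collect the BN update ops that live between inputs and outputs.
model = keras.models.Model(inputs=inputs, outputs=prediction)

loss = tf.reduce_mean(tf.squared_difference(prediction, y_in))
minimizer_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

sess = K.get_session()
sess.run(tf.global_variables_initializer())

xs = np.random.randn(8, 10).astype(np.float32)
ys = np.random.randn(8, 1).astype(np.float32)

# Running model.updates alongside the training op keeps the BN
# moving mean/variance in sync with the batch statistics.
sess.run([minimizer_op, model.updates],
         feed_dict={x_in: xs, y_in: ys, K.learning_phase(): 1})

Note that if you compile the model and train with model.fit, Keras runs these update ops for you; the manual step is only needed when you drive training with sess.run yourself.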