TensorFlow: How To Use Padding And Masking Layers In The Case Of MLPs?
Solution 1:
You cannot ignore a single feature in an MLP. Mathematically, we are talking about a matrix multiplication. The only dimension you can "ignore" is the time dimension in recurrent layers, since the number of weights does not scale with the length of the time dimension, so a single layer can accept inputs of different sizes along it.
If you are only using Dense layers, you cannot skip anything, because your only dimension (besides the batch dimension) scales directly with the number of weights.
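A minimal sketch of the point above, assuming TensorFlow/Keras is installed: a Dense layer's kernel shape is tied to the feature dimension, while an LSTM's weights are independent of sequence length, so only the time dimension can vary at run time.

```python
import numpy as np
import tensorflow as tf

# A Dense layer's kernel has shape (n_features, n_units), so the number
# of weights scales with the feature dimension: no feature can be
# "skipped" at run time without changing the matrix itself.
dense = tf.keras.layers.Dense(4)
dense.build(input_shape=(None, 10))   # 10 input features
print(dense.kernel.shape)             # (10, 4)

# An LSTM's weights depend only on the feature dimension, not on the
# number of time steps, so sequences of different lengths are fine.
lstm = tf.keras.layers.LSTM(4)
out_short = lstm(np.zeros((1, 5, 10), dtype="float32"))   # 5 time steps
out_long = lstm(np.zeros((1, 50, 10), dtype="float32"))   # 50 time steps
print(out_short.shape, out_long.shape)                    # both (1, 4)
```

Both calls to the same LSTM succeed despite the different sequence lengths, which is exactly what cannot happen along the feature axis of a Dense layer.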
Solution 2:
Thank you @dennis-ec, your answer is very precise. I just want to add this:
We can ignore all time steps for a given sample. This is supported in Keras with LSTMs, but not with Dense layers (we cannot ignore a single feature in an MLP).
Otherwise, we can make do with padding (zero padding, or a specified mask value, e.g. -1) and check the performance.
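A short sketch of that idea, assuming a hypothetical toy batch padded with -1: a `Masking` layer tells the downstream LSTM to skip the padded time steps entirely.

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy batch: 2 sequences of 4 time steps, 3 features each,
# padded with -1 where no real data exists.
x = np.array([
    [[0.5, 0.1, 0.2], [0.3, 0.7, 0.9], [-1., -1., -1.], [-1., -1., -1.]],
    [[0.2, 0.4, 0.6], [-1., -1., -1.], [-1., -1., -1.], [-1., -1., -1.]],
], dtype="float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4, 3)),
    tf.keras.layers.Masking(mask_value=-1.0),  # marks padded steps
    tf.keras.layers.LSTM(8),                   # skips masked time steps
    tf.keras.layers.Dense(1),
])
print(model(x).shape)   # (2, 1)
```

If the model were Dense-only, the mask would have nothing to act on: the padding values would simply flow through the matrix multiplication, which is why one can only pad and then compare performance empirically.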