Once dropout is applied, how to keep the same mask when applying it to the gradient (TensorFlow)?


Here is what I do:

I have a neural network (call it g(x)).

I can apply dropout to g(x) (it drops out a number of hidden units), and that part works fine for me.

But right afterwards, I'd like to compute the gradient of that same g(x), with the same hidden units dropped out.

Is there a way to memorize which units were dropped out and apply the same mask to the gradient in TensorFlow?
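One way to think about this: instead of relying on the dropout op to sample a fresh mask on every call, sample the binary mask yourself once, store it, and multiply it in wherever you need it. The gradient then automatically passes through the same zeros. Below is a minimal NumPy sketch of the idea (the layer shapes, names, and the toy one-hidden-layer network are all illustrative assumptions, not TensorFlow API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: g(x) = v . (mask * relu(W @ x))
W = rng.normal(size=(4, 3))   # hidden-layer weights (hypothetical sizes)
v = rng.normal(size=4)        # output weights
x = rng.normal(size=3)
keep_prob = 0.5

# Sample the dropout mask ONCE and keep it around.
# Inverted dropout: scale kept units by 1/keep_prob.
mask = (rng.random(4) < keep_prob).astype(float) / keep_prob

# Forward pass with the stored mask.
h = np.maximum(W @ x, 0.0)    # hidden activations
g = v @ (mask * h)            # dropped-out output g(x)

# Backward pass reuses the SAME mask: units that were dropped in the
# forward pass contribute exactly zero gradient.
dz = (v * mask) * (h > 0)     # d g / d (W @ x)
dW = np.outer(dz, x)          # gradient of g w.r.t. W
```

In TensorFlow itself, a single forward/backward pass recorded by `tf.GradientTape` already reuses the one mask that `tf.nn.dropout` sampled during the forward pass, so gradients are consistent within that pass. It is only when you call the model a second time (e.g. a separate gradient computation) that a new mask is drawn; in that case the fix is exactly the sketch above: draw a mask tensor yourself, store it, and multiply it into the activations in both computations.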

