Once dropout is applied, how can I keep the same dropout mask when computing the gradient (TensorFlow)? -


Here is what I am doing:

I have a neural network (call it g(x)).

I can compute the output of g(x) with dropout (it drops out a number of neurons), which works fine for me.

But right afterwards, I'd like to compute the gradient of the same g(x), with the same hidden units dropped out.

Is there a way to memorize which units were dropped out and apply the same mask when computing the gradient in TensorFlow?
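One way to sketch this: if you compute the gradient through the same forward pass (e.g. with `tf.GradientTape` in TensorFlow 2), the dropout mask sampled in that pass is automatically the one that shapes the gradient. To control the mask explicitly, you can sample a Bernoulli mask yourself, keep it, and multiply it in wherever you need it. The network, shapes, and variable names below are illustrative assumptions, not code from the question:

```python
import tensorflow as tf

# Toy "network" g(x) = (x * mask) @ w; names and shapes are hypothetical.
x = tf.constant([[1.0, 2.0, 3.0, 4.0]])
w = tf.Variable(tf.ones([4, 4]))

keep_prob = 0.5
# Sample the dropout mask once and hold on to the tensor; reusing it
# applies the identical pattern of dropped units everywhere it appears.
mask = tf.cast(
    tf.random.uniform(tf.shape(x)) < keep_prob, tf.float32
) / keep_prob  # inverted-dropout scaling

with tf.GradientTape() as tape:
    h = x * mask              # forward pass with the stored mask
    loss = tf.reduce_sum(tf.matmul(h, w))

# The gradient is taken through the same masked graph, so units that
# were dropped in the forward pass contribute zero gradient here.
grad = tape.gradient(loss, w)
```

Because `mask` is an ordinary tensor, it can also be fed into a second graph or a later call to reproduce exactly the same dropped units, which is the "memorize and reapply" behavior asked about.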

