AutoEncoder activation #166
Comments
Hi @chenmaosi, indeed you are right, silly me! I'll fix this error ASAP, thanks a lot. Aurélien
I fixed the error.
Hi Aurélien, thank you for your reply. One more question about autoencoders (AEs): if there are more than two layers of AEs, should I use ELU+None only in the outer AE and ELU+ELU for ALL inner AEs? Thanks.
Yes: if I built a stacked autoencoder in just one shot, I would use the ELU activation function in all hidden layers and no activation function in the output layer. So if I wanted to train one autoencoder at a time, I would use ELU+None only for the outer autoencoder, and ELU+ELU for all the other autoencoders.
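To make the recipe concrete, here is a minimal sketch of that scheme (my own illustration using the TensorFlow 1.x API of the book, not the book's actual code; the `build_autoencoder` helper and its names are hypothetical):

```python
import tensorflow as tf  # TensorFlow 1.x, as used in the book

def build_autoencoder(inputs, n_hidden, outer=False):
    """One autoencoder: ELU encoder; linear decoder for the outer AE
    ("ELU+None"), ELU decoder for inner AEs ("ELU+ELU")."""
    n_inputs = int(inputs.get_shape()[1])
    W_enc = tf.Variable(tf.random_normal([n_inputs, n_hidden]), name="W_enc")
    b_enc = tf.Variable(tf.zeros([n_hidden]), name="b_enc")
    W_dec = tf.Variable(tf.random_normal([n_hidden, n_inputs]), name="W_dec")
    b_dec = tf.Variable(tf.zeros([n_inputs]), name="b_dec")
    codings = tf.nn.elu(tf.matmul(inputs, W_enc) + b_enc)  # encoder: always ELU
    logits = tf.matmul(codings, W_dec) + b_dec
    # The outer AE reconstructs the raw inputs, so its output stays linear;
    # inner AEs reconstruct ELU codings, so an ELU output matches their range.
    outputs = logits if outer else tf.nn.elu(logits)
    return codings, outputs

# Outer AE trained on the data, inner AE trained on the outer AE's codings:
X = tf.placeholder(tf.float32, shape=[None, 784])
codings1, recon1 = build_autoencoder(X, n_hidden=300, outer=True)  # ELU+None
codings2, recon2 = build_autoencoder(codings1, n_hidden=150)       # ELU+ELU
```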
Thank you, Aurélien.
Dear Aurélien,
I recently purchased your book "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems" and benefited a lot from it. Thank you for sharing your knowledge.
Currently I'm adopting your AutoEncoder code in the section "Training one Autoencoder at a time in multiple graphs" for dimensionality reduction in my project.
Each layer has the encoder with `tf.nn.elu` activation and the decoder with linear activation (Jupyter Notebook line [19]). But the code applying the trained model (line [21]) used `tf.nn.elu` for the innermost decoder:

`hidden3 = tf.nn.elu(tf.matmul(hidden2, W3) + b3)`

Should we remove the activation part?

`hidden3 = tf.matmul(hidden2, W3) + b3`
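For reference, here is a sketch of what the full reconstruction stack in [21] would look like with that change (my reading, not the notebook's exact code; it assumes `X`, `W1`..`W4`, and `b1`..`b4` come from the individually trained autoencoders):

```python
hidden1 = tf.nn.elu(tf.matmul(X, W1) + b1)        # encoder of the outer AE (ELU)
hidden2 = tf.nn.elu(tf.matmul(hidden1, W2) + b2)  # encoder of the inner AE (ELU)
hidden3 = tf.matmul(hidden2, W3) + b3             # decoder of the inner AE: linear, as trained
outputs = tf.matmul(hidden3, W4) + b4             # decoder of the outer AE: linear
```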
Thanks.
Maosi Chen