better layer configuration output #28

Merged
merged 1 commit into develop on Nov 12, 2015
Conversation

@janm399 (Member) commented Nov 11, 2015

This PR allows the model to write out not just the number of units in each layer, but also each layer's activation function. The output in x_layers.txt is now:

1200 id 500 relu 200 relu 3 logistic

The matching muvr-ios PR muvr/muvr-ios#92 implements the load counterpart.
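
For illustration, here is a minimal sketch of how such a line could be produced. The names ACTIVATION_TAGS and write_layer_config are hypothetical, not the actual muvr code:

# Sketch: serialize (units, activation) pairs into one line such as
# "1200 id 500 relu 200 relu 3 logistic".
ACTIVATION_TAGS = {
    "Identity": "id",
    "Rectlin": "relu",
    "Tanh": "tanh",
    "Logistic": "logistic",
}

def write_layer_config(layers, path="x_layers.txt"):
    """Write '<units> <activation-tag>' pairs for all layers on one line."""
    parts = []
    for units, activation in layers:
        parts.append(str(units))
        parts.append(ACTIVATION_TAGS.get(activation, activation.lower()))
    with open(path, "w") as f:
        f.write(" ".join(parts) + "\n")

# Reproduces the example line above:
write_layer_config([(1200, "Identity"), (500, "Rectlin"),
                    (200, "Rectlin"), (3, "Logistic")])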

@@ -64,7 +66,7 @@ def generate_default_model(self, num_labels):
         nout=250,
         init=init_norm,
         bias=bias_init,
-        activation=Rectlin()))
+        activation=Tanh()))
Contributor:
Is this one intentional?

@janm399 (Member, Author):
Yes—insofar as I wanted to test that we can load it and use it in iOS. It happens to perform slightly better for the core dataset, too.
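
For context, a hypothetical Python equivalent of that load counterpart (the real implementation is the Swift code in muvr/muvr-ios#92):

def read_layer_config(path="x_layers.txt"):
    """Parse alternating '<units> <activation-tag>' tokens back into pairs."""
    with open(path) as f:
        tokens = f.read().split()
    return [(int(units), tag) for units, tag in zip(tokens[0::2], tokens[1::2])]

# For "1200 id 500 relu 200 relu 3 logistic" this returns:
# [(1200, 'id'), (500, 'relu'), (200, 'relu'), (3, 'logistic')]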

@tmbo (Contributor) commented Nov 12, 2015

👍

@tmbo mentioned this pull request Nov 12, 2015
@by-dam (Contributor) commented Nov 12, 2015

👍

by-dam added a commit that referenced this pull request Nov 12, 2015: better layer configuration output
@by-dam merged commit e66fed2 into develop Nov 12, 2015
@by-dam deleted the feature/layerconfig branch Nov 12, 2015 16:48