
minor bug fix #5058

Merged
merged 3 commits into tensorflow:master on Aug 30, 2018

Conversation

raymond-yuan
Contributor

  • fixed entropy sign
  • changed entropy implementation for numeric stability
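
For context, a minimal sketch of the two entropy formulations the description refers to, assuming TF 1.x with eager execution enabled (the example logits are illustrative). The naive form needs a small constant so log(0) never appears, while tf.nn.softmax_cross_entropy_with_logits_v2 computes the same quantity from the logits via a log-softmax, which is the numerically stable route:

import tensorflow as tf  # TF 1.x
tf.enable_eager_execution()

logits = tf.constant([[2.0, 1.0, 0.1]])
policy = tf.nn.softmax(logits)

# Naive entropy H(policy): the 1e-20 fudge factor guards against log(0).
naive = -tf.reduce_sum(policy * tf.log(policy + 1e-20), axis=1)

# Stable alternative: the cross entropy of the policy with itself equals its
# entropy, computed directly from the logits, so no epsilon is needed.
stable = tf.nn.softmax_cross_entropy_with_logits_v2(labels=policy, logits=logits)

print(naive.numpy(), stable.numpy())  # the two values should agree closely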

raymond-yuan requested a review from a team as a code owner on August 10, 2018, 16:53
@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here (e.g. I signed it!) and we'll verify it.


What to do if you already signed the CLA

Individual signers
Corporate signers

@raymond-yuan
Contributor Author

I signed it!

@googlebot

CLAs look good, thanks!

@qlzh727
Member

Not sure who is the owner of this code, @karmel and @fchollet.

  policy = tf.nn.softmax(logits)
- entropy = tf.reduce_sum(policy * tf.log(policy + 1e-20), axis=1)
+ entropy = -tf.nn.softmax_cross_entropy_with_logits_v2(labels=policy, logits=logits)
Member

Is there any reason that the entropy is negative here?

karmel requested reviews from MarkDaoust and random-forests and removed the request for karmel on August 30, 2018, 15:39
MarkDaoust self-assigned this on Aug 30, 2018
  policy_loss *= tf.stop_gradient(advantage)
- policy_loss -= 0.01 * entropy
+ policy_loss = 0.01 * entropy
Member

I think "policy_loss -= 0.01 * entropy" might be correct, overwriting the policy_loss does not make sense.

Member

You are totally correct. I fixed this before merging.

@MarkDaoust
Member

MarkDaoust commented Aug 30, 2018

@qlzh727 I think that negative is there because of the -= later on.
I removed the double negative.

But that didn't look right: this "entropy regularization" is supposed to discourage premature convergence, so it should encourage high entropy, not low entropy.

There seems to be a lot of confusion around this point in other tutorials on the subject, because of the way the paper is written.

I think this (subtracting the entropy) is right. I'm merging as is.

@raymond-yuan LMK if you disagree.
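
For readers following along, here is a sketch of how the entropy bonus fits into an advantage-weighted policy loss, assuming the entropy variable holds the non-negative H(policy); the function and argument names are illustrative, not the tutorial's exact code. Because H >= 0, subtracting beta * H lowers the loss for more uniform (higher-entropy) policies, which is the "encourage high entropy" behaviour described above:

import tensorflow as tf  # TF 1.x
tf.enable_eager_execution()

def policy_loss_with_entropy_bonus(logits, actions, advantage, beta=0.01):
  """Policy-gradient loss minus an entropy bonus (illustrative sketch)."""
  policy = tf.nn.softmax(logits)
  # Non-negative entropy H(policy), computed stably from the logits.
  entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=policy, logits=logits)
  # -log pi(a|s) for the taken actions, also computed from the logits.
  neg_log_prob = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=actions, logits=logits)
  # Scale by the advantage (held constant for the policy gradient), then
  # subtract the entropy bonus so higher-entropy policies get a lower loss.
  loss = neg_log_prob * tf.stop_gradient(advantage) - beta * entropy
  return tf.reduce_mean(loss)

logits = tf.constant([[2.0, 1.0, 0.1]])
loss = policy_loss_with_entropy_bonus(
    logits, actions=tf.constant([0]), advantage=tf.constant([1.5]))
print(loss.numpy())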

MarkDaoust merged commit 2aec950 into tensorflow:master on Aug 30, 2018