This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

float32 -> float16 cast consistency across implementations #13857

Merged · 7 commits merged into apache:master on Jan 30, 2019

Conversation

DickJC123
Contributor

@DickJC123 DickJC123 commented Jan 12, 2019

Description

While trying to get all the CI runners to pass for PR #13749, I discovered that the handling of the float32->float16 cast on the CPU varies based on whether the f16c library is available and enabled. If the f16c library is not available, as is the case for the Windows CI runner using an MSVC++ compiler, then the mshadow float2half() routine is used and the cast is performed by truncating the bits that don't fit in the float16 representation. The _cvtss_sh(data, 0) call employed by mshadow when the f16c library is present performs a round-to-nearest conversion, with ties rounded to the value with a 0 LSB. This round-to-nearest-even policy is also employed by the default GPU context implementation and by numpy.
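The difference is easy to demonstrate with numpy, which applies the same round-to-nearest-even policy as _cvtss_sh. The value below sits exactly midway between two adjacent float16 values, so a truncating cast and a rounding cast disagree on it. The bit-twiddling here is an illustrative sketch of the truncation path, not the actual mshadow code:

```python
import numpy as np

# float16 has 10 fraction bits, so the neighbors of interest around 1.0
# are 1 + 1/1024 and 1 + 2/1024. This float32 value is exactly midway
# between them, exposing the tie-breaking policy.
tie = np.float32(1.0 + 1.5 / 1024)

# numpy rounds to nearest, ties to even: the neighbor whose kept LSB
# is 0 wins, i.e. 1 + 2/1024.
rounded = np.float16(tie)

# Plain truncation (sketch of the old mshadow behavior) instead keeps
# the lower neighbor, 1 + 1/1024, by simply dropping the low 13 bits.
bits = np.float32(tie).view(np.uint32)
truncated = np.uint16(
    ((bits >> 16) & 0x8000) |                   # sign bit
    ((((bits >> 23) & 0xFF) - 112) << 10) |     # exponent, rebiased 127 -> 15
    ((bits >> 13) & 0x3FF))                     # top 10 fraction bits
print(float(rounded), float(truncated.view(np.float16)))
```

The two printed values differ by one float16 ULP, which is exactly the mismatch the new test catches on backends that truncate.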

In order to improve MXNet model and CI consistency across all backends, I'm correcting the mshadow float2half() implementation to perform matching round-to-nearest-even rounding. The first commit introduces only a test that demonstrates the problem, which I am expecting to fail on the Windows CI runner. Then I'll work up an mshadow PR with the new float2half() routine. Next, I'll change the mshadow commit used by MXNet to point to this PR, to demonstrate its effectiveness. Once the mshadow PR has been accepted, I'll change the mshadow commit used by MXNet once again so the PR can be accepted.

This PR should only affect models run without the f16c library on a CPU. I intend to make the round-to-nearest-even behavior the new default for these scenarios, bringing them in line with other systems. I'll provide a simple build flag to restore the legacy behavior. My new float2half() implementation is 50% faster on the CPU despite the additional rounding logic.
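For reference, the round-to-nearest-even rule being adopted can be sketched in a few lines. This is not the actual mshadow float2half() routine: it is a Python illustration for normal, finite float16 results only (no subnormals, infinities, or NaN, all of which the real C++ routine must also handle), and the name float2half_rne is hypothetical:

```python
import numpy as np

def float2half_rne(x):
    """Illustrative float32 -> float16 cast with round-to-nearest-even.

    Covers only values whose float16 result is a normal finite number,
    which is enough to show the rounding step this PR introduces.
    """
    bits = np.float32(x).view(np.uint32)
    sign = (bits >> 16) & 0x8000
    exp = ((bits >> 23) & 0xFF) - 112      # rebias exponent: 127 -> 15
    frac = (bits >> 13) & 0x3FF            # top 10 of 23 fraction bits
    half = np.uint16(sign | (exp << 10) | frac)

    # The 13 discarded fraction bits decide the rounding: round up when
    # they exceed half an LSB, or when they equal exactly half an LSB
    # and the kept LSB is odd (ties-to-even).
    discarded = bits & 0x1FFF
    halfway = 0x1000
    if discarded > halfway or (discarded == halfway and (half & 1)):
        half = np.uint16(half + 1)         # a carry may ripple into the exponent
    return half.view(np.float16)
```

On normal values this agrees with numpy's own cast, e.g. float2half_rne(3.14159) == np.float16(np.float32(3.14159)); the carry on round-up is what correctly promotes values like 1.99951... to 2.0.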

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • [x] All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • [x] Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

@anirudhacharya
Member

@mxnet-label-bot add [pr-awaiting-review]

@DickJC123 DickJC123 requested a review from szha as a code owner January 15, 2019 03:05
Contributor

@larroy larroy left a comment


Nice addition!

for numerator in range(0, denominator):
    for y in [-1.0, 0.0, 1.0]:
        small_delta = y / 2**fp32_fraction_bits
        val = (-1.0)**sign_bit * 2.0**exponent * (1.0 +
Contributor


nit: Could we break (1.0 + also in the next line for readability?

# Test requires all platforms to round float32->float16 with same round-to-nearest-even policy.
@with_seed()
def test_cast_float32_to_float16():
fp16_fraction_bits = 10
Contributor


shall we capitalize constants as per PEP8?

sym_output = exe.outputs[0].asnumpy()
for fp32_val, model_fp16_val, np_fp16_val in zip(input_np, sym_output, expected_output):
    if model_fp16_val != np_fp16_val:
        raise RuntimeError('fp32->fp16 cast mismatches seen, e.g. with val {}, model_fp16 = {},'
Contributor


Better to raise AssertionError or use https://nose.readthedocs.io/en/latest/testing_tools.html, since RuntimeError has different semantics

@sandeep-krishnamurthy
Contributor

@larroy - Can you please take another look at this PR? Your comments have been addressed.
@DickJC123 - Thanks for your contributions.

@DickJC123
Contributor Author

Now that a dependent mshadow PR has been merged, I will be updating the mshadow SHA used by this PR shortly, after which this PR will be ready for merging. Should be an easy approval process as this PR only introduces a test.

@DickJC123
Contributor Author

@larroy This PR is ready for your final review.

Member

@yuxihu yuxihu left a comment


LGTM.

@larroy
Contributor

larroy commented Jan 30, 2019

Sorry, having a look now.

Contributor

@larroy larroy left a comment


LGTM

@yuxihu
Member

yuxihu commented Jan 30, 2019

@mxnet-label-bot update [Operator, pr-awaiting-merge]

@marcoabreu added the pr-awaiting-merge (Review and CI is complete. Ready to Merge) and Operator labels and removed the Operator and pr-awaiting-review (PR is waiting for code review) labels on Jan 30, 2019
@szha szha merged commit c939c2d into apache:master Jan 30, 2019
stephenrawls pushed a commit to stephenrawls/incubator-mxnet that referenced this pull request Feb 16, 2019

* Added test showing float32->float16 discrepancy when mshadow float2half() is used.

* Temp update mshadow submodule SHA to point to PR368 (b211cb7).

* Temp switch to url = https://github.com/DickJC123/mshadow.git

* Update mshadow submodule SHA.

* Improve code style per reviewer comments.

* Move back to dmlc/mshadow.git, now with float->half rounding.

* Expand test_operator.py:test_cast_float32_to_float16 to test np.nan.
vdantu pushed a commit to vdantu/incubator-mxnet that referenced this pull request Mar 31, 2019
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
Labels
Operator, pr-awaiting-merge (Review and CI is complete. Ready to Merge)

7 participants