
[MXNET-1260] Float64 DType computation support in Scala/Java #13678

Merged
lanking520 merged 41 commits into apache:master on Jan 10, 2019

Conversation

@piyushghai (Contributor) commented Dec 18, 2018

Description

This PR introduces Float64/Double data type support in NDArrays in Scala. Currently we only allow precision up to Float32 in Scala, which causes issues when one tries to load a model trained using float64 in another language binding.

This also fixes two long-standing issues: fixes #11315 and fixes #10338.
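
Not part of the original description — a minimal, hedged sketch of what the added precision looks like from user code. The factory signatures, the Double-scalar operator, and the `toFloat64Array` accessor are assumptions based on the existing Float32 API and what this PR describes:

```scala
import org.apache.mxnet.{Context, DType, NDArray, Shape}

object Float64Sketch {
  def main(args: Array[String]): Unit = {
    // Create a double-precision NDArray instead of the default Float32.
    // (Factory signature assumed; the dtype support is what this PR adds.)
    val a = NDArray.ones(Shape(2, 2), ctx = Context.cpu(), dtype = DType.Float64)

    // Scalar arithmetic with a Double operand, one of the operations enabled here.
    val b = a * 2.5

    println(b.dtype)                           // expected: Float64
    println(b.toFloat64Array.mkString(", "))   // assumed Double accessor
  }
}
```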

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Comments

  • Interesting edge cases to note here
  • Need to complete the Float64 support in the other classes as well, and then compare training of a model using float32 vs. float64. The comparison would cover the precision of the loss, the accuracy of the trained model, and the memory occupied by the model during training.

@Roshrini (Member) commented:

@mxnet-label-bot Add [pr-work-in-progress, Scala]

@marcoabreu added the pr-work-in-progress (PR is still work in progress) and Scala labels Dec 18, 2018
@lanking520 (Member) left a comment:

Please add some Scala test cases

@piyushghai
Copy link
Contributor Author

@lanking520 I have already added the Scala tests in the NDArraySuite.scala class, in this commit: 5529f94
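
For readers who want a feel for what such a suite checks, here is an illustrative ScalaTest sketch. It is not copied from NDArraySuite.scala; the test name, factory signature, and `toFloat64Array` accessor are assumptions:

```scala
import org.apache.mxnet.{Context, DType, NDArray, Shape}
import org.scalatest.FunSuite

// Illustrative only: none of this is taken from the actual NDArraySuite.scala.
class Float64NDArrayExampleSuite extends FunSuite {
  test("zeros created with DType.Float64 report the right dtype and values") {
    val nd = NDArray.zeros(Shape(2, 1), ctx = Context.cpu(), dtype = DType.Float64)
    assert(nd.dtype === DType.Float64)
    assert(nd.toFloat64Array.toSeq === Seq(0.0, 0.0))
  }
}
```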

@piyushghai changed the title from "[MXNET-1260] [WIP] [DO NOT MERGE] Float64 support in NDArray in Scala" to "[MXNET-1260] [WIP] Float64 support in NDArray in Scala" Dec 27, 2018
@lanking520 (Member) left a comment:

Thanks for your great contribution! Overall it looks clean and tidy!

@piyushghai changed the title from "[MXNET-1260] [WIP] Float64 support in NDArray in Scala" to "[MXNET-1260] Float64 DType computation support in Scala" Jan 3, 2019
@piyushghai (Contributor, Author) commented:

@mxnet-label-bot remove [pr-work-in-progress]

@marcoabreu removed the pr-work-in-progress (PR is still work in progress) label Jan 3, 2019
@piyushghai (Contributor, Author) commented:

@mxnet-label-bot Add [pr-awaiting-review]

@marcoabreu added the pr-awaiting-review (PR is waiting for code review) label Jan 3, 2019
@piyushghai changed the title from "[MXNET-1260] Float64 DType computation support in Scala" to "[MXNET-1260] Float64 DType computation support in Scala/Java" Jan 4, 2019
@lanking520 (Member) left a comment:

LGTM

@andrewfayres (Contributor) left a comment:

LGTM

@lanking520 (Member) commented:

@piyushghai Please satisfy the lint god (╯‵□′)╯︵┻━┻

@lanking520 (Member) commented:

@piyushghai Please satisfy the lint god (‵□′)

@lanking520 merged commit ed7ca26 into apache:master Jan 10, 2019
zhaoyao73 added a commit to zhaoyao73/incubator-mxnet that referenced this pull request Jan 11, 2019
* upstream/master: (109 commits)
  Code modification for testcases of various network models in directory example (apache#12498)
  [CI] Prevent timeouts when rebuilding containers with docker. (apache#13818)
  fix Makefile for rpkg (apache#13590)
  change to compile time (apache#13835)
  Disabled flaky test (apache#13758)
  Improve license_header tool by only traversing files under revision c… (apache#13803)
  Removes unneeded nvidia driver ppa installation (apache#13814)
  Add Local test stage and option to jump directly to menu item from commandline (apache#13809)
  Remove MXNET_STORAGE_FALLBACK_LOG_VERBOSE from test_autograd.py (apache#13830)
  Fix scala doc build break for v1.3.1 (apache#13820)
  [MXNET-1263] Unit Tests for Java Predictor and Object Detector APIs (apache#13794)
  [MXNET-1260] Float64 DType computation support in Scala/Java (apache#13678)
  onnx export ops (apache#13821)
  [MXNET-880] ONNX export: Random uniform, Random normal, MaxRoiPool (apache#13676)
  fix minor indentation (apache#13827)
  Fixing a symlink issue with R install (apache#13708)
  remove useless code (apache#13777)
  ONNX ops: norm exported and lpnormalization imported (apache#13806)
  Add new Maven build for Scala package (apache#13819)
  Dockerfiles for Publish Testing (apache#13707)
  ...
piyushghai added a commit to piyushghai/incubator-mxnet that referenced this pull request Jan 22, 2019
haohuanw pushed a commit to haohuanw/incubator-mxnet that referenced this pull request Jun 23, 2019
[MXNET-1260] Float64 DType computation support in Scala/Java (apache#13678)

* Added Float64 as a supported datatype in NDArray

* Added unit tests for Float64 in NDArray

* Fix for failing Clojure unit tests

* Added Float and Double as MX_PRIMITIVES for computation in Scala

* Trying out second approach --> Private Impl methods with generic signature, and public methods calling the Impls

* Fixed errors in *= method

* Added Float64 in IO.scala and DataIter.scala

* Added another testcase for IO.DataDesc creation

* Fixed failing CI

* Added Float64 in Predictor class

* Added Float64 in Classifier class

* Added Double as a possible return type to classifyWithNDArray

* Added unit tests for Classifier and Predictor.scala classes for Float64/Double

* Approach 3 --> Using a trait to mirror Float and Double in Scala (the general idea is sketched after this list)

* Added comments on MX_PRIMITIVES.scala

* Added Float64/Double support for inference in ImageClassifier APIs

* Added unary- and compareTo in MX_NUMBER_LIKE

* Renamed MX_NUMBER_LIKE to MX_PRIMITIVE_TYPE

* Fixed linting issue

* Now specifying dType from the available data in copyTo and MXDataIter.scala for creating a new DataIterator

* Add primitives support handling to the generator for proper conversion

* Reduced code duplication in classify method in Classifier.scala

* Fix infer package for new signatures and address some bugs

* Removed code duplication in getPixelsArray

* remove debugging

* Changed classifyWithNDArray method in Classifier.scala

* Removed code duplication in predictImpl

* Satisfying lint god _/\_

* Fixed failing PredictorSuite test

* Renamed MX_FLOAT to Camel case

* Revert "Renamed MX_FLOAT to Camel case"

This reverts commit 9d7c3ce.

* Added an implicit conversion from int --> float to support int operations in NDArrays (these ops were already supported in the previous versions)

* Added Float64 as a training option to ImClassification Suite. Also added integration tests for it

* Satisfy Lint God _/\_

* Added Float64 support in Java NDArray

* Added Float64 support in Java's Predictor API

* Added yours truly to the Contributors list

* Added method comments on Predictor.predict with Array[Double] as a possible input

* Added method comments explaining what MX_PRIMITIVE_TYPE is

* Fixed errors caused by rebasing with master

* Added licenses to the files
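
The "Approach 3" commit above refers to a trait that mirrors Float and Double. The following self-contained sketch shows that general technique with hypothetical names; it is not the contents of MX_PRIMITIVES.scala:

```scala
// Hypothetical illustration of the "mirror Float and Double behind one trait" idea;
// none of these names come from the actual MX_PRIMITIVES.scala.
object PrimitiveMirrorSketch {
  import scala.language.implicitConversions

  sealed trait MXPrimitive                                   // plays the role of MX_PRIMITIVE_TYPE
  final case class MXFloat(data: Float) extends MXPrimitive
  final case class MXDouble(data: Double) extends MXPrimitive

  // Implicit conversions let existing call sites keep passing plain Float/Double values.
  implicit def fromFloat(f: Float): MXPrimitive = MXFloat(f)
  implicit def fromDouble(d: Double): MXPrimitive = MXDouble(d)

  // One public method serves both precisions and dispatches internally.
  def describe(p: MXPrimitive): String = p match {
    case MXFloat(f)  => s"float32 value $f"
    case MXDouble(d) => s"float64 value $d"
  }

  def main(args: Array[String]): Unit = {
    println(describe(1.0f))   // the Float conversion applies
    println(describe(2.0))    // the Double conversion applies
  }
}
```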
Labels
pr-awaiting-review (PR is waiting for code review), Scala
Projects
None yet
Development

Successfully merging this pull request may close these issues.

  • Support Double precision type in Scala
  • NDArray saved in Python cannot be loaded in Scala
7 participants