Tags: belonesox/onnxruntime

v1.10.0

add copyright (microsoft#9943)

v1.9.1

Force Windows AI NuGet pipeline to use Windows SDK 19041 (microsoft#9255) (microsoft#9256)

* Force Windows AI NuGet pipeline to use the 19041 Windows SDK, as 22000 causes a downlevel regression by importing LoadLibraryW

* move into quotes

Co-authored-by: Sheil Kumar <[email protected]>

Co-authored-by: Sheil Kumar <[email protected]>

v1.9.0

Fixes to rel-1.9.0 to compile and pass for AMD ROCm (microsoft#9144)

* Revert "Fix nightly CI pipeline to generate ROCm 4.2 wheels and add ROCm 4.3.1 wheels (microsoft#9101)"

This reverts commit 4788839.

* Add BatchNorm kernel for ROCm (microsoft#9014)

* Add BatchNorm kernel for ROCm, update BN test

* correct epsilon_ setting; limit min epsilon

* Upgrade ROCm CI pipeline for ROCm 4.3.1 and permit run inside container (microsoft#9070)

* try to run inside 4.3.1 container

* no \ in container run command

* remove networking options

* try with adding video render groups

* add job to build docker image

* try without 1st stage

* change alpha, beta to float

* try adding service connection

* retain huggingface directory

* static video and render gid

* use runtime expression for variables

* install torch-ort

* pin sacrebleu==1.5.1

* update curves for rocm 4.3.1

* try again

* disable determinism and only check tail of loss curve and with a much larger threshold of 0.05

* disable RoBERTa due to high run variability on ROCm 4.3.1

* put reduction unit tests back in

* Fix nightly CI pipeline to generate ROCm 4.2 wheels and add ROCm 4.3.1 wheels (microsoft#9101)

* make work for both rocm 4.2 and rocm 4.3.1

* fix rocm 4.3.1 docker image reference

* fix CUDA_VERSION to ROCM_VERSION

* fix ReduceConsts conflict def

* add ifdef to miopen_common.h as well

* trailing ws

Co-authored-by: wangye <[email protected]>
Co-authored-by: mindest <[email protected]>

v1.8.2

bump ORT version to 1.8.2 (microsoft#8630)

v1.8.1

Liqun/havenka/rel 1.8.1 round3 (microsoft#8246)

* Revert the cuda algo finding change as this causes a significant memory bloat. (microsoft#8181)

* Revert the cuda algo finding change as this causes a significant memory bloat.

* Address PR comment

* Make pipelines to support torch1.8.1 and torch1.9.0 (microsoft#8084)

* Add post-install command to build PyTorch CPP extensions from within onnxruntime package (microsoft#8027)

ORTModule requires two PyTorch CPP extensions that are currently JIT compiled. The runtime compilation can cause issues in environments that lack the full build requirements, or in environments with multiple instances of ORTModule running in parallel.

This PR adds a custom command to compile those extensions; it must be run manually before ORTModule is used for the first time. When users try to use ORTModule before the extensions are compiled, an error with instructions is raised.

PyTorch CPP Extensions for ORTModule can be compiled by running:
python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install

A full build environment is needed for this (a usage sketch follows at the end of this entry).

* Patch orttraining-ortmodule pipeline with latest fix on master

* add cuda version to build config

* lib path

* Remove auto doc gen

Co-authored-by: Pranav Sharma <[email protected]>
Co-authored-by: Thiago Crepaldi <[email protected]>
Co-authored-by: Baiju Meswani <[email protected]>
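
The build-then-use flow described for microsoft#8027 above amounts to a one-time extension build followed by normal ORTModule usage. A minimal sketch, assuming an onnxruntime-training install with PyTorch available and the extensions already built via the quoted install command; the model below is illustrative and not part of the release:

import torch
from onnxruntime.training.ortmodule import ORTModule

# Illustrative model; any torch.nn.Module can be wrapped the same way.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

# Wrapping hands execution of forward/backward to ONNX Runtime. Per the
# commit message, if the CPP extensions were not built beforehand with
#   python -m onnxruntime.training.ortmodule.torch_cpp_extensions.install
# ORTModule raises an error with build instructions instead of JIT compiling.
model = ORTModule(TinyNet())
print(model(torch.randn(8, 16)).shape)  # torch.Size([8, 4])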

v1.8.0

Cherry pick outstanding changes into release branch (round 2) (microsoft#7921)

* [OpenVINO-EP] Adding OpenVINO-EP samples to Msft Repo (microsoft#7826)

* Added ONNX_OV_EP samples

Added C++, Python, and C# samples using the OpenVINO Execution Provider.

Signed-off-by: MaajidKhan <[email protected]>

* [js/web] update README.md (microsoft#7894)

* Add API_IMPL_* blocks around shared provider methods as they are C APIs (microsoft#7908)

* Missing logic for cuda nuget package (microsoft#7911)

Co-authored-by: Maajid khan <[email protected]>
Co-authored-by: Yulong Wang <[email protected]>
Co-authored-by: Ryan Hill <[email protected]>

v1.7.2

CP Fixes to enable C# UWP Apps to install the Microsoft.AI.MachineLearning Package (microsoft#7129)

* Fix app packaging in UWP (microsoft#6804)

* Change msbuild condition for UAP

* update .netcore target as well

* create nuget packages with _native path

* validate path under _native directory for windowsai package

* pep8

* add diagnostic error message

* pep8

* use basename

* lib\uap10.0

* uap10

* build\\uap10.0

* Manually binplace winmds into appx when PackageReference is used.

* always binplace winmd regardless of packagereference since c# should work with packages.config also

* resolve all paths to full paths to avoid some reference warnings

* move winmds out of lib folder to prevent automatic component registration

Co-authored-by: Sheil Kumar <[email protected]>

* Only set _native folder for Microsoft.AI.MachineLearning package (microsoft#6939)

* only set _native folder for Microsoft.AI.MachineLearning package

Co-authored-by: Sheil Kumar <[email protected]>

Co-authored-by: Tiago Koji Castro Shibata <[email protected]>
Co-authored-by: Sheil Kumar <[email protected]>
Co-authored-by: Changming Sun <[email protected]>

v1.7.1

Patch release: 1.7.1

Adjust the build flags for the Nuget GPU and C API/Java GPU pipeline to remove debug symbols for Linux.

v1.7.0

Revert fuse conv fix err (microsoft#6859)

* merge fuse cuda conv revert

* resolve merge conflict revert exclude unsupported type

* add Stream for slicing

* remove file

* add Stream

Co-authored-by: RandySheriffH <[email protected]>

v1.6.0

Second round of cherry-pick (microsoft#6083)

* Fix PR microsoft#5550 reverted in microsoft#5911 (performance improvement for operator Transpose) (microsoft#5916)

* Improves implementation of transpose operator
* Fix issue mentioned in microsoft#5911
* adding unit test for function DoTransposeImpl

* Make operator TreeEnsemble 5x faster for batches of size 100,000 (microsoft#5965)

* improves processing time by 10
* extend unit test coverage
* better implementation for the multi-regression case
* better comment; keep parallelization by trees when there are not enough trees

* Initialize a structure in operator ReduceSum (microsoft#6005)

* fix initialisation issue

* Fuse MatMulIntegerToFloat only when scales are scalar (microsoft#6008)

The MatMulIntegerToFloat fusion currently fuses per-row and per-column MatMulInteger as well, which the MatMulIntegerToFloat kernel does not yet support. Limit the fusion to per-matrix (scalar scales) only until per-channel is fully supported (an illustrative sketch follows at the end of this entry).

* Disable Python 3.9 for training Python packaging build. (microsoft#6012)

Disable Python 3.9 for training Python packaging build. Python 3.9 is not supported by the PyTorch dependency.

* Fix bugs for 1: Calibrator should check model inputs; 2: (microsoft#6017)

quantize_inputs forgot to use the parameter initializer_use_weight_qtyp.

* Bump highlight.js from 10.2.1 to 10.4.1 in /nodejs

Bumps [highlight.js](https://github.com/highlightjs/highlight.js) from 10.2.1 to 10.4.1.
- [Release notes](https://github.com/highlightjs/highlight.js/releases)
- [Changelog](https://github.com/highlightjs/highlight.js/blob/master/CHANGES.md)
- [Commits](highlightjs/highlight.js@10.2.1...10.4.1)

Signed-off-by: dependabot[bot] <[email protected]>

* Work around the build break on macOS (microsoft#6069)

* Fix the build break in macos release

* revert android change

* Bump up API version for 1.6 release (microsoft#6076)

* Update version to 1.6.0 (microsoft#6041)

* Update version to 1.6.0

* Add v 1.5.3 info

* Updating WindowsAI and ONNX version

Co-authored-by: Du Li <duli@OrtTrainingDev0.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>

* Revert "Fuse MatMulIntegerToFloat only when scales are scalar (microsoft#6008)"

This reverts commit beb950e.

Co-authored-by: Xavier Dupré <[email protected]>
Co-authored-by: Yufeng Li <[email protected]>
Co-authored-by: Edward Chen <[email protected]>
Co-authored-by: Zhang Lei <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Pranav Sharma <[email protected]>
Co-authored-by: Du Li <duli@OrtTrainingDev0.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
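
For the MatMulIntegerToFloat note above, "per-matrix" means each quantization scale is a scalar (one value per tensor), while per-row and per-column quantization carry a vector of scales. A minimal sketch of that distinction, assuming the onnx Python package; the helper name is hypothetical and this is not the ONNX Runtime optimizer itself (the actual fusion lives in the C++ graph transformers):

import onnx
from onnx import numpy_helper

def scales_are_scalar(model: onnx.ModelProto, scale_names) -> bool:
    """Return True only if every named scale initializer holds a single
    value (per-matrix quantization), mirroring the condition under which
    the fusion is kept per the commit message above."""
    inits = {init.name: init for init in model.graph.initializer}
    for name in scale_names:
        init = inits.get(name)
        if init is None:
            return False  # scale is not a constant initializer; be conservative
        if numpy_helper.to_array(init).size != 1:
            return False  # per-row or per-column scales: skip the fusion
    return True

With per-tensor quantization the scale has shape () or (1,); per-column quantization of the second MatMul input would give a scale of shape (N,), which is the case the fusion now avoids.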