
Tags: 00liujj/MNN


0.2.1.4

[Tools:Bugfix] Fix wrong path of schema header file for quantization tools

0.2.1.3

[MNN:Bugfix, Converter:Bugfix, Express:Bugfix]

1. Fix transform crash
2. Fix ONNX GEMM conversion bug
3. Fix conversion bug for ONNX convolution with non-constant weights

0.2.1.2

fix tflite include

0.2.1.1

- build:

	- unify schema building in core and converter;
	- add more build scripts for Android;
	- add Linux build script for Python;

- ops impl:
	- add floor mod support in binary;
	- use eltwise impl in add/max/sub/mul binary for optimization;
	- remove fake double support in cast;
	- fix 5d support for concat;
	- add adjX and adjY support for batch matmul;
	- optimize conv2d back prop filter;
	- add pad mode support for conv3d;
	- fix bug in conv2d & conv depthwise with very small feature map;
	- optimize binary without broadcast;
	- add data types support for gather;
	- add gather ND support;
	- use uint8 data type in gather v2;
	- add transpose support for matmul;
	- add matrix band part;
	- add dim != 4 support for padding, reshape & tensor convert;
	- add pad type support for pool3d;
	- make ops based on TensorFlow Lite quantization optional;
	- add all & any support for reduction;
	- use type in parameter as output type in reduction;
	- add int support for unary;
	- add variable weight support for conv2d;
	- fix conv2d depthwise weights initialization;
	- fix type support for transpose;
	- fix grad output count for reduce grad and reshape grad;
	- fix priorbox & detection output;
	- fix metal softmax error;

- python:
	- add runSessionWithCallBackInfo interface (see the Python sketch after this entry);
	- add max node limit (1400) for visualization tool;
	- fix save error in python3;
	- align default dim;

- convert:
	- add extra design for optimization;
	- add more post converting optimizers;
	- add caffe v1 weights blob support;
	- add cast, unary, conv transpose support for onnx model;
	- optimize batchnorm, conv with variable weights, prelu, reshape, slice, upsample for onnx model;
	- add cos/sin/atan/tan support for unary for tensorflow model;
	- add any/all support for reduction for tensorflow model;
	- add elu, conv3d, pool3d support for tensorflow model;
	- optimize argmax, batchnorm, concat, batch to space, conv with variable weights, prelu, slice for tensorflow model;

- others:
	- fix size computer lock;
	- fix thread pool deadlock;
	- add express & parameters in express;
	- rewrite blitter chooser without static map;
	- add tests for expr;
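
A Python sketch of the new runSessionWithCallBackInfo interface mentioned above: per-op callbacks run before and after each operator and receive the op's tensors plus an info object. This is a minimal sketch, assuming the pymnn binding mirrors the C++ TensorCallBackWithInfo hooks; the OperatorInfo accessors (getName/getType/getFlops) and the model path are assumptions for illustration.

```python
# Per-op tracing with runSessionWithCallBackInfo (sketch).
# Assumes the pymnn callbacks mirror the C++ TensorCallBackWithInfo hooks;
# "mobilenet.mnn" is a hypothetical model path; input feeding is omitted.
import MNN

interpreter = MNN.Interpreter("mobilenet.mnn")
session = interpreter.createSession()

def before_op(tensors, op_info):
    # Runs before each operator; returning True lets execution continue.
    print("enter", op_info.getName(), op_info.getType())
    return True

def after_op(tensors, op_info):
    # Runs after each operator; getFlops() reports the op's estimated cost.
    print("leave", op_info.getName(), "flops:", op_info.getFlops())
    return True

interpreter.runSessionWithCallBackInfo(session, before_op, after_op)
```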

0.2.1.0

- dynamic computation graph (beta)

	- add support (see /express)
	- add tests
	- add benchmarks with it (/benchmark/exprModels)
- Python
	- MNN engine and tools published to pip
	- available on Windows/macOS/Linux (see the install sketch after this entry)
- Engine/Converter
	- add support for per-op benchmarking
	- refactor optimizer by separating steps
- CPU
	- add support for Conv3D, Pool3D, ELU, ReverseSequence
	- fix ArgMax, Permute, Scale, BinaryOp, Slice, SliceTf
- OpenCL
	- add half/float transform on CPU
	- add broadcast support for binary
	- optimize Conv2D, Reshape, Eltwise, Gemm, etc.
- OpenGL
	- add sub and real div support for binary
	- add support for unary
	- optimize Conv2D, Reshape
- Vulkan
	- add max support for eltwise
- Metal
	- fix metallib missing problem
- Train/Quantization
	- use express to refactor training code
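
With the engine on pip (pip install MNN), a basic inference run from Python looks roughly like the sketch below. The model file, the 1x3x224x224 float input and the dimension type are illustrative assumptions, not part of the release notes.

```python
# Basic inference sketch after `pip install MNN`.
# "mobilenet.mnn" and the 1x3x224x224 float input are illustrative assumptions.
import numpy as np
import MNN

interpreter = MNN.Interpreter("mobilenet.mnn")
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

# Wrap a numpy array in an MNN.Tensor and copy it into the session input.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
tmp = MNN.Tensor((1, 3, 224, 224), MNN.Halide_Type_Float,
                 data, MNN.Tensor_DimensionType_Caffe)
input_tensor.copyFrom(tmp)

interpreter.runSession(session)
output = interpreter.getSessionOutput(session)
print(output.getData()[:5])   # first few output values
```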

0.2.0.9

beta 0.2.0.9

- fix quantization tool compiling on Windows
- fix converter compiling on Windows
- fix eltwise optimization on Windows
- separate sse & avx for Windows
- add LeakyReLU support for TensorFlow
- fix reshape, const for TensorFlow
- fix dimension format error for ONNX ops
- optimize winograd, ReLU for OpenCL
- add fp16 availability & dimension size checks for OpenCL
- optimize GEMM for arm32
- fix ExpandDims shape calculation when input size == 1

0.2.0.8

beta 0.2.0.8

- add NaN check
- add quantization support for ScaleAdd Op
- add binary to eltwise optimization
- add console logs for quantization tool
- better document for quantization tool
- replace redundant dimension flags with dimension format
- optimize performance of TensorFlow Lite Quantized Convolution
- fix axis support for ONNX softmax
- fix getPerformance compile error on Windows

0.2.0.7

beta 0.2.0.7

- move docs to https://www.yuque.com/mnn
- fix bugs for CPU ops TopKV2 and quantized convolution
- add enqueue map buffer error handle for OpenCL
- add nullptr protection for extra tensor desc
- add failure protection for memory acquisition
- fix slice shape calculation
- refactor binary shape calculation

0.2.0.6

release 0.2.0.6

- fix bugs in quantization
- add evaluation tool for quantization
- add ADMM support in quantization
- fix lock in thread pool
- fix fusing for deconv
- fix reshape converting from ONNX to MNN
- turn off blob size checking by default

0.2.0.5

beta 0.2.0.5

- CPU
	- add support for DepthToSpace & SpaceToDepth ops
- OpenGL
	- add Android demo
	- add half / float runtime option
	- add support for ROIPooling, Squeeze
	- fix bugs in conv im2col
- OpenCL
	- fix Concat, Eltwise, Reshape bugs
- Tools
	- add KL threshold method in quantization tool (see the config sketch after this list)
	- support optimization for graphs with multiple RNNs
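
For context on the KL threshold method above: MNN's offline quantization tool reads a JSON config that selects the feature/weight quantization methods and the calibration data. The sketch below writes such a config from Python; the field names follow later MNN documentation and the values are placeholders, so treat both as assumptions for this 0.2.0.x release.

```python
# Sketch: generate a config for MNN's offline quantization tool.
# Field names follow later MNN docs and are assumptions for this release;
# the calibration folder, means and scales are placeholders.
import json

config = {
    "format": "RGB",                     # input image format
    "mean": [103.94, 116.78, 123.68],    # per-channel mean (placeholder)
    "normal": [0.017, 0.017, 0.017],     # per-channel scale (placeholder)
    "width": 224,
    "height": 224,
    "path": "./calibration_images/",     # folder of calibration images
    "used_image_num": 100,
    "feature_quantize_method": "KL",     # KL threshold method from this release
    "weight_quantize_method": "MAX_ABS"  # "ADMM" is the alternative added in 0.2.0.6
}

with open("quant_config.json", "w") as f:
    json.dump(config, f, indent=4)
```

The resulting file is passed to the quantization binary together with the float model and the target model path; consult the quantization tool document for the exact command line.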