
Implement mkldnn convolution fusion and quantization. #12530

Merged
merged 31 commits into apache:master on Oct 9, 2018

Conversation

@ZhennanQin (Contributor) commented Sep 12, 2018

Implement mkldnn convolution fusion and quantization.

Description

This PR is the implementation of this proposal.
@zheng-da, @azai91, @TaoLv, @pengzhao-intel @reminisce

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

  • Feature1, tests, (and when applicable, API doc)
  • Feature2, tests, (and when applicable, API doc)

Comments

  • If this change is a backward incompatible change, why must this change be made.
  • Interesting edge cases to note here

Implement mkldnn convolution quantization.
@zheng-da (Contributor):
Could you please split operator fusion and quantization into two PRs? It doesn't seem that these two pieces of work have to be in the same PR.

@ZhennanQin (Contributor Author):
@zheng-da There's a strong dependency between quantization and fusion because both change mkldnn_conv.cc, which is the key part of this PR. If I split them into two PRs, this file would need to be reviewed twice, because quantization changes it a lot (almost a rewrite), which would ultimately increase the code-review burden.

@kalyc (Contributor) commented Sep 14, 2018:
Thanks for your contribution @ZhennanQin. I noticed your PR build failed.

@mxnet-label-bot[pr-awaiting-review]

@marcoabreu added the pr-awaiting-review label (PR is waiting for code review) on Sep 14, 2018
@pengzhao-intel (Contributor) commented Sep 17, 2018:
@zheng-da @reminisce Could you help review this? BTW, the test cases will be added this week.

hash = hash * 2 + this->with_sum ? 1 : 0;
hash = hash * 2 + this->with_postsum_relu ? 1 : 0;
hash = hash * 2 + this->quantized ? 1 : 0;
return hash;
Contributor:
Where is this method used?

Contributor Author:
It's used to calculate the param hash when caching and reusing the mkldnn op primitive.

out_mem = mkldnn_output_t(
OutDataOp::Noop,
const_cast<mkldnn::memory *>(out_data[conv::kOut].GetMKLDNNDataReorder(
fwd->fwd_pd.dst_primitive_desc())));
Contributor:
Why is the action here Noop? The memory of out_data and the one in out_mem are different. Shouldn't we copy the data back to out_data somehow?

Contributor Author:
Actually, out_data always has the same memory description as fwd->fwd_pd, so we should use GetMKLDNNData instead of GetMKLDNNDataReorder. Then out_mem will always be the memory in out_data.

Contributor:
If so, please change it to GetMKLDNNData.
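For reference, a minimal sketch of the agreed change (since out_data already matches fwd->fwd_pd's memory description, no reorder or copy-back is needed):

// Sketch only: fetch out_data's existing MKLDNN memory directly instead
// of reordering it, so the Noop action is correct as-is.
out_mem = mkldnn_output_t(
    OutDataOp::Noop,
    const_cast<mkldnn::memory *>(out_data[conv::kOut].GetMKLDNNData()));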

if (mkldnn_param.with_sum)
cached_output_ = inputs[in_sum];
else
cached_output_ = outputs[kOut];
Contributor:
Why cache the input and output data? They may be reused somewhere else because of the memory planning in MXNet.

Contributor Author:
Yes, they change every time, so we reassign cached_input_ and cached_output_ on each forward. You can treat them as normal variables, as they point to different NDArrays according to mkldnn_param.

}
const int GetBoolHash() const {
int hash = 0;
hash = hash * 2 + this->with_bn ? 1 : 0;
Contributor:
I think I already made this comment in another PR: this hash function is prone to collisions and thus cannot be considered a hash function.

Contributor:
You might want to use bitflags instead.
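For illustration, a minimal sketch of the bitflag approach, using the flag members shown later in this thread (with_bn, with_sum, etc.). Note also that in the original snippet, hash * 2 + this->with_sum ? 1 : 0 actually parses as (hash * 2 + this->with_sum) ? 1 : 0, because ?: binds more loosely than +:

// Sketch only: give each boolean its own bit so that every combination
// of flags maps to a distinct integer and collisions are impossible.
int GetBoolHash() const {
  int hash = 0;
  hash |= static_cast<int>(this->with_bn)           << 0;
  hash |= static_cast<int>(this->with_relu)         << 1;
  hash |= static_cast<int>(this->with_sum)          << 2;
  hash |= static_cast<int>(this->with_postsum_relu) << 3;
  hash |= static_cast<int>(this->quantized)         << 4;
  return hash;
}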

Contributor Author:
This function is not used anymore; I simply removed it. Sorry about that.

*/
MXNET_DLL int MXGenBackendSubgraph(SymbolHandle sym_handle, const char *backend,
SymbolHandle *ret_sym_handle);

Contributor:
Why is this C API provided? It seems it's only used in testing.

Contributor Author:
It's not for testing but for the quantization script. For mkldnn quantization, we agreed to do fusion first and then quantization. So on the Python side, we need an API to generate the fused graph and then pass it to the quantization pass. Otherwise, we would have to allow simple_bind to return the graph after the subgraph pass.

CreateDefaultInputs(in_array, &in_array_fallback);
fcompute_(state_, op_ctx, in_array_fallback, req, out_array);
return;
}
Contributor:
Is it possible to move this to a separate PR? This modification should be reverted after the MKLDNN subgraph is implemented; if it's included in this PR, we'll have to revert it manually.

Contributor Author:
OK. I will move this part out.


private:
NodeAttrs attrs_;
Contributor:
Where is attrs_ used?

Contributor Author:
To query TIsMKLDNN for StatefulComputeExExecutor.
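For illustration, a rough sketch of that lookup, assuming the standard NNVM op-attribute pattern (TIsMKLDNN is the bool attribute this PR registers on _sg_mkldnn_conv; the surrounding code is hypothetical):

// Sketch only: use the stored attrs_ to check whether this stateful op
// is flagged as an MKLDNN op and needs the MKLDNN-specific handling.
static auto& is_mkldnn = nnvm::Op::GetAttr<bool>("TIsMKLDNN");
if (attrs_.op != nullptr && is_mkldnn.get(attrs_.op, false)) {
  // take the MKLDNN-specific path in StatefulComputeExExecutor
}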

i_fmt == mkldnn::memory::format::nChw8c ||
i_fmt == mkldnn_nChw16c) {
i_fmt = mkldnn::memory::format::nhwc;
}
Contributor:
Does quantization always use the channel-last layout?

Contributor Author:
For mkldnn, yes. nhwc should be the default int8 layout, just like nchw for fp32.

Contributor:
Question for both of you: if we quantize to int8 and store the params, will they be stored in nhwc by default (and thus not need to be converted each time the model is loaded)?

Contributor Author:
@KellenSunderland Sorry for the late response. For the mkldnn quantization flow, we won't quantize any params offline; instead, we use the fp32 params (e.g. weight and bias) for quantized convolution and quantize the convolution params online during the first forward pass. The code here sets the default output format of the 'quantize' op when it is used in the mkldnn quantization flow; it won't affect the non-mkldnn quantization flow.
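For illustration, a rough standalone sketch of the online, channel-wise weight quantization described here (the loop body mirrors the weight_scales snippet quoted later in this review; the function name, int8_range = 127, and the buffer layout are assumptions):

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Sketch only: compute a per-output-channel scale from the fp32 weight
// ranges, then quantize each channel's weights to int8.
static float MaxAbs(float a, float b) {
  return std::max(std::fabs(a), std::fabs(b));
}

void QuantizeWeightsChannelwise(const float *weight_ptr, int8_t *quantized_ptr,
                                const float *weight_c_min, const float *weight_c_max,
                                int channels, int offset,
                                std::vector<float> *weight_scales) {
  // weight_scales must be pre-sized to `channels` entries.
  const float int8_range = 127.0f;
  for (int c = 0; c < channels; ++c) {
    const float weight_range = MaxAbs(weight_c_min[c], weight_c_max[c]);
    weight_scales->at(c) = int8_range / weight_range;
    const float *fp_ptr = weight_ptr + c * offset;  // fp32 weights of channel c
    int8_t *q_ptr = quantized_ptr + c * offset;     // int8 destination
    for (int i = 0; i < offset; ++i)
      q_ptr[i] = static_cast<int8_t>(std::lround(fp_ptr[i] * weight_scales->at(c)));
  }
}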

struct MKLDNNConvFusionParam {
MKLDNNConvFullParam full_conv_param;
std::shared_ptr<BatchNormParam> bn_param;
};
Contributor:
This is quite confusing. MKLDNNConvFullParam also contains all the flags used for conv fusion. Why not merge MKLDNNConvFullParam and MKLDNNConvFusionParam?

Contributor Author:
It's an abstraction that isolates the mkldnn convolution params from the fusion params:
MKLDNNConvFullParam is defined in mkldnn_convolution-inl.h; it contains only the options the convolution needs and is passed to MKLDNNConvolutionForwardFullFeature.
MKLDNNConvFusionParam is defined in mkldnn_conv-inl.h; it is used for SgMKLDNNConvParamParser and to support the fusion-related functionality in SgMKLDNNConvOperator::Forward.

Contributor:
MKLDNNConvFullParam contains the following data structure. It's used for fused convolution, right?

struct MKLDNNConvParam : public dmlc::Parameter<MKLDNNConvParam> {
  // When adding more members into this class, please double check GetHash()
  // won't overflow.
  bool with_bn;
  bool with_relu;
  bool with_sum;
  bool with_postsum_relu;
  bool quantized;
  bool weight_channelwise_scale;

Contributor Author:
@zheng-da Yes, you're right. MKLDNNConvFullParam contains all the parameters for the mkldnn convolution primitive.

// MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
MKLDNNConvolutionForwardFullFeature(full_param, ctx, fwd, inputs, req, outputs);
// MKLDNN_OPCHECK_RUN(ConvolutionCompute<cpu>, attrs, ctx, inputs, req,
// outputs);
Contributor:
Why are these two lines commented out?

Contributor Author:
Will remove them.

src/operator/subgraph/mkldnn/mkldnn_conv.cc (outdated; resolved)
@marcoabreu (Contributor):
How is it possible that GetBoolHash was removed? I thought it was there to support the caching. Is it not actually required?

@ZhennanQin (Contributor Author):
@marcoabreu Because for the original convolution op, the newly created params are not used. And _sg_mkldnn_conv is a stateful op, so it doesn't rely on caching (it doesn't use the caching mechanism).



if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Generate a calibrated quantized model from a FP32 model with MKL-DNN support')
Member:
Could you update readme.md with an example of how to run this script?

Contributor Author:
We do want to provide resnet50v1 as an example, but we don't know where to put the pre-trained model and its parameter file. Do you have any suggestions on where to upload them?

Contributor:
@eric-haibin-lin It's a good idea. The quantization feature is improved a lot by this PR and we need a clear README. @xinyu-intel please draft a README.

Contributor:
BTW, could we upload our model/parameters to http://data.mxnet.io/data/ so that end users can reproduce the INT8 performance and accuracy without training the model again?

Member:
There's an Apache MXNet S3 bucket. @szha can help you with that.


out = SymbolHandle()
backend = "MKLDNN"
check_call(_LIB.MXGenBackendSubgraph(sym.handle, c_str(backend), ctypes.byref(out)))
Member:
Calling the C API in the example seems not user-friendly. Do we want to have something like this in symbol.py? @zheng-da

Contributor:
Agree. It's better to provide a Python API for this.

@@ -40,7 +40,7 @@
 from ..module import Module


-def _quantize_params(qsym, params):
+def _quantize_params(qsym, params, th_dict):
Member:
Does th_dict mean threshold_dict? I don't understand what th stands for.

Contributor Author:
I guess it means threshold_dict; @reminisce, can you explain?

Contributor:
Yes, it means threshold.

Contributor:
Probably worth calling it thresh_dict or thresholds_dict. Still a pretty concise name, and it would avoid confusion.

@@ -408,12 +420,23 @@ def _load_params(params, logger=logging):
raise ValueError('Unsupported params provided. Must be either a path to the param file or'
' a pair of dictionaries representing arg_params and aux_params')

def save_params(fname, arg_params, aux_params, logger=None):
Member:
This is considered a public API under the contrib.quantization namespace. What is the necessity of adding such an API?

Contributor Author:
This already exists inside imagenet_gen_qsym.py. Will remove it.

inline bool NeedQuantize(NodePtr node,
                         const std::unordered_set<std::string> excluded_nodes) {
  static auto& quantized_op_map = Op::GetAttr<mxnet::FQuantizedOp>("FQuantizedOp");
  return quantized_op_map.count(node->op()) && !excluded_nodes.count(node);
Member:
Use a const reference to save a copy.
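That is, roughly:

// Sketch of the suggested fix: take the set by const reference so each
// call no longer copies the whole unordered_set.
inline bool NeedQuantize(NodePtr node,
                         const std::unordered_set<std::string> &excluded_nodes) {
  static auto& quantized_op_map = Op::GetAttr<mxnet::FQuantizedOp>("FQuantizedOp");
  return quantized_op_map.count(node->op()) && !excluded_nodes.count(node);
}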

for (int c = 0; c < static_cast<int>(channel); ++c) {
DType weight_range = MaxAbs(weight_c_min[c], weight_c_max[c]);
weight_scales->at(c) = int8_range / weight_range;
DType *fp_ptr = weight_ptr + c * offset;
Member:
nit: use const DType* where it applies.

return node;
}

static inline bool StringEndsWith(std::string const &str,
Member:
This can probably be moved to common/utils.cc.
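For reference, a typical implementation of this helper (a sketch; only the signature appears in the diff above):

static inline bool StringEndsWith(std::string const &str,
                                  std::string const &suffix) {
  // True when str is at least as long as suffix and its tail equals suffix.
  return str.size() >= suffix.size() &&
         str.compare(str.size() - suffix.size(), suffix.size(), suffix) == 0;
}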

public:
/*! \brief pattern match status */
enum SelectStatus {
sFail = 0,
Member:
Usually we use the prefix k for enums (kFail).

Member:
sFail -> kFail, sStart -> kStart, etc.
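That is, something like the following (only sFail and sStart are visible in the diff; any further enumerators would be renamed the same way):

/*! \brief pattern match status */
enum SelectStatus {
  kFail = 0,
  kStart,
  // ... remaining states, each with the k prefix
};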

dequantize_node->inputs.emplace_back(NodeEntry{mirror_node, max_index, 0});
dequantize_node->op()->attr_parser(&(dequantize_node->attrs));
if (node->is_variable() && node->attrs.name == "data") {
// Instert identity for data to collect calib for it.
Contributor:
Instert -> Insert

// each output entry will connect to an unique internal output.
virtual void ConnectSubgraphOutputs(
const nnvm::NodePtr n,
std::vector<nnvm::NodeEntry *> *output_entries) const {
Contributor:
I think it's better to check whether n has the same number of outputs as output_entries.

Contributor Author:
This can't be guaranteed, as output_entries may contain duplicated entries from n.


mod_sg.forward(batch, is_train=False)
for output_sg in mod_sg.get_outputs():
output_sg.wait_to_read()
Contributor:
Shouldn't you compare the outputs of the fused version and the unfused version?

Contributor Author:
Yes, that's done by line 119.

@@ -53,6 +53,7 @@ def _quantize_params(qsym, params):
qsym : Symbol
Quantized symbol from FP32 symbol.
params : dict of str->NDArray
th_dict: dict of min/max pairs of layers' output
Contributor:
I may be misunderstanding something here, but is the thresholding applied to the output? My understanding was that it's usually applied to the weights during quantization.

Contributor Author:
It applies to the output as well.

@@ -696,3 +696,21 @@ int MXSetCalibTableToQuantizedSymbol(SymbolHandle qsym_handle,
*ret_qsym_handle = s;
API_END_HANDLE_ERROR(delete s);
}

int MXGenBackendSubgraph(SymbolHandle sym_handle, const char *backend,
Contributor:
By 'backend' in this context, do you mean the symbol that's a placeholder for the fused mkldnn call?

Contributor Author:
What do you mean by a placeholder for the fused mkldnn call?
This API is intended to convert a symbol into a backend-specific symbol. In the quantization flow, we need to do fusion first and then quantization. So at the Python level, we need to get the backend-specific symbol with all backend-specific fusion applied and pass it to the quantization pass. That's why we need this API.

Contributor:
I guess my point is that 'backend' is an overloaded term here, so to me it's confusing when you say you're converting a symbol to a backend-specific symbol.
Am I understanding correctly that when you're fusing ops and calling this function, the symbol you begin with (sym_handle) represents a graph of NNVM ops which use the default MXNet backend, and the symbol you're converting to represents a fused operator targeting an MKLDNN backend?

Contributor Author:
Yes, your understanding is correct.

nd_cpu = *this;
#if MXNET_USE_MKLDNN == 1
if (nd_cpu.IsMKLDNNData())
Contributor:
Nit: I believe the project encourages braces even on single-line conditionals.

Contributor Author:
I don't see such a rule in the code itself. Single-line conditionals without braces can be found everywhere, even in this file; see lines 1591, 1654, 1669 ...

Contributor:
I just bring it up because I was corrected on this in another PR. Not sure if it's formalized anywhere.


if __name__ == "__main__":
import nose
nose.runmodule()
Contributor:
nit: newline

Contributor Author:
Why doesn't make lint report this?

Contributor:
No idea; for source files it should.

@KellenSunderland (Contributor):
If you guys could rebase and push this, it would really help us verify that Travis is now working correctly.

@KellenSunderland (Contributor):
Thanks for rebasing, gentlemen!

@xinyu-intel (Contributor):
@KellenSunderland The CI looks good now :)

// Connect subgraph internal output with external output entries. By default,
// each output entry will connect to an unique internal output.
virtual void ConnectSubgraphOutputs(
const nnvm::NodePtr n,
Contributor:
Better to name n subgraph_node. Please follow the style here and add descriptions for the function and its parameters: https://github.com/apache/incubator-mxnet/blob/master/src/operator/subgraph/subgraph_property.h#L62

Contributor Author:
@reminisce The code style is not consistent in this file. SubgraphSelector uses a different coding style from SubgraphProperty, which makes it confusing which style to follow. For example, SubgraphSelector uses /*! */ style comments with param descriptions and long parameter names (e.g. nnvm::Node &input_node), while SubgraphProperty uses \ style comments without param descriptions and short parameter names (e.g. const nnvm::Symbol &s).
Changing these two SubgraphProperty functions to the SubgraphSelector style doesn't make sense to me. If you insist, I'd rather adjust the code style for the whole file.

// Connect subgraph internal input with external input entries. By default,
// each input entry will connect in top sorted order.
virtual void ConnectSubgraphInputs(
const nnvm::NodePtr n, std::vector<nnvm::NodeEntry *> *input_entries,
Contributor:
Same here for the name and description.

*output_entries->at(i) = nnvm::NodeEntry{n, static_cast<uint32_t>(i), 0};
}
}
// Connect subgraph internal input with external input entries. By default,
Contributor:
Same here.

node->inputs[0] = tmp;
std::rotate(input_entries->begin(), input_entries->begin() + 1,
input_entries->end());
std::rotate(orig_input_entries->begin(),
Contributor:
This will change the topo-sorted order of orig_input_entries. Have you tested the effect of this change?

Contributor Author:
Added code in bind and simple_bind to handle the case where the input order changes, and added a test to verify this (test_pos_conv_add2 is a case where the input order changes; we added a test that uses bind with an input list and compares the result with the non-fused version).
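For reference, std::rotate(first, middle, last) moves [middle, last) to the front and [first, middle) to the back, which is exactly what reorders the inputs here; a minimal standalone illustration:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
  std::vector<int> v{0, 1, 2, 3};
  // Rotate left by one, as in the snippet above: the first entry moves
  // to the back and the rest of the topo-sorted order shifts forward.
  std::rotate(v.begin(), v.begin() + 1, v.end());
  for (int x : v) std::cout << x << ' ';  // prints: 1 2 3 0
  return 0;
}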

@eric-haibin-lin (Member):
Looks like some tests are failing.

@ZhennanQin (Contributor Author):
@eric-haibin-lin Fixed.

@eric-haibin-lin eric-haibin-lin merged commit ad027ca into apache:master Oct 9, 2018
@eric-haibin-lin (Member):
Great contribution. Thanks!

// setup in_args_map
std::unordered_map<std::string, NDArray> in_args_map;
for (size_t i = 0; i < in_args->size(); ++i) {
in_args_map[arg_names[i]] = in_args->at(i);

@mbrookhart:
@zheng-da @ZhennanQin MXNet doesn't actually require that inputs have unique names, so this change is causing errors in a couple of unit tests in the nGraph PR (one tensor gets duplicated for inputs with duplicate names). I understand the need to reorder the inputs after partitioning, and that the PartitionGraph pass will return different NodePtrs, but is there a more robust indicator we can use here?

@ZhennanQin (Contributor Author) Oct 10, 2018:
@mbrookhart I think MXNet does require inputs to have unique names; otherwise, how would we distinguish inputs with the same name? According to the symbol.bind API:

args (list of NDArray or dict of str to NDArray) 
Input arguments to the symbol.
If the input type is a list of NDArray, the order should be same as the order of list_arguments().
If the input type is a dict of str to NDArray, then it maps the name of arguments to the corresponding NDArray.
In either case, all the arguments must be provided.

So basically, the name is the only ID for each input.

@mbrookhart Oct 10, 2018:
https://github.com/apache/incubator-mxnet/blob/c98b19e2d108a3861d89b475927e8a21a913e540/tests/python/unittest/test_operator.py#L1171

If it's a list, the names don't have to be unique. In this unit test, both inputs are named "data".

@mbrookhart:
In general, MXNet seems to determine node IDs by shared_ptr memory address. Unfortunately, that trick doesn't work with PartitionGraph because the pass copies the nodes. :(

Contributor Author:
"If the input type is a list of NDArray, the order should be the same as the order of list_arguments()."
If there are inputs with the same name, then in your case list_arguments() will look like ['data', 'data', 'data', ...]. How can we know which 'data' is data2?

@mbrookhart:
The input order to the graph is determined by depth-first search in the constructor of the IndexedGraph and/or by depth-first search when getting the inputs for the symbol. The order in bind simply needs to match the order of that DFS. It's not used extensively, but there are a handful of cases in the unit tests where it happens, including some RNN tests.
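For reference, a sketch of how that DFS order arises (nnvm::DFSVisit is the real traversal helper; the surrounding code is illustrative):

// Sketch only: variables are collected in depth-first order over the
// graph's outputs, which is the order list-style bind must match.
std::vector<nnvm::NodePtr> inputs;
nnvm::DFSVisit(sym.outputs, [&](const nnvm::NodePtr &n) {
  if (n->is_variable())
    inputs.push_back(n);
});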

Contributor Author:
Yeah, I understand that you can find the order from the code, but it's not guaranteed, and you shouldn't make any assumption based on it, as it may change in the future: the order isn't part of the list_arguments() spec, at least for now.

I guess we should answer this question first: should we support inputs with the same name? If the answer is no, then we need to rename the duplicated inputs in the unit tests and add corresponding checks and documentation to disallow users from doing that.

If the answer is yes, then we need to define a clear way to distinguish inputs other than by name at the API level, instead of relying on the undocumented DFS order.

According to the current API design, I guess inputs with the same name shouldn't be supported, as the order of list_arguments() is unique only if the inputs have unique names. Adding the DFS order to the list_arguments() spec isn't user-friendly, as it's hard for an end user to work out the final DFS order in a complex topology: a single op at the user level (e.g. rand_zipfian) may consist of many small ops in the final computation graph.

@zheng-da (Contributor):
I thought graph partitioning preserved the order of inputs. It now doesn't?

@ZhennanQin (Contributor Author):
@zheng-da Graph partitioning may now change the order of inputs. This is a basic requirement for graph optimization: for example, if we find a redundant op that doesn't contribute to the graph output, shouldn't we remove it? If the answer is yes, then the input list will change.
Besides, the current API design provides poor support for inputs with the same name (if you treat the undocumented behavior as a kind of support). We should fix this anyway.

@mbrookhart:
:) A bug someone is using isn't a bug, it's a feature. I don't know who wrote test_maximum_minimum, but it feels like they were trying to test the duplicate-name case?

@ZhennanQin ZhennanQin deleted the mkldnn_fusion_int8 branch October 12, 2018 02:46
SamanthaFeidFischer added a commit to SamanthaFeidFischer/incubator-mxnet that referenced this pull request Nov 17, 2018
* Implement mkldnn convolution fusion and quantization. (#12530)

* Implement mkldnn convolution fusion.
Implement mkldnn convolution quantization.

* Fix lint

* Fix performance regression caused by mkldnn fallback.

* clean up include

* Fix msbuild on openmp pragma.

* Fix quantization test, allow to use original op names as exclude layer for quantization.

* Fix unittest.

* Fix unittest

* fix lint

* Add post quantize fusion

* add test case

* add head license in test case

* Remove GetBoolHash()

* Remove mkldnn fallback change.

* Address Haibin's comments.

* Add TIsMKLDNN for _sg_mkldnn_conv temporarily.

* Address reminisce's comments.

* Handle the case that inplace fail.

* pass unit test.

* Add symbol api get_backend_symbol()

* Retrigger ci

* update the test case

* Check subgraph index.

* Use index as FAvoidQuantizeInput's parameter.

* Add mkldnn_hwigo support as quantization needs.

* Address KellenSunderland's comments.

* Handle input order change after subgraph pass.

* Fix ci test

* Introduction to Clojure-MXNet video link. (#12754)

* [MXNET-915] Java Inference API core wrappers and tests (#12757)

* Core Java API class commit

* Update ScalaStyle max line length to 132 instead of 100

* Disabled flaky test: test_mkldnn.test_Deconvolution (#12770)

* Add mkl-dnn to docker install method (#12643)

* add mkl-dnn to docker install method

* add mkl for gpu

* add docker for windows

* Improve mkldnn fallback. (#12663)

* Fix regression in MKLDNN caused by PR 12019 (#12740)

* add flag to elementwise_add

* fix flatteng

* retrigger

* Fixed broken link for Baidu's WARP CTC (#12774)

* Updated CONTRIBUTORS.md to include lebeg and gigasquid, moved mabreu to committers section (#12766)

* Use modern onnx API to load model from file (#12777)

* Update env_var.md (#12702)

* fix cnn visualization tutorial (#12719)

* [MXNET-979] Add fix_beta support in BatchNorm (#12625)

* Add fix_beta support in BatchNorm CPU implementation

* Fix lint checks. Update GPU tests

* Fix gpu tests

* make fix_beta not available for sparse. Update fix_beta for mkldnn

* Make default fix_beta to False for backward compatibility

* Add fix_beta to cudnn batchnorm operator

* Add tests for missing fix_beta and fix_gamma params

* fix indentation

* Fix failing tests

* simplify the cases with defaults for gamma, beta

* [MXNET-947] Expand scala imclassification example with resnet (#12639)

* [MXNET-947] Scala imclassification example with Resnet

* R fix metric shape (#12776)

* Revert "[MXNET-979] Add fix_beta support in BatchNorm (#12625)" (#12789)

This reverts commit 0bab6d529343f0ce186859ba75c9bb02067e9cfe.
Because master branch started to fail with this change.

* Updated tvm submodule head (#12764)

* Updated tvm submodule head

* Remove FInplaceIdentity attr for cast and _backward_cast

* Adagrad optimizer with row-wise learning rate (#12365)

* Proximal Group Adagrad optimizer

* Remove proximal implementation and rename to GroupAdagrad

* Remove superfluous doc

* Remove superfluous argument

* Fix mismatch shapes (#12793)

* mismatch shape switch

* closing bracket

* closing bracket

* Make Gluon download function to be atomic (#12572)

* use rename trick to achieve atomic write but didn't support python2 and windows

* add test for multiprocess download

* implement atomic_replace referred by https://github.com/untitaker/python-atomicwrites

* change the number of testing process to 10

* add docstring and disable linter

* half way to address some issue reviewer have

* use warning instead of raise UserWarn

* check for sha1

* Trigger CI

* fix the logic of checking hash

* refine the error message

* add more comments and expose the error message to the user

* delete trailing whitespace

* rename _path_to_encode to _str_to_unicode

* fix the error message bug and add remove when the movefile fail on windows

* add remove temp file for non-windows os

* handle the OSError caused by os.remove

* Trigger CI

* use finally to raise failure of atomic replace

* add missing try except block for os.remove

* add retries value to error message

* Re-enables test_dropout (#12717)

* [MXNET -1004] Poisson NegativeLog Likelihood loss (#12697)

* PoissonNLLLoss function to compute negative log likelihood loss

* Removing debugging print statements

* Pylint code formatting problems addressed

* Added Stirling approximation for factorial term in the denominator and test case for the same

* Separated the test cases for Flag value for logits and compute_full

* Added comments for package- numpy inclusion and some pylint formatting

* Trigger CI

* Markdown file updted. Added entry for Poissons NLLLoss

* Fixing pending documentation issue

* Documentation docstring changed

* PR Comment to remove extra newline removed.

* Symbol PI corrected

* epsilon spellicng correction

* More unit tests added - testing with mod.score() and mod.fit()

* changed the number of epochs

* PR Comments addressed added mod score tests and a newline

* Empty line added

* Adding hybridized test

* Trigger CI

* Variable names changed

* Update osx.mk - Added "apple" to USE_BLAS comment (#12819)

Added "apple" to USE_BLAS comment because it is one of the versions that are possible. 
Currently the comment only has "mkl, blas, atlas, openblas" that can be used

* [MXNet-1002] Add GluonCV and NLP tookits, Keras, and developer wiki to navigation (#12704)

* refactor and sync nav bar between desktop and mobile

* update dev wiki url

* bump file for CI

* remove htaccess change from this pr

* removing keras for now

* bumping for CI

* fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794)

* fixed symbols naming in RNNCell and LSTMCell

* fixed GRUCell as well

* added test

* fixed tests?

* simplify mac mkldnn build (#12724)

* remove guard that prevent omp flag in mac

* udpate doc for mac make build

* update docs

* update readme

* set opencv to 1 in instructions

* remove disable opencv line

* update mac docs

* fix indent

* Change the way NDArrayIter handle the last batch (#12545)

* 1. move the shuffle to the reset 2. modify the roll_over behavior accordingly

* refactor the concat part

* refactor the code

* implement unit test for last_batch_handle

* refactor the getdata part

* add docstring and refine the code according to linter

* 1. add test case for NDArrayIter_h5py 2. refactor the implementation

* update contributions doc

* fix wording

* update doc for roll_over

* 1. add test for second iteration of roll_over 2. add shuffle test case

* fix some wording and refine the variables naming

* move utility function to new file

* move utility function to io_utils.py

* change shuffle function name to avoid redefining name

* make io as a module

* rename the utility functions

* disable wildcard-import

* fix the algorithm

* refactor the code

* test the NDArrayIter with different combinations of shuffle=True, data_source type and lables

* add edge case of label data for csr NDArrayIter

* trigger Travis CI

* handle the 'list' of data source

* check the list of data source

* fix the extra blank

* Trigger CI

* add _ to the utility functions

* Trigger CI

* update several test cases

* add test case for airbnb

* fix the typo

* fix wrong labels data shape

* switch the order of condition to make more sense

* [MXNET-707] Add unit test for mxnet to coreml converter (#11952)

* Add unittest to coreml converter

* Add unittest to coreml converter

* Add docstring and remove unused method

* updated test and removed unittest folder

* remove unittest

* Add coreml test to CI

* fix lint

* install mxnet-to-coreml for testing

* exclude test that takes too long

* linting to 100 max line width

* Add embedding to print_summary (#12796)

* Scala Docs - Replace old Symbol api usages (#12759)

* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731)

* ONNX export/import: DepthToSpace operator

* ONNX import/export: SpaceToDepth operator

* ONNX import/export: Tests for SpaceToDepth

* R install instructions update for macOS (#12832)

* add prereqs for R installation on Mac

* pin openblas for mac R install to 0.3.1

* Fixed __setattr__ method of _MXClassPropertyMetaClass (#12811)

* fixed indentation

* simplified code

* Fixed regex for matching platform type in Scala Benchmark scripts (#12826)

* Added context object to run TestCharRnn example (#12841)

* [MXNET-703] Show perf info for TensorRT during tests (#12656)

This PR makes sure perf information printed during TensorRT test runs
is correctly displayed when run in CI.

* Update Operator Implementation Tutorial (#12230)

* update op creation docs

* add flakiness checker and link to gradient checking

* address comments

* update reference line number

* fix comments

* Fix broken links (#12856)

* Fix Flaky Topk (#12798)

* fix flaky topk

* try to fix

* remove the usage of IndexFill

* fix

* add docstring

* Add Psroipooling CPU implementation (#12738)

* add psroipooling cpu impl

* minor fix

* revert copyright

* fix testcase

* add openmp

* no openmp for backward

* ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646)

* ONNX export: Fully connected operator with no bias

* ONNX export: Helper function to convert bool string attributes to int

* ONNX export: ReduceSum operator

* ONNX import/export: Make pow backward compatible

* ONNX export: Square operator

* Undefined name: load_model() --> utils.load_model() (#12867)

* Undefined name: load_model() --> utils.load_model()

As discussed at:
* https://github.com/apache/incubator-mxnet/commit/815f36ce8b4ed16fe27d500f5c8c930cd10cee5c#r30956015

* Force a rebuild

* Force a rebuild

* ONNX export/import: Selu (#12785)

* Sparse support for logic ops (#12860)

* remove check

* fix lint

* fix gpu build

* add a tutorial for the subgraph API. (#12698)

* add tutorial.

* update.

* update.

* update.

* add test.

* fix subgraph test.

* update.

* update.

* update.

* add comments.

* remove test.

* update image path.

* update.

* update.

* update.

* fix lint.

* add link.

* fix lint.

* MKL-DNN Quantization Examples and README (#12808)

* add gluoncv support

* add ssd readme

* improve ssd readme

* add custom readme

* add ssd model link

* add squeezenet

* add ssd quantization script

* fix topo of args

* improve custom readme

* fix topo bug

* fix squeezenet

* add squeezenet accuracy

* Add initializer for min max to support quantization

* add dummy data inference

* add test case for init_param

* add subgraph docs

* improve docs

* add two models and fix default rgb_std to 1

* fix doc link

* improve MKLDNN_README

* add quantization for mobilenetv1

* fix ssd benchmark_score label shapes

* add resnet101_v1 and inceptionv3 support

* Refine some descriptions in the MKLDNN_README

* improve docs

* improve link in perf.md

* [MXNET-1033] Fix a bug in MultiboxTarget GPU implementation (#12840)

* remove num_labels check in multibox_target

* add unit test

* test both cpu and gpu

* add contrib operator to GPU unit test

* do not test all contrib operator in gpu

* [MXNET-1107] Fix CPUPinned unexpected behaviour (#12031)

* Fix CPUPinned unexpected behaviour

* fix lint

* add guards

* Actually, this may affect perf

* trigger ci

* fix lint

* fix documentation

* fix for dist_sync_device

* add guard

* fix bug with memory

* try fix for gluon mp interaction

* blah

* trigger jenkins

* Try fix for gluon multiprocessing bug

Thanks Nvidia!

* edit

* try nvidia fix

* address Haibin and Lin's comments

* get rid of blank line in Makefile

* NativeResource Management in Scala (#12647)

* add Generic MXNetHandle trait and MXNetHandlePhantomRef class that will be used by all MXNetObjects

* Generic Handle with AutoCloseable

* add NativeResource and NativeResourceManager with Periodic GC calling

* use NativeResource trait in NDArray, Symbol and Executor

* add run train mnist script

* create a Generic ResourceScope that can collect all NativeResources to dispose at the end

* modify NativeResource and ResourceScope, extend NativeResource in NDArray, Symbol and Executor

* remove GCExecutor

* deRegister PhantomReferences by when calling dispose()

* add Finalizer(temporary) to NativeResource

* refactor NativeResource.dispose() method

* update NativeResource/add Unit Test for NativeResource

* updates to NativeResource/NativeResourceRef and unit tests to NativeResource

* remove redundant code added because of the object equality that was needed

* add ResourceScope

* Fix NativeResource to not remove from Scope, add Unit Tests to ResourceScope

* cleanup log/print debug statements

* use TreeSet inplace of ArrayBuffer to speedup removal of resources from ResourceScope
Fix Executor dispose and make KVStore a NativeResource

* fix segfault that was happening because of NDArray creation on the fly in Optimizer

* Add comments for dispose(param:Boolean)

* add/update infer_range docs (#12879)

* Fix __all__ in optimizer/optimizer.py (#12886)

* Add index_copy() operator (#12810)

* add index_copy operator

* add index_copy op

* update index_copy op

* add unittest for index_copy()

* update index_copy

* update index_copy

* use mxnet_op::copy

* update index_copy

* update index_copy

* update index_copy

* update index_copy test

* update index_copy test

* sparse support for take(csr, axis=0)  (#12889)

* initial commit

* add test cases for mode

* fix bug

* add comment

* more comments

* Add more models to benchmark_score (#12780)

* add models to cnn benchmark

* improve benchmark score

* add benchmark_gluon

* improve lint

* improve lint

* add licsence for script

* improve script lint

* mv benchmark_gluon to new location

* support multi-gpus

* Add a new parameter 'global batchsize' for the batch size multiplication for multi-gpu case

* add batch size argument help

* improve help and change default batchsize

* simplify benchmark_gluon

* [MXNET-1025] Add Jetpack 3.3 support to Jetson (#12735)

* Fix Batch input issue with Scala Benchmark (#12848)

* add initial change

* add fix

* improved usage of Shape as well as warning message on performance

* change into parallel

* drop dropBack

* apply Andrew's comments

* remove add dim inside img 2 pixel

* addressed Naveen's comment

* update comments

* fix type inference in index_copy. (#12890)

* Extending the DCGAN example implemented by gluon API to provide a more straight-forward evaluation on the generated image (#12790)

* add inception_score to metric dcgan model

* Update README.md

* add two pics

* update readme

* update

* Update README.md

* add license

* refine1

* refine2

* refine3

* fix review comments

* Update README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* Update example/gluon/DCGAN/README.md

* modify sn_gan file links to DCGAN

* update pic links to web-data

* update the pic path of readme.md

* remove the pic/ folder and update related links to https://github.com/dmlc/web-data/mxnet/example/gluon/DCGAN/

* Update README.md

* [MXNET-674] Speed up GPU builds in CI (#12782)

* [MXNET-674] Speed up GPU builds in CI

* [MXNET-674] Refactor SMs into shell variable

* [MXNET-674] Build CMake GPU CI jobs without PTX

* [MXNET-793] ★ Virtualized testing in CI with QEMU ★ (#12094)

* virtual testing with qemu

* Add install procedure

* update installation

* Refine test run

* use direct ssh

* update readme

* Fix unnecessary cp

* Minor refinements

* Refine error conditions in startup

* requirements installed inside QEMU

* Update base image

* Fix license

* Dockerfile rename fallout

* license fixes

* refine documentation

* license fix

* update readme

* Update qemu base image and refine documentation

* Address CR comments wrt shebangs.

* Address CR comments wrt comments.

* adjust vda2 -> vda1

* Disable SMP, bug with newer kernel

* Remove commented out code

* Fix licenses

* CR comments addressed

* increase ram to 4096mb

* Revert dockerfile renaming

* Fix undo rename of dockerfiles

* Address CR comments

* CR

* [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. (#12773)

* Updating the readme file for cpp-package and adding readme file for example directory.

* Updating the readme file for cpp-package and adding readme file for example directory.

* Addressed the review comments.

* Addressed the review comments

* Fail the broken link job when broken links are found (#12905)

* Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (#12896)

* Fix typo in GRU cell and layers (gluon.rnn) docstring

* empty

* fix the paths issue for downloading script (#12913)

* Ignore generated scala files. (#12928)

* use ResourceScope in Model/Trainer/FeedForward.scala (#12882)

* use ResourceScope in Model/Trainer/FeedForward.scala

* add moveToOuterScope public method to move resources to an outer scope if it exists

* fix memory leak in FeedForward.scala by making it a native resource and disposing argparams, auxParams
in dispose() method

* Disabled flaky test: test_gluon_gpu.test_slice_batchnorm_reshape_batchnorm (#12768)

* Fix the operator API documentation (#12942)

* Fix the operator API documentation

* update message

* fix indptr[0] for take(csr) (#12927)

* getnnz operator  for CSR matrix (#12908)

* nnz

* update err msg

* skip nnz test on gpu
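
A short usage sketch, assuming the new operator is exposed under the contrib namespace:

```python
import mxnet as mx

csr = mx.nd.array([[0, 1], [2, 0]]).tostype('csr')
nnz = mx.nd.contrib.getnnz(csr)   # number of stored values in the CSR matrix
```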

* fix broken docs (#12871)

* Add bytearray support back to imdecode (#12855, #12868) (#12912)

1. Avoid raising an exception when the input is a bytearray.
2. Avoid OpenCV crash for empty input.
3. Added unittests.
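
A minimal illustration of the restored behaviour ('test.jpg' is a placeholder for any valid image file):

```python
import mxnet as mx

with open('test.jpg', 'rb') as f:
    buf = bytearray(f.read())
img = mx.image.imdecode(buf)   # bytearray input is accepted again
```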

* Update tree lstm example (#12960)

* update tree lstm example

* update README.md

* Update README.md

* Update bilstm integer array sorting example (#12929)

* Update the bilstm example to Gluon

* Update formating

* Update example/vae/VAE_example.ipynb

Co-Authored-By: ThomasDelteil <[email protected]>

* Fix the bug of assigning large integer to NDArray (#12921)

* remove num_labels check in multibox_target

* add unit test

* test both cpu and gpu

* add contrib operator to GPU unit test

* do not test all contrib operators on gpu

* Fix the large int assign problem

* Refactor mkldnn test files (#12410)

* move mkldnn helper funcs to diff file

* create test file to test helper functions

* update comments in header

* move helpers into include dir

* fix lint

* update comment

* add stdlib headers

* remove unused headers

* add endif

* add missing header

* add inlines

* fix lint

* move copyfrom test to mkldnn_test

* CudnnFind() usage improvements (#12804)

* Add mx.context.gpu_memory_info() to python api for flexible tests.
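
A sketch of the new helper; the (free, total) return convention in bytes is an assumption:

```python
import mxnet as mx

free, total = mx.context.gpu_memory_info(0)   # memory stats for GPU 0
print('GPU 0: %d of %d bytes free' % (free, total))
```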

* Add test_gluon_gpu.py:test_large_models to show cudnnFind headroom issue.

* Output model sizes tried by test_gluon_gpu.py:test_large_models.

* Fix perl interface to MXGetGPUMemoryInformation.

* Increase difficulty of test_gluon_gpu.py:test_large_models.

* Forgot a file in fix for perl.

* Modify test to pass on no-cudnn CI runner.

* Mutex algo reg updates, serialize cudnnFind calls.

* Fix for cudnnFind memory headroom issue.

* Fix cpplint.

* Respond to reviewers comments.

* Guard against improper MXNET_GPU_MEM_LARGE_ALLOC_ROUND_SIZE values.

* Fix potentially unassigned var.

* fix mac r install and windows python build from source docs (#12919)

* fix mac r install and windows python build from source docs

* reorder macos r install instructions

* enable batchnorm unit tests (#12986)

* enable bn unit tests

* travis timed out, trigger ci

* Update CONTRIBUTORS.md (#12996)

I have made two minor contributions with pull requests so far. I forgot to add my name here earlier.

* fix Sphinx errors for tutorials and install ToCs (#12945)

* missing line break fix for tutorials toc

* fix the install index toc errors

* [MXNET -1030] Cosine Embedding Loss (#12750)

* Cosine Embedding Loss function added

* Added unit tests for Cosine Embedding Loss Function

* Added Latex code for formula for cosine embedding loss

* Fixing document rendering

* Fixing documentation issue

* PR Comments addressed for using F (NDArray or Symbol) to calculate norm, renaming parameters

* Markdown file updated. Added entry for CosineEmbeddingLoss

* Added a line after .. math:: to fix documentation

* Documentation check - pylint fix

* Formula update

* Making the formula simpler for correct rendering incrementally - Update 1

* Making the formula simpler for correct rendering incrementally - Update 2

* Making the formula simpler for correct rendering incrementally - Update 3

* Making the formula simpler for correct rendering incrementally - Update 4

* Making the formula simpler for correct rendering incrementally - Update 5

* Trigger CI

* making the cosine similarity utility function internal

* Added a test case for label = -1, for dissimilar vectors

* Refactored names of parameters to the loss functions and updated the formula in docstring

* PR comments addressed changes in documentation

* Added random input vectors and labelled tests

* Renaming variables

* Pylint issues fixed

* Resolving conflicts

* Pylint issues fixed

* Style issues fixed, trailing whitespace removed

* Review comment addressed, sample_weight added in the parameter

* Trigger CI

* Reordered Parameter description

* comments addressed - spelling errors

* nit comments addressed

* Trigger CI

* Trigger CI

* Trigger CI

* Trigger CI
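
Putting the commits above together, a minimal usage sketch of the new loss; label semantics follow the dissimilar-vector test mentioned above (1 for similar pairs, -1 for dissimilar):

```python
import mxnet as mx
from mxnet.gluon.loss import CosineEmbeddingLoss

loss_fn = CosineEmbeddingLoss()
x1 = mx.nd.random.uniform(shape=(4, 8))
x2 = mx.nd.random.uniform(shape=(4, 8))
label = mx.nd.array([1, -1, 1, -1])   # 1 = similar, -1 = dissimilar
loss = loss_fn(x1, x2, label)
```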

* [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967)

* is_finite and is_inf implementation for front-end python api debug operator

* updated unit-tests

* updated test cases and incorporated is_nan function

* solved index out of bounds issue and added comments

* simplified abs function call and added isnan to contrib.py and all debug ops to doc

* changed dimensions, added regular number, assert_equal instead of almost, removed ctx and added data.abs
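
A quick sketch of the three debug operators, assuming they sit under contrib as the commits indicate:

```python
import mxnet as mx

x = mx.nd.array([1.0, float('inf'), float('nan')])
mx.nd.contrib.isfinite(x)   # -> [1., 0., 0.]
mx.nd.contrib.isinf(x)      # -> [0., 1., 0.]
mx.nd.contrib.isnan(x)      # -> [0., 0., 1.]
```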

* [MXNET-1111] Remove CPUPinned in ImageRecordIter (#12666)

* squash commit

* get rid of argument

* undo a lot of unnecessary changes

* undo more changes

* fix typo

* fix lint

* address comments and fix rebase mistake

* fix typo made during rebase

* revert cpu_pinned

* revert changes, because it works without needing to copy params to GPU. Thanks @yuxihu for testing and @apeforest for raising this issue!

* revert changes to comm and nccl

* Added/changed file_name, brief description comments in some files (#13033)

* sample_like operators (#13034)

* [MXNET-1179] Enforce deterministic algorithms in convolution layers (#12992)

* add env variable to choose deterministic cudnn alg

* set default value to false

* fix build failure in Windows GPU

* revert the previous change

* only check determinism in CUDNN 7.x release

* Add cudnn version check

* fix lint error

* Add a deprecate message (#13042)

* Fix the operator API documentation

* update message

* deprecate old command

* Disable flaky test test_operator.test_dropout (#13057)

* Disable flaky test test_prelu (#13060)

* la_op_inline.h to la_op-inl.h for consistency (#13045)

* la_op_inline.h to la_op-inl.h for consistency

* operator/tensor left-over doc changes

* Improve clojure tutorial (#12974)

* Switch tutorial to dependency/ies that exist on Maven

* Improve Clojure Module tutorial

* Add namespace docstring

* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/module.html

* Add newlines for readability and to keep line length <80

* Nix duplicated section in Clojure Symbol API docs

"Multiple Outputs" is a (deprecated) repeat of "Group Multiple
Symbols".

* Improve Clojure Symbol tutorial

* Add namespace docstring

* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/symbol.html

* Add newlines for readability and to keep line length <80

* Fix missing end-code-block in Clojure NDArray API docs

* Improve Clojure NDArray tutorial

* Add namespace docstring

* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/ndarray.html

* Add newlines for readability and to keep line length <80

* Improve Clojure KVStore tutorial

* Add namespace docstring

* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/kvstore.html

* Add newlines for readability and to keep line length <80

* [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. (#12773)

* Updating the readme file for cpp-package and adding readme file for example directory.

* Updating the readme file for cpp-package and adding readme file for example directory.

* Addressed the review comments.

* Addressed the review comments

* Fail the broken link job when broken links are found (#12905)

* Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (#12896)

* Fix typo in GRU cell and layers (gluon.rnn) docstring

* empty

* fix the paths issue for downloading script (#12913)

* removed unused header (#13066)

* Moves f16c autodetection to its own cmake module (#12331)

* Set correct update on kvstore flag in dist_device_sync mode (#12786)

* Set correct update on kvstore flag in dist_device_sync mode

* Add warning message for batch-size change in dist mode

* Empty commit

* Fix lint issues

* ONNX export: Cleanup (#12878)

* ONNX export: Cleanup input retrieval

- Create a common function to get inputs for conversion functions
- Do not register functions if onnx is not found
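
A hedged sketch of the guarded-registration pattern described in the second bullet; all names here are hypothetical, not the actual mx2onnx internals:

```python
try:
    import onnx  # noqa: F401
    HAS_ONNX = True
except ImportError:
    HAS_ONNX = False

_CONVERTERS = {}   # hypothetical registry of conversion functions

def mx_op(name):
    """Register a conversion function only when onnx is importable."""
    def register(func):
        if HAS_ONNX:
            _CONVERTERS[name] = func
        return func
    return register
```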

* ONNX export: Add helper for creating node

* Maven Surefire bug workaround (#13081)

* remove legacy installation of Roxygen2 5.0 and add R-specific clean target (#12993) (#12998)

* remove installation of legacy Roxygen2 vers. 5.0

* add R-specific clean target (#12993)

* fixup! remove installation of legacy Roxygen2 vers. 5.0

* fixup! remove installation of legacy Roxygen2 vers. 5.0

* Gluon LSTM Projection and Clipping Support (#13056)

* support projection in LSTM

* add tests

* update rnn to use cudnn ex

* extend cudnn test to handle different versions

* add lstm clip

* use CUDNN_VERSION

* merge USE_CUDNN_LSTM_CLIP and USE_CUDNN_LSTM_PROJ

* assign false value to clip nan explicitly to RNN and  GRU

* update test
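
A usage sketch combining both new features; the parameter names follow the commit titles, and the cuDNN requirement is inferred from the commits above:

```python
from mxnet.gluon import rnn

lstm = rnn.LSTM(hidden_size=512, num_layers=2,
                projection_size=256,          # LSTMP-style projection
                state_clip_min=-10.0,         # clip the cell state range
                state_clip_max=10.0,
                state_clip_nan=True)          # stop NaN propagating in state
```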

* fix readme (#13082)

* [MXNET-1180] Scala Image API (#12995)

* add image and image suite

* apply toImage function and tests

* bug fix

* apply the commented change

* add test to apply border

* fix scalastyle

* [MXNET-793] Virtual testing with Qemu, refinement and extract test results to root MXNet folder (#13065)

* Improve Qemu infrastructure
Add documentation about running it interactively

* Separate provision

* Improve provisioning

* Refine provisioning and interactive

* Can't provision when the volumes aren't mounted

* Fix running tests

* raise log output to INFO

* adjust logging

* flush stdout and stderr

* Refine by copying test results back to the host

* Fix license

* remove config file and different way to run QEMU

* remove config file and different way to run QEMU, remove ansible

* Updated / Deleted some examples (#12968)

* Updated / Deleted some examples

* remove onnx test

* remove onnx test

* Fix variable name in tutorial code snippet (#13052)

Fixes incorrect variable name in tutorial code as raised in issue https://github.com/apache/incubator-mxnet/issues/13051

* customized take forward for CPU (#12997)

* Update module example (#12961)

* Update Module example

* trigger CI

* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067)

np.array defaults to float64, which is not supported by ONNX,
so these are set to an appropriate type (see the sketch below).
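
For illustration, the kind of change this implies (a sketch, not the actual diff):

```python
import numpy as np

# np.array([2.0]) would default to float64, which the ONNX helpers reject.
scalar = np.array([2.0], dtype='float32')
```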

* Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954)

* fix typo in src/engine

* fix example for mx.nd.contrib.cond

* Improve the Clojure Package README to Make it Easier to Get Started (#12881)

* Improve the README and make it easier to get started

* Implement feedback from @ChaiBapchya and @daveliepmann

* combined deps

* Add wget

Co-Authored-By: gigasquid <[email protected]>

* WIP: update readme

* WIP: readme option 3

* Add section links to Clojure README

Link each install option with the corresponding README section
containing instructions for that option.

An existing link to Maven search is removed because it interferes with
the section links and it is replicated in the Option 1 instructions
below.

Per my PR suggestion:
https://github.com/apache/incubator-mxnet/pull/12881/files/22bbe55d8d62be9ff3aebf693f73fa6049afc01d#r226822148

* fix typo

Co-Authored-By: gigasquid <[email protected]>

* fix formatting

Co-Authored-By: gigasquid <[email protected]>

* fix formatting

Co-Authored-By: gigasquid <[email protected]>

* fix link

Co-Authored-By: gigasquid <[email protected]>

* Some more updates for the Clojure README

* [MXNET-918] Introduce Random module / Refact code generation (#13038)

* refactor code gen

* remove xxxAPIMacroBase (overkill)

* CI errors / scala-style

* PR review comments

* Fix a typo in operator guide (#13115)

* Fix the operator API documentation

* update message

* deprecate old command

* fix typo in op guide

* [Issue #11912] throw mxnet exceptions when decoding invalid images. (#12999)

* Raise an exception when passing an empty buffer to imdecode.

* src/io/image_io.cc: Check the length of the input buffer.
* tests/python/unittest/test_image.py: Update the (already existing) test to expect a mx.base.MXNetError.

* Raise an exception when passing an invalid data buffer to imdecode.

* src/io/image_io.cc: Raise an exception when the image could not be decoded instead of just logging.
* tests/python/unittest/test_image.py: Add a new test test_imdecode_invalid_image.

* Raise an exception when passing an invalid data buffer to imdecode.

* src/io/image_io.cc: Raise an exception when the image could not be decoded instead of just logging.
* tests/python/unittest/test_image.py: Add a new test test_imdecode_invalid_image.

* Roll back an "empty buffer" check in the image Python bindings that is now handled
more generally in the core code.

* python/mxnet/image/image.py: remove buffer length check.
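
The resulting behaviour, sketched from the test expectations above:

```python
import mxnet as mx

for buf in (b'', b'definitely not an image'):
    try:
        mx.image.imdecode(buf)
    except mx.base.MXNetError as e:
        print('rejected:', e)   # both inputs now raise instead of crashing
```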

* Update adversary attack generation example (#12918)

* Fix adversary example generation

* Update README.md

* Fix test_utils.list_gpus()

* fix unused variable

* Disable travis tests (#13137)

* Update Gluon example folder (#12951)

* Reorganized the Gluon folder in example

* trigger CI

* update reference

* fix out of place accumulation

* Document the newly added env variable (#13049)

* add env variable to choose deterministic cudnn alg

* set default value to false

* fix build failure in Windows GPU

* revert the previous change

* only check determinism in CUDNN 7.x release

* Add cudnn version check

* fix lint error

* document env variable MXNET_ENFORCE_DETERMINISM

* use cudnnGet instead of cudnnFind when determinism required

* Revert "use cudnnGet instead of cudnnFind when determinism required"

This reverts commit d1bdf0f38f50b8c499f22ae1d50770b819f27678.
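
A sketch of turning the documented switch on; setting it before any GPU operator runs is an assumption about when the flag is read:

```python
import os

os.environ['MXNET_ENFORCE_DETERMINISM'] = '1'   # force deterministic cuDNN algos
import mxnet as mx  # noqa: E402
```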

* Updated CONTRIBUTORS.md to include mxnet-label-bot  (#13048)

* Updated CONTRIBUTORS.md to include label-bot

* Created section for label bot and included wiki page

* Moved Label Bot section in CONTRIBUTORS.md file to a more convenient location

* Retriggering

* Fix docker cleanup race condition (#13092)

* Improved git reset for CI builds (#12784)

* Refactor L2_normalization (#13059)

* Refactor L2_normalization

* Fix windows build

* Fix windows build

* Move cpu optimization into l2_normalization.cc

* Retrigger CI

* Retrigger CI

* Fix variational autoencoder example (#12880)

* Add documentation on GPU performance on Quantization example (#13145)

* Add documentation on GPU performance

* Update README.md

* [MXNET-1194] Reenable nightly tutorials tests for Python2 and Python3 (#13099)

* Reenable nightly tests tutorials

* small fix to settings

* optimize a few more tutorials

* Update tests

* Update runtime_functions.sh

* Update fine_tuning_gluon.md

* Update JenkinsfileForBinaries

* Update JenkinsfileForBinaries

* remove coverage

* Update dec example (#12950)

* update dec example

* trigger CI

* update to remove dependency on sklearn data

* Update MKL-DNN dependency (#12953)

* update mkldnn and fix conv/deconv

* fix

* fix indent

* fix cmake

* fix cmake

* fix cpp test for mkldnn

* fix typo

* fix conflicts after merge

* debug: remove 5d test

* debug: remove 4d test

* add comments

* debug: remove 2d test

* update mklml in ci

* fix mklml

* Revert "fix mklml"

This reverts commit 328a22a373c49aacb914badd0db431bfbc8234f3.

* Revert "update mklml in ci"

This reverts commit 9ff3687892f85f43b8eac72ba935ceda928ae7e8.

* Revert "debug: remove 2d test"

This reverts commit 32551b3662fc30d5c9758a86c7664b4f2e367128.

* Revert "debug: remove 4d test"

This reverts commit 5412d643c2b00ce54c05e7387aca6779dee120d5.

* Revert "debug: remove 5d test"

This reverts commit 1fe9f8806d29c765e05f91c584799a947af2eb1d.

* debug illegal core dump

* debug illegal core dump

* Revert "debug illegal core dump"

This reverts commit 39321d578ae589465c0d4edcae7f92b88fdf3feb.

* Revert "debug illegal core dump"

This reverts commit 153b068b6d3a18a33f399076d3420ac42f2bc387.

* change cmake

* pin mkldnn version to 0.17rc

* change format number

* remove include directories in cmake

* fix cpp test

* address cpplint complaint

* remove comment code

* update mkldnn head

* License header (#13178)

* Minor fix to license_header documentation

* Handle UnicodeError when checking license

* Updated capsnet example (#12934)

* Updated capsnet

* trigger CI

* Update README.md

* Updates to several examples (#13068)

* Minor updates to several examples

* fix typo

* update following review

* Fix Sphinx python docstring formatting error. (#13177)

* [Doc] Fix repo paths in Ubuntu build doc (#13101)

* [Doc] Fix repo paths in Ubuntu build doc

* [Doc] Use relative path in Ubuntu build doc

* Update scala intellij tutorial (#12827)

* Update scala intellij tutorial

Update mxnet version
log4j fixes
Instructions from source

* Remove version numbers and various improvements

* Improve cpp-package example project build files. (#13093)

1. Change output to build folder.
2. Remove files that were not being deleted by make clean.

* Fix Sphinx document parsing error. (#13195)

Fixes #12935

* Fix #13090, Add image.imread to python API doc. (#13176)

* Fix Sphinx docstring formatting error. (#13004, #13005, #13006) (#13175)

* Fix #12944, Fix Sphinx python docstring formatting error. (#13174)

* Fix #13013, Fix Sphinx python docstring error. (#13173)

* update the README (#13186)

* Fixed Sparse astype doc string formatting error (#13171)

* Fix problem with some OSX not handling the cast on imDecode (#13207)

* Port of scala Image API to clojure (#13107)

* Port of scala Image API to clojure

* Minor style changes

* Add specs and other minor fixes

* Fix unit tests (:facepalm:)

* Fixed Documentation issues (#13215)

1. mxnet.metric.EvalMetric.get_config doc error
2. mxnet.module.SequentialModule.add doc error

* update the doc (#13205)

* Fix Sphinx doc errors (#13170)

* Fix Sphinx python docstring error: initializer.InitDesc (#12939) (#13148)

* Fix Sphinx python docstring error: text contrib module (#12949) (#13149)

* Sphinx failure fixes (#13213)

* [MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203)

* Testing just ndarray, since otherwise we require test refactoring which will be done later

* Add QEMU ARMv7 test stage to CI

* test_ndarray fails, so switch to test_engine until unit tests are fixed on ARM

* Refactor kvstore test (#13140)

* Refactor kvstore test

* Fix pylint

* Fix problem with some OSX not handling the cast on imDecode (#13207)

* Fix num_gpus

* remove unused variable rotateM_ (#10803)

* Revert "Sphinx failure fixes" (#13230)

* Revert "Refactor kvstore test (#13140)"

This reverts commit d8d2d6ef3d688a465e47f7170c2a11da804c2835.

* Revert "[MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203)"

This reverts commit fd3dedc621919b6fee7d8ca7fa2a85749e190907.

* Revert "Sphinx failure fixes (#13213)"

This reverts commit 2e4d6c8c1064b74d4e1c1b3441c2ecf12b81c6e2.

* [MXNET-953] Fix oob memory read (#12631)

* update log4j version of Scala package (#13131)

* Disable Flaky test test_operator.test_clip (#12902)

* Update multi-task learning example (#12964)

* Update multi task learning example

* Updating README.md

* Update MKLML dependency (#13181)

* update mklml

* refine DownloadMKLML.cmake

* merge DownloadMKLML.cmake from #11148

* fix mkldnn release version

* fix windows compilation

* Add --no-cache option to build.py when building containers (#13182)

Add functionality to build.py to disable caching

* Tool to ease compilation and reproduction of test results (#13202)

* Add tool to simplify reproducing tests

* add local build

* Add cmake_options.yaml

* minor

* Fix license

* Fix licenses

* Rename file, address CR comments about gpu build function

* Address Marco's comments

* support for upper triangular matrices in linalg (#12904)

* Fix Sphinx python docstrings (#13160)

* Doc fixes

* addressing feedback

* base_module fix

* fixing cross-reference issues

* Implemented a regression unit test for #11793 (#12975)

When using C++-based iterators, it's important that only a single batch is referenced at a time. Because C++
iterators are exposed to the Python code through a C API, there is no concept of reference counting. Hence,
typically C++ iterators will deallocate a batch when next() is called on them. So, we need to make sure the Python
code only references a single batch at a time, otherwise the Python code will attempt to access freed memory,
resulting in either (a) garbage accuracy or (b) a segmentation fault.

The test passes with the latest mxnet build. I verified it failed on previous releases, such as mxnet==1.2.0.
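
A sketch of the safe pattern this test enforces; the record file path and the consume() helper are placeholders:

```python
import mxnet as mx

def consume(batch):
    """Placeholder: do all work on the batch here, before the next one."""
    print(batch.data[0].shape)

it = mx.io.ImageRecordIter(path_imgrec='data.rec',   # placeholder path
                           data_shape=(3, 224, 224),
                           batch_size=32)
for batch in it:
    consume(batch)
    # Do NOT stash `batch` for later: the C++ iterator may free its
    # memory as soon as next() is called.
```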

* Add Java API docs generation (#13071)

* add Java API docs generation; split out from Scala API docs

* bumping file for ci

* make scala docs build compatible for 2.11.x and 2.12.x scala

fix typo

* fix exit bug

* Fix Sphinx error in ONNX file (#13251)

* [Example] Fixing Gradcam implementation (#13196)

* fixing gradcam

* changed loading parameters code

* fixing type conversions issue with previous versions of matplotlib

* Fix test failure due to hybridize call in test_gluon_rnn.test_layer_fill_shape (#13043)

* Restore hybridize call in test_gluon_rnn.test_layer_fill_shape

* reset bulk_size when cached op forward hit error to fix the test failure

* add try-catch block to reset bulk_size in more places to prevent potential bugs

* more cleanup upon exception in Imperative::Backward

* Addressed sphinx build issue (#13246)

* Add gauss err function operator (#13229)

* erf

register gpu

* add doc
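
A one-liner sketch of the new operator:

```python
import mxnet as mx

x = mx.nd.array([-1.0, 0.0, 1.0])
y = mx.nd.erf(x)   # elementwise Gauss error function, erf(0) == 0
```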

* Add Turing and Volta support to arch_name (#13168)

* Bugfix in ci/docker_cache.py (#13249)

* Fix scaladoc build errors (#13189)

* Fix scaladoc errors from missing classpath

Remove duplicate scalastyle plugin

* Fix scaladoc warnings

Also enable and fix all feature and deprecation warnings

* Add missing documentations for getnnz (#13128)

* Addressed ONNX module documentation warnings and added notes for short-form representation (#13259)

* Manually track num_max_thread (#12380)

* use cached version of get thread max

* reserve core affects omp singleton

* omp_thread_max_ updated in one line

* remove enabled block

* add brackets

* re-add excluded reserved

* add missing var

* refactor macro

* adding unit test for MKLDNN FullyConnected operator (#12985)

* adding unit test for MKLDNN FullyConnected operator

* removing mkldnn filter

* removing mkldnn filter

* Doc fixes (#13256)

* fix train mnist for inception-bn and resnet (#13239)

* Fix a bug in index_copy (#13218)

* fix.

* add test.

* retrigger

* Addressed doc issues (#13165)

* Addressed doc issues

* Update optimizer.py

* Force APT cache update before executing install (#13285)

* [Example] Gradcam consolidation in tutorial (#13255)

* fixing gradcam

* changed loading parameters code

* fixing type conversions issue with previous versions of matplotlib

* gradcam consolidation

* creating directory structures in utils

* changing location

* empty commit

* [MXNET-1203] Tutorial infogan  (#13144)

* Adding info_gan example

* adjust paths of filenames

* Update index.md

* Update index.md

* Update index.md

* Update info_gan.md

Added an image

* Update info_gan.md

Applied some fixes

* Update info_gan.md

Applied some fixes

* Update info_gan.md

Applied some fixes

* Update info_gan.md

* Updated index.md file

* Updated index.md file

* change links

* Fixed typo

* Delete Untitled.ipynb

* Adding Vishaals comments

* Adding Anirudh's comments

* Fixed some bugs

* Adding Anirudh's comments

* some minor fixes

* Remove obsolete memory cost example (#13235)

* stopgap fix to let website builds go through; scaladoc fix pending (#13298)

* Fix Sphinx errors in box_nms (#13261)

* Fix Sphinx errors (#13252)

* Sphinx errors in Gluon (#13275)

* Fix Sphinx python docstring formatting error. (#13194)

* Fix Sphinx python docstring formatting error (#13021).

Fixes #13021

* Update src/operator/nn/batch_norm.cc

Co-Authored-By: frankfliu <[email protected]>

* Visualization doc fix. Added notes for shortform (#13291)

* Addressed "dumplicate object reference" issues (#13214)

* Update basic_layers.py (#13299)

* add url and license to clojure package project (#13304)

* [Example] Add docstring for test optimizer and test score (#13286)

* update the doc for test_optimizer

* add docstring for test_score

* [Example] Update cpp example README (#13280)

* update the README to solve the problem of the library not being found

* fix the broken format

* remove redundancy and broken format

* add .

* [Example]update NER example readme on module prediction (#13184)

* update readme on module prediction

* fix typo

* update url

* improve grammar

* update link

* [MXNET-1198] MXNet Java API (#13162)

* [MXNET-984] Add Java NDArray and introduce Java Operator Builder class (#12816)

* clean history and add commit

* add lint header

* bypass the java unittest when making the package

* clean up redundant test

* clean spacing issue

* revert the change

* clean up

* cleanup the JMacros

* adding line escape

* revert some changes and fix scala style

* fixes regarding to Naveen's comment

* Java Inference api and SSD example (#12830)

* New Java inference API and SSD example

* Adding license to java files and fixing SSD example

* Fixing SSD example to point to ObjectDetector instead of ImageClassifier

* Make object detector scripts independent of OS and hardware (CPU/GPU)

* Added API Docs to Java Inference API. Small fixes for PR

* Cosmetic updates for API DOCS requested during PR

* Attempt to fix the CI Javafx compiler issue

* Migrate from Javafx to apache commons for Pair implementation

* Removing javafx from pom file

* Fixes to appease the ScalaStyle deity

* Minor fix in SSD script and Readme

* Added ObjectDetectorOutput which is a POJO for Object Detector to simplify the return type

* Removing Apache Commons Immutable Pair

* Adding license to new file

* Minor style fixes

* minor style fix

* Updating to Scala style and not explicitly declaring some unnecessary variables

* NativeResource Management in Scala (#12647) (#12883)

* add Generic MXNetHandle trait and MXNetHandlePhantomRef class that will be used by all MXNetObjects

* Generic Handle with AutoCloseable

* add NativeResource and NativeResourceManager with Periodic GC calling

* use NativeResource trait in NDArray, Symbol and Executor

* add run train mnist script

* create a Generic ResourceScope that can collect all NativeResources to dispose at the end

* modify NativeResource and ResourceScope, extend NativeResource in NDArray, Symbol and Executor

* remove GCExecutor

* deregister PhantomReferences when calling dispose()

* add Finalizer(temporary) to NativeResource

* refactor NativeResource.dispose() method

* update NativeResource/add Unit Test for NativeResource

* updates to NativeResource/NativeResourceRef and unit tests to NativeResource

* remove redundant code added because of the object equality that was needed

* add ResourceScope

* Fix NativeResource to not remove from Scope, add Unit Tests to ResourceScope

* cleanup log/print debug statements

* use TreeSet in place of ArrayBuffer to speed up removal of resources from ResourceScope
Fix Executor dispose and make KVStore a NativeResource

* fix segfault that was happening because of NDArray creation on the fly in Optimizer

* Add comments for dispose(param:Boolean)

* Added unit tests for Resource Scope in Java (#12955)

* Bumping down minimum java support from 8 to 7 (#12965)

* [MXNET-984] Java NDArray Documentation Generation (#12835)

* cherry pick javaDoc changes

* update NDArray changes

* refactoring change and merge all docGen in a single place

* clean the scalastyle

* take on Piyush nit

* drop the comments

* First pass at adding JavaDocs for new java api classes (#12963)

* First pass at adding JavaDocs for new java api classes

* Fix a scalastyle issue

* Updating JavaDoc based on feedback

* [MXNET-1160] add Java build/run example (#12969)

* add example

* clean up nit

* find the pain point

* add java tut into whitelist

* Trigger CI

* add java demo and split scala demo

* address the comments

* change the examples

* fix the wrong configuration

* Maven Surefire bug workaround (#13097)

* use ResourceScope in Model/Trainer/FeedForward.scala (#12882) (#13164)

* use ResourceScope in Model/Trainer/FeedForward.scala

* add moveToOuterScope public method to move resources to an outer scope if it exists

* fix memory leak in FeedForward.scala by making it a native resource and disposing argparams, auxParams
in dispose() method

* [MXNET-1187] Added Tutorial for Java under mxnet.io/docs/tutorials (#13183)

* Added tutorial for Java installation on IntelliJ for mxnet.io website

* Added correct image resources

* Removed spurious quotes

* Added java tutorial to whitelisting

* Added community download edition link to intelliJ section

* [MXNET-1202] Change Builder class into a better way (#13159)

* applying changes for Builder functions

* simplify the code structure

* update docgen

* follow Naveen's suggestion

* apply comments to Param

* clean up param build

* change on the comments

* add one description line

* [MXNET-1041] Add Java benchmark (#13095)

* add java benchmark

* applied changes based on Piyush comments

* applies Andrew's change

* fix clojure test issue

* update the statistic names

* follow Naveen's instruction

* [MXNET-918] [Introduce Random module / Refact code generation (#13038)][Cherry pick]  (#13242)

* [MXNET-918] Introduce Random module / Refact code generation (#13038)

* refactor code gen

* remove xxxAPIMacroBase (overkill)

* CI errors / scala-style

* PR review comments

* clean up the duplicated code

* add comments

* Fixed missing break statement (#13257)

* Java Benchmark failure (#13258)

* patch fix

* update ignore

* rename getContext to bindToDevice

* Update JavaBenchmark.java

* Addressing PR feedback for merging Java API into master (#13277)

* Addressing PR feedback for merging Java API into master

* Changed constructors to package private instead of private

* clean up the NDArray follow the comments (#13281)

* [MXNET-1181] Added command line alternative to IntelliJ in install instructions (#13267)

* Added command line alternative to IntelliJ

* Removed the duplicate file

* Fixed typos

* Fixed minor command issue

* add defaults and clean up the tests (#13295)

* [MXNET-1187] Added Java SSD Inference Tutorial for website (#13201)

* Added Java SSD Inference Tutorial for website

* Added whitelisting to SSD tutorial

* Address PR feedback

* Marking intelliJ as optional

* [MXNET-1182] Predictor example (#13237)

* add initial commit

* push back predictor

* name fix and bug fix

* update readme and script to run

* minor fix

* minor fix

* fix on doc

* update predictor

* Reducing the length of setup tutorial (#13306)

* enabling test_dropout after fixing flaky issue (#13276)

* enabling test_dropout after fixing flaky issue

* adding a check for positive seed

* fix the flag (#13293)

* Made fixes to sparse.py and sparse.md (#13305)

* Fix descriptions in scaladocs for macro ndarray/symbol APIs (#13210)

* [Example] Gradcam- Fixing a link (#13307)

* fixing gradcam

* changed loading parameters code

* fixing type conversions issue with previous versions of matplotlib

* gradcam consolidation

* creating directory structures in utils

* changing location

* empty commit

* fix file lock issue

* fix link

* removing other commits

* remove commit

* Updated the Instructions for use of the label bot (#13192)

* Updated Instructions for Label Bot

* Updated instructions for mxnet-label-bot

* Including myself as a contributor

* Clarified usage of label bot

* Fixed typos and instructions/examples have been made more clear

* Added link for available labels

* [MXNET-33] Enhance mkldnn pooling to support full convention (#11047)

* fix mkldnn pooling to support full convention

* backward with full convention

* fix

* add pooling test for full convention

* add function for computing padding size

* fix unit test

* only support max-pooling

* fix pooling bwd

* address review comment
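
For reference, 'full' convention pooling as exercised by the new MKL-DNN path; per the commits above, only max pooling is supported there:

```python
import mxnet as mx

data = mx.nd.random.uniform(shape=(1, 1, 7, 7))
# 'full' rounds the output spatial size up instead of down.
out = mx.nd.Pooling(data, kernel=(3, 3), stride=(2, 2),
                    pool_type='max', pooling_convention='full')
```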

* [MXNET-1213] add Cent OS build for Scala (#13279)

* add centos build for Scala

* migrate the build portion to docker

* update build script and chmod +x

* address Jenkins change

* allow CentOS to provide all dependencies

* fix file lock issue (#13296)

* modify code to work in gpu context. (#13302)