Returning list of op names in get_graph_ops and extended error atoms to all TF error codes #6

Merged (3 commits) on May 19, 2018

Conversation

@anshuman23 (Owner)

This PR covers the changes you last requested, @josevalim. Beyond that, I have removed some unused code from before and made sure enif_make_binary (rather than enif_make_string) is used wherever needed:

  • Extending the error coverage: the C function error_to_string now handles all TensorFlow error codes, and a faulty graph-read attempt looks like this:
iex(1)> graph = Tensorflex.read_graph "Makefile"
{:error, :invalid_argument}
  • Returning the list of all operations in the graph instead of printing them directly: this is now handled by get_graph_ops, which returns a list of strings (the op names in the graph). An example is below, and a sketch of the underlying NIF loop follows the output:
iex(1)> graph = Tensorflex.read_graph("classify_image_graph_def.pb")
2018-05-17 23:36:16.488469: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-05-17 23:36:16.774442: W tensorflow/core/framework/op_def_util.cc:334] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Successfully imported graph
#Reference<0.1610607974.1988231169.250293>

iex(2)> op_list = Tensorflex.get_graph_ops graph
["softmax/biases", "softmax/weights", "pool_3/_reshape/shape",
 "mixed_10/join/concat_dim", "mixed_10/tower_2/conv/batchnorm/moving_variance",
 "mixed_10/tower_2/conv/batchnorm/moving_mean",
 "mixed_10/tower_2/conv/batchnorm/gamma",
 "mixed_10/tower_2/conv/batchnorm/beta", "mixed_10/tower_2/conv/conv2d_params",
 "mixed_10/tower_1/mixed/conv_1/batchnorm/moving_variance",
 "mixed_10/tower_1/mixed/conv_1/batchnorm/moving_mean",
 "mixed_10/tower_1/mixed/conv_1/batchnorm/gamma",
 "mixed_10/tower_1/mixed/conv_1/batchnorm/beta",
 "mixed_10/tower_1/mixed/conv_1/conv2d_params",
 "mixed_10/tower_1/mixed/conv/batchnorm/moving_variance",
 "mixed_10/tower_1/mixed/conv/batchnorm/moving_mean",
 "mixed_10/tower_1/mixed/conv/batchnorm/gamma",
 "mixed_10/tower_1/mixed/conv/batchnorm/beta",
 "mixed_10/tower_1/mixed/conv/conv2d_params",
 "mixed_10/tower_1/conv_1/batchnorm/moving_variance",
 "mixed_10/tower_1/conv_1/batchnorm/moving_mean",
 "mixed_10/tower_1/conv_1/batchnorm/gamma",
 "mixed_10/tower_1/conv_1/batchnorm/beta",
 "mixed_10/tower_1/conv_1/conv2d_params",
 "mixed_10/tower_1/conv/batchnorm/moving_variance",
 "mixed_10/tower_1/conv/batchnorm/moving_mean",
 "mixed_10/tower_1/conv/batchnorm/gamma",
 "mixed_10/tower_1/conv/batchnorm/beta", "mixed_10/tower_1/conv/conv2d_params",
 "mixed_10/tower/mixed/conv_1/batchnorm/moving_variance",
 "mixed_10/tower/mixed/conv_1/batchnorm/moving_mean",
 "mixed_10/tower/mixed/conv_1/batchnorm/gamma",
 "mixed_10/tower/mixed/conv_1/batchnorm/beta",
 "mixed_10/tower/mixed/conv_1/conv2d_params",
 "mixed_10/tower/mixed/conv/batchnorm/moving_variance",
 "mixed_10/tower/mixed/conv/batchnorm/moving_mean",
 "mixed_10/tower/mixed/conv/batchnorm/gamma",
 "mixed_10/tower/mixed/conv/batchnorm/beta",
 "mixed_10/tower/mixed/conv/conv2d_params",
 "mixed_10/tower/conv/batchnorm/moving_variance",
 "mixed_10/tower/conv/batchnorm/moving_mean",
 "mixed_10/tower/conv/batchnorm/gamma", "mixed_10/tower/conv/batchnorm/beta",
 "mixed_10/tower/conv/conv2d_params", "mixed_10/conv/batchnorm/moving_variance",
 "mixed_10/conv/batchnorm/moving_mean", "mixed_10/conv/batchnorm/gamma",
 "mixed_10/conv/batchnorm/beta", "mixed_10/conv/conv2d_params",
 "mixed_9/join/concat_dim", ...]

break;
case TF_DATA_LOSS: strcpy(error,"data_loss");
break;
default: strcpy(error,"unlisted_code");
josevalim (Contributor)

Maybe we could have each of those call enif_make_atom so we don't have to allocate the string in the first place, just to convert it to an atom?

anshuman23 (Owner, author)

Fixed in the latest commit; I was needlessly allocating a string here. The function is now renamed to error_to_atom.
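
Roughly, the idea is the following (a simplified sketch rather than the exact code in the commit; only a few of the TF error codes are shown):

#include <erl_nif.h>
#include <tensorflow/c/c_api.h>

/* Map a TF_Status code straight to an Erlang atom; no intermediate
   C string allocation. */
static ERL_NIF_TERM error_to_atom(ErlNifEnv *env, TF_Status *status)
{
  switch (TF_GetCode(status)) {
    case TF_INVALID_ARGUMENT: return enif_make_atom(env, "invalid_argument");
    case TF_NOT_FOUND:        return enif_make_atom(env, "not_found");
    case TF_DATA_LOSS:        return enif_make_atom(env, "data_loss");
    /* ...remaining TF error codes... */
    default:                  return enif_make_atom(env, "unlisted_code");
  }
}

The calling NIF can then return enif_make_tuple2(env, enif_make_atom(env, "error"), error_to_atom(env, status)), which surfaces on the Elixir side as tuples like {:error, :invalid_argument}.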

char op_name[BASE_STRING_LENGTH];
for(int i=0; i<n_ops; i++) {
op_temp = TF_GraphNextOperation(*graph, &pos);
strcpy(op_name, (char*) TF_OperationName(op_temp));
josevalim (Contributor)

Do we need to copy the string here? Given that we are already copying it later to the binary with memcpy?

anshuman23 (Owner, author)

Again, this wasn't needed and I have fixed this in the latest commit. I apologize for the poorly written code here.
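
Concretely, each op name can go straight from the pointer returned by TF_OperationName into the binary term, along the lines of this small helper (a sketch; op_name_binary is an illustrative name, not from the commit, and it assumes the same headers as the sketch earlier in the thread):

/* Build an Elixir binary from an op name without an intermediate buffer:
   one memcpy, straight into the term allocated by enif_make_new_binary. */
static ERL_NIF_TERM op_name_binary(ErlNifEnv *env, TF_Operation *op)
{
  const char *name = TF_OperationName(op);   /* string is owned by the graph */
  size_t len = strlen(name);
  ERL_NIF_TERM bin;
  unsigned char *dest = enif_make_new_binary(env, len, &bin);
  memcpy(dest, name, len);
  return bin;
}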

@josevalim (Contributor)

Comments added. Other than that, it looks great to go! 👍

What next steps do you have in mind? :)

@anshuman23 (Owner, author)

Thanks for the feedback @josevalim!

I have been thinking about the next steps. The logical progression of adding functions should eventually get us to the point where we can run a loaded graph against our own inputs and generate prediction outputs. For this to work, a number of functions need to be added first, particularly ones that allow us to read Tensors supplied by the user. I am thinking over how to accomplish this and will then submit a PR for review. These TF_Tensor functions are very important, and they will have to cover a wide variety of inputs.

I might also keep importing graph-based functions from TensorFlow as and when required (like the operation-list function get_graph_ops). After that, I think some more graph-based functions will be needed that use both the tensor functionality and the graph functions. Finally, we will need to add support for creating and running Sessions.
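
To make the tensor part concrete, wrapping caller-supplied floats in a TF_Tensor via the TensorFlow C API would look roughly like the sketch below. floats_to_tensor is only an illustrative name, and the real NIF layer would first have to convert Erlang terms into such buffers:

#include <stdint.h>
#include <string.h>
#include <tensorflow/c/c_api.h>

/* Wrap `count` caller-supplied floats in a rank-1 TF_Tensor.
   TF_AllocateTensor owns the buffer; release it with TF_DeleteTensor. */
static TF_Tensor *floats_to_tensor(const float *values, int64_t count)
{
  const int64_t dims[1] = { count };
  size_t nbytes = (size_t) count * sizeof(float);
  TF_Tensor *tensor = TF_AllocateTensor(TF_FLOAT, dims, 1, nbytes);
  memcpy(TF_TensorData(tensor), values, nbytes);
  return tensor;
}

Running the loaded graph would then go through TF_NewSession and TF_SessionRun, feeding tensors like these in and reading output tensors back.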

@anshuman23 merged commit 2398996 into master on May 19, 2018
@josevalim (Contributor)

@anshuman23 agreed! ❤️

Let's go with this plan and continue exploring the API and the "user story" but note that at some point we will have to "sit down" and write documentation and tests.

Also, can you please send an e-mail to the beamcommunity mailing list with your progress so far and your plans for the next week? What is coming next can be a very quick digest, like this one that you just sent!

@anshuman23 (Owner, author)

Thanks @josevalim, and I completely agree on taking time out to write documentation and tests. I will incorporate that into my plans as well. :)

Yes, I'll send that email right away. Thanks again for all the help!
