Tensorflex

Tensorflow bindings for the Elixir programming language 💪

Contents

  • How to run
  • Documentation
  • Pull Requests Made

How to run

  • You need to have the Tensorflow C API installed. Look here for details.
  • Clone this repository and cd into it
  • Run mix deps.get to install the dependencies
  • Run mix compile to compile the code
  • Open up iex using iex -S mix
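
Put together, the steps above look like this in a terminal (assuming the Tensorflow C API is already installed; the clone URL follows from this repository's name):

    git clone https://github.com/versilov/tensorflex.git
    cd tensorflex
    mix deps.get
    mix compile
    iex -S mix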

Documentation

  • Reading in a pretrained graph definition file

    This is the first step of the inference process in Tensorflow/Tensorflex. For this example we read in Google's Inception model, available for download here. The file name is classify_image_graph_def.pb. The example is as follows:

    iex(1)> {:ok, graph} = Tensorflex.read_graph("classify_image_graph_def.pb")
    2018-05-17 23:36:16.488469: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    2018-05-17 23:36:16.774442: W tensorflow/core/framework/op_def_util.cc:334] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
    Successfully imported graph
    {:ok, #Reference<0.1610607974.1988231169.250293>}
    
    iex(2)> op_list = Tensorflex.get_graph_ops graph
    ["softmax/biases", "softmax/weights", "pool_3/_reshape/shape",
    "mixed_10/join/concat_dim", "mixed_10/tower_2/conv/batchnorm/moving_variance",
    "mixed_10/tower_2/conv/batchnorm/moving_mean",
    "mixed_10/tower_2/conv/batchnorm/gamma",
    "mixed_10/tower_2/conv/batchnorm/beta", "mixed_10/tower_2/conv/conv2d_params",
    "mixed_10/tower_1/mixed/conv_1/batchnorm/moving_variance",
    "mixed_10/tower_1/mixed/conv_1/batchnorm/moving_mean",
    "mixed_10/tower_1/mixed/conv_1/batchnorm/gamma",
    "mixed_10/tower_1/mixed/conv_1/batchnorm/beta",
    "mixed_10/tower_1/mixed/conv_1/conv2d_params",
    "mixed_10/tower_1/mixed/conv/batchnorm/moving_variance",
    "mixed_10/tower_1/mixed/conv/batchnorm/moving_mean",
    "mixed_10/tower_1/mixed/conv/batchnorm/gamma",
    "mixed_10/tower_1/mixed/conv/batchnorm/beta",
    "mixed_10/tower_1/mixed/conv/conv2d_params",
    "mixed_10/tower_1/conv_1/batchnorm/moving_variance",
    "mixed_10/tower_1/conv_1/batchnorm/moving_mean",
    "mixed_10/tower_1/conv_1/batchnorm/gamma",
    "mixed_10/tower_1/conv_1/batchnorm/beta",
    "mixed_10/tower_1/conv_1/conv2d_params",
    "mixed_10/tower_1/conv/batchnorm/moving_variance",
    "mixed_10/tower_1/conv/batchnorm/moving_mean",
    "mixed_10/tower_1/conv/batchnorm/gamma",
    "mixed_10/tower_1/conv/batchnorm/beta", "mixed_10/tower_1/conv/conv2d_params",
    "mixed_10/tower/mixed/conv_1/batchnorm/moving_variance",
    "mixed_10/tower/mixed/conv_1/batchnorm/moving_mean",
    "mixed_10/tower/mixed/conv_1/batchnorm/gamma",
    "mixed_10/tower/mixed/conv_1/batchnorm/beta",
    "mixed_10/tower/mixed/conv_1/conv2d_params",
    "mixed_10/tower/mixed/conv/batchnorm/moving_variance",
    "mixed_10/tower/mixed/conv/batchnorm/moving_mean",
    "mixed_10/tower/mixed/conv/batchnorm/gamma",
    "mixed_10/tower/mixed/conv/batchnorm/beta",
    "mixed_10/tower/mixed/conv/conv2d_params",
    "mixed_10/tower/conv/batchnorm/moving_variance",
    "mixed_10/tower/conv/batchnorm/moving_mean",
    "mixed_10/tower/conv/batchnorm/gamma", "mixed_10/tower/conv/batchnorm/beta",
    "mixed_10/tower/conv/conv2d_params", "mixed_10/conv/batchnorm/moving_variance",
    "mixed_10/conv/batchnorm/moving_mean", "mixed_10/conv/batchnorm/gamma",
    "mixed_10/conv/batchnorm/beta", "mixed_10/conv/conv2d_params",
    "mixed_9/join/concat_dim", ...]
  • Matrix capabilities

    • Matrices are created using create_matrix, which takes the number of rows, the number of columns, and a list of lists of matrix data as inputs
    • matrix_pos gets the value stored in the matrix at a particular row and column
    • size_of_matrix returns the size of the matrix as a tuple {number of rows, number of columns}
    • matrix_to_lists returns the data of the matrix as a list of lists
    iex(1)> m = Tensorflex.create_matrix(2,3,[[2.2,1.3,44.5],[5.5,6.1,3.333]])
    #Reference<0.1012898165.3475636225.187946>
    
    iex(2)> Tensorflex.matrix_pos(m,2,1)
    5.5
    
    iex(3)> Tensorflex.size_of_matrix m
    {2, 3}
    
    iex(4)> Tensorflex.matrix_to_lists m
    [[2.2, 1.3, 44.5], [5.5, 6.1, 3.333]]
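
    Because matrix_to_lists returns ordinary Elixir lists, matrix data composes directly with List and Enum. A small illustrative sketch continuing the session above:

    # Flatten the matrix data and sum every entry (result elided; plain floats).
    m |> Tensorflex.matrix_to_lists() |> List.flatten() |> Enum.sum()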
  • Tensor usage

    • Numeral Tensors:
      • float64_tensor handles numeral tensors. It has two variants: one that takes a single argument and one that takes two arguments
      • The single-argument variant makes a tensor out of a single number
      • The two-argument variant is used for multidimensional tensors
      • Here the first argument consists of the values and the second of the dimensions of the tensor. Both are matrices (a 2-D sketch follows after the string tensor example below)
    iex(1)> dims = Tensorflex.create_matrix(1,3,[[1,1,3]])
    #Reference<0.3771206257.3662544900.104749>
    
    iex(2)> vals = Tensorflex.create_matrix(1,3,[[245,202,9]])
    #Reference<0.3771206257.3662544900.104769>
    
    iex(3)> Tensorflex.float64_tensor 123.12
    {:ok, #Reference<0.3771206257.3662544897.110716>}
    
    iex(4)> {:ok, ftensor} = Tensorflex.float64_tensor(vals,dims)
    {:ok, #Reference<0.3771206257.3662544897.111510>}
    
    iex(5)> Tensorflex.tensor_datatype ftensor
    {:ok, :tf_double}
    • String Tensors:
    iex(1)> {:ok, str_tensor} = Tensorflex.string_tensor "1234"
    {:ok, #Reference<0.1771877210.87949316.135871>}
    
    iex(2)> Tensorflex.tensor_datatype str_tensor
    {:ok, :tf_string}
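
    The same two-argument pattern extends to higher-dimensional numeral tensors. A minimal sketch, assuming (as in the examples above) that the dims matrix is a 1xN row listing the tensor's shape and the values matrix holds the data:

    # A 2x2 float64 tensor: values in a 2x2 matrix, shape in a 1x2 dims matrix.
    vals = Tensorflex.create_matrix(2, 2, [[1.0, 2.0], [3.0, 4.0]])
    dims = Tensorflex.create_matrix(1, 2, [[2, 2]])
    {:ok, tensor} = Tensorflex.float64_tensor(vals, dims)
    {:ok, :tf_double} = Tensorflex.tensor_datatype(tensor)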
    
  • Running sessions

    • Sessions run a set of inputs through the operations of a predefined graph and obtain prediction outputs

    • To exemplify the working of the entire prediction pipeline, I am going to use the simple toy graph created by the script examples/toy-example/graphdef_create.py. Upon running the script you should have a graph definition file called graphdef.pb. You can also download this file from my Dropbox here.

    • I would then recommend going through the graphdef_create.py file to get an idea of what the operations are. The code performs a simple matrix multiplication of predefined weights with the input, followed by the addition of biases. Ideally the weights would be learned through training, but since this is a toy example they are predefined (look here in the code).

    • The more important things to notice in graphdef_create.py are the operations where the input is fed and where the output is obtained. Their names are required to run sessions: during inference the input is fed to the named input operation and the output is read from the named output operation. In our toy example, the input operation is named "input" (look here in the code) and the output operation is named "output" (look here in the code).

    • Now in Tensorflex, the inference goes something like this (a convenience wrapper for these steps is sketched after the session output below):

      • First load the graph and check that its operations are as expected. You will see "input" and "output" among them, as mentioned before.
      • Then create the input tensor. First create matrices to hold the tensor data as well as its dimensions. As an example, let's set our input to be a 3x3 tensor with the first row all 1.0, the second all 2.0, and the third all 3.0. The tensor is a float32 tensor created using the float32_tensor function.
      • Now create the output tensor. Since we know the graph's matrix operations, the output will be a 3x2 tensor, so we set the dimensions accordingly. Because we do not yet have the output values (they are only available after the session runs), we use the float32_tensor_alloc function instead of float32_tensor.
      • Finally, we run a session to send the input tensor through the graph and obtain the output. The answer is exactly what we get by multiplying the inputs with the weights and adding the biases. The run_session function takes five arguments: the graph definition, the input tensor, the output tensor, the name of the input operation, and the name of the output operation. This is why knowing the names of your input and output operations is important.
    iex(1)> {:ok, graph} = Tensorflex.read_graph "graphdef.pb"
    2018-06-04 00:32:53.993446: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    {:ok, #Reference<0.1321508712.421658628.225797>}
    
    iex(2)> Tensorflex.get_graph_ops graph
    ["biases", "biases/read", "weights", "weights/read", "input", "MatMul", "add",
    "output"]
    
    iex(3)> in_vals = Tensorflex.create_matrix(3,3,[[1.0,1.0,1.0],[2.0,2.0,2.0],[3.0,3.0,3.0]])
    #Reference<0.1321508712.421658628.225826>
    
    iex(4)> in_dims = Tensorflex.create_matrix(1,2,[[3,3]]) 
    #Reference<0.1321508712.421658628.225834>
    
    iex(5)> {:ok, input_tensor} = Tensorflex.float32_tensor(in_vals, in_dims)
    {:ok, #Reference<0.1321508712.421658628.225842>}
    
    iex(6)> out_dims = Tensorflex.create_matrix(1,2,[[3,2]])
    #Reference<0.1321508712.421658628.225850>
    
    iex(7)> {:ok, output_tensor} = Tensorflex.float32_tensor_alloc(out_dims)       
    {:ok, #Reference<0.1321508712.421658628.225858>}
    
    iex(8)> Tensorflex.run_session(graph, input_tensor, output_tensor, "input", "output") 
    [
      [56.349998474121094, 39.26000213623047],
      [109.69999694824219, 75.52000427246094],
      [163.04998779296875, 111.77999877929688]
    ]
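
    For repeated use, the whole pipeline above can be wrapped in a small helper. This is only a sketch built from the calls shown in this README; ToyInference and infer/4 are hypothetical names, not part of Tensorflex:

    # Hypothetical convenience wrapper around the Tensorflex calls shown above.
    defmodule ToyInference do
      # graph_path:            path to a .pb graph definition file
      # in_lists:              input data as a list of rows (lists of floats)
      # {out_rows, out_cols}:  known shape of the output tensor
      # {in_op, out_op}:       names of the graph's input and output operations
      def infer(graph_path, in_lists, {out_rows, out_cols}, {in_op, out_op}) do
        {:ok, graph} = Tensorflex.read_graph(graph_path)
        rows = length(in_lists)
        cols = length(hd(in_lists))
        in_vals = Tensorflex.create_matrix(rows, cols, in_lists)
        in_dims = Tensorflex.create_matrix(1, 2, [[rows, cols]])
        {:ok, input_tensor} = Tensorflex.float32_tensor(in_vals, in_dims)
        out_dims = Tensorflex.create_matrix(1, 2, [[out_rows, out_cols]])
        {:ok, output_tensor} = Tensorflex.float32_tensor_alloc(out_dims)
        Tensorflex.run_session(graph, input_tensor, output_tensor, in_op, out_op)
      end
    end

    Calling ToyInference.infer("graphdef.pb", [[1.0,1.0,1.0],[2.0,2.0,2.0],[3.0,3.0,3.0]], {3, 2}, {"input", "output"}) should reproduce the session output above.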

Pull Requests Made
