
seldon-core-onnx

This repository shows how to serve an ONNX model with seldon-core. We are deploying a deep convolutional neural network for emotion recognition in faces in a local Kubernetes cluster. The ONNX model can be found in the onnx/models repository.

Blog Article

  • English:
  • German:

Application overview

Testing the application: we deploy a model in a local Kubernetes cluster to perform emotion recognition on a face.

Repository Overview

Installation

The following requirements are needed:

Run seldon-core with Docker

Clone the repository and change into the folder:

docker build -t emotion_service:0.1 . && docker run -p 5000:5000 -it emotion_service:0.1 

Then run the following Python script to send a test image to the service:

from PIL import Image
import numpy as np
import requests

# Load the test image, convert it to grayscale, and resize to the 64x64 input size
path_to_image = "images/smile.jpg"
image = Image.open(path_to_image).convert('L')
resized = image.resize((64, 64))

# Reshape to the NCHW input tensor (batch, channel, height, width)
values = np.array(resized).reshape(1, 1, 64, 64)

# Send the tensor to the seldon-core microservice
req = requests.post("http://localhost:5000/predict",
                    json={"data": {"ndarray": values.tolist()}})
print(req.json())
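The service returns raw class scores rather than a label. As a sketch of how a response could be decoded, the snippet below applies a softmax and maps the top score to an emotion name. The label list here is an assumption based on the FER+ emotion model in the onnx/models repository; verify the order against the model's own documentation before relying on it.

```python
import numpy as np

# Assumed label order for the FER+ emotion model (check onnx/models docs)
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def decode_prediction(scores):
    """Map raw model scores of shape (1, 8) to a label and softmax probability."""
    scores = np.asarray(scores, dtype=np.float64).reshape(-1)
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    probs = exp / exp.sum()
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx])

# Example with made-up scores, standing in for req.json()["data"]["ndarray"]:
label, prob = decode_prediction([[0.1, 4.2, 0.3, 0.1, 0.2, 0.0, 0.1, 0.0]])
```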

Tutorial with Kubernetes

All commands to set up the model on the Kubernetes cluster can be found in the Seldon_Kubernetes.ipynb notebook.

Inference with nGraph

Instead of serving the model directly, inference can also run through the nGraph compiler. Take a look at the nGraph compiler repository.

from ngraph_onnx.onnx_importer.importer import import_onnx_file
import ngraph as ng

# Import the ONNX file
model = import_onnx_file('model/model.onnx')

# Create an nGraph runtime environment
runtime = ng.runtime(backend_name='CPU')

# Compile the imported model into a callable function
emotion_cnn = runtime.computation(model)

Testing

Determine the Minikube IP and the Ambassador node port, then send the payload to the prediction endpoint (replace the IP and port below with your own values):

minikube ip
kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'
curl -vX POST http://192.168.99.100:30809/seldon/default/seldon-emotion/api/v0.1/predictions -d @payload.json --header "Content-Type: application/json"
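The payload.json file used by the curl command follows the same ndarray format as the earlier request. A minimal sketch for generating it, using random pixel values as a stand-in for a real preprocessed image (swap in the actual image tensor for a meaningful prediction):

```python
import json
import numpy as np

# Stand-in for a preprocessed 1x1x64x64 grayscale image tensor
values = np.random.randint(0, 256, size=(1, 1, 64, 64))

# Seldon "ndarray" payload, matching the format sent to the local service
payload = {"data": {"ndarray": values.tolist()}}

with open("payload.json", "w") as f:
    json.dump(payload, f)
```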
