
PipelineAI Quick Start (CPU + GPU)

Train and Deploy your ML and AI Models in the Environment of Your Choice.

Having Issues? Contact Us Anytime... We're Always Awake.

PipelineAI Community Events

PipelineAI Products

Consistent, Immutable, Reproducible Model Runtimes

Consistent Model Environments

Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala Runtime Libraries for training or prediction.

Use the same Docker image from your local laptop all the way to production to avoid dependency surprises.
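As a sketch of what such a per-model image might look like (the base image, file names, and serving script below are illustrative assumptions, not PipelineAI's actual build):

```dockerfile
# Illustrative only: pin the exact runtime so laptop and production match.
FROM python:3.6-slim

# Pin every dependency version for a reproducible model runtime.
COPY requirements.txt /model/requirements.txt
RUN pip install --no-cache-dir -r /model/requirements.txt

# Bundle the trained model artifact and its serving code into the image.
COPY model.pkl serve.py /model/
WORKDIR /model
CMD ["python", "serve.py"]
```

Because the dependencies and model artifact are baked into the image, the same tag can be pulled unchanged in development, staging, and production.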

Sample Machine Learning and AI Models

Click HERE to view model samples for the following:

  • Scikit-Learn
  • TensorFlow
  • Keras
  • Spark ML (formerly called Spark MLlib)
  • XGBoost
  • PyTorch
  • Caffe/2
  • Theano
  • MXNet
  • PMML/PFA
  • Custom Java/Python/C++ Ensembles
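As a rough sketch of the "custom ensemble" idea from the list above (the stand-in models here are plain functions, not PipelineAI APIs), a weighted average over several predictors might look like:

```python
def ensemble_predict(models, weights, x):
    """Weighted-average ensemble over any callable models.

    models  -- list of callables, each mapping an input to a float score
    weights -- list of floats, one per model (normalized below)
    """
    total = sum(weights)
    return sum(w / total * m(x) for m, w in zip(models, weights))

# Stand-in "models": in practice these would be Scikit-Learn,
# TensorFlow, XGBoost, etc. predictors behind a common interface.
model_a = lambda x: 0.2 * x
model_b = lambda x: 0.4 * x

score = ensemble_predict([model_a, model_b], [1.0, 3.0], 10.0)
# 0.25 * 2.0 + 0.75 * 4.0 = 3.5
```

The same pattern extends across languages: any runtime that exposes a predict-like callable can participate in the ensemble.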


Supported Model Runtimes (CPU and GPU)

  • Python (Scikit-Learn, TensorFlow, etc.)
  • Java
  • Scala
  • Spark ML
  • C++
  • Caffe2
  • Theano
  • TensorFlow Serving
  • Nvidia TensorRT (TensorFlow, Caffe2)
  • MXNet
  • CNTK
  • ONNX

Supported Streaming Engines

  • Kafka
  • Kinesis
  • Flink
  • Spark Streaming
  • Heron
  • Storm
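Whichever engine delivers the events (Kafka, Kinesis, Flink, etc.), the consuming side typically decodes each record into a prediction event. A minimal stdlib-only sketch of that decoding step (the field names are assumptions, not a PipelineAI schema):

```python
import json

def parse_prediction_event(raw_bytes):
    """Decode one streamed prediction record (assumed JSON schema)."""
    event = json.loads(raw_bytes.decode("utf-8"))
    return {
        "model": event["model"],              # which deployed model predicted
        "prediction": float(event["prediction"]),
        "latency_ms": float(event["latency_ms"]),
    }

# A record as it might arrive off a Kafka or Kinesis stream:
record = b'{"model": "census-v1", "prediction": 0.87, "latency_ms": 12.5}'
parsed = parse_prediction_event(record)
```

In a real consumer, this function would be called once per record inside the engine's poll or process loop.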

Drag N' Drop Model Deploy

PipelineAI Drag n' Drop Model Deploy UI

Generate Optimized Model Versions Upon Upload

Automatic Model Optimization and Native Code Generation

Distributed Model Training and Hyper-Parameter Tuning

PipelineAI Advanced Model Training UI

PipelineAI Advanced Model Training UI 2
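Conceptually, hyper-parameter tuning evaluates a training run at each point of a parameter grid (or a smarter search) and keeps the best result. A toy grid-search sketch, where the objective function stands in for a real training-plus-validation job:

```python
from itertools import product

def grid_search(param_grid, objective):
    """Return the parameter combination minimizing the objective."""
    names = sorted(param_grid)
    best_params, best_score = None, float("inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = objective(params)  # one (possibly distributed) training run
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Stand-in objective: pretend validation loss is minimized at
# learning_rate=0.1, batch_size=64.
objective = lambda p: (p["learning_rate"] - 0.1) ** 2 + (p["batch_size"] - 64) ** 2

best, loss = grid_search(
    {"learning_rate": [0.01, 0.1, 1.0], "batch_size": [32, 64, 128]},
    objective,
)
```

Distributing the work amounts to fanning the inner `objective(params)` calls out to a cluster instead of running them in a local loop.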

Continuously Deploy Models to Clusters of PipelineAI Servers

PipelineAI Weavescope Kubernetes Cluster

View Real-Time Prediction Stream

Live Stream Predictions

Compare Both Offline (Batch) and Real-Time Model Performance

PipelineAI Model Comparison

Compare Response Time, Throughput, and Cost-Per-Prediction

PipelineAI Compare Performance and Cost Per Prediction
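Cost-per-prediction can be derived from instance cost and observed throughput. A back-of-the-envelope helper (the dollar figures below are made up for illustration):

```python
def cost_per_prediction(instance_cost_per_hour, predictions_per_second):
    """Dollars per prediction, given hourly instance cost and throughput."""
    predictions_per_hour = predictions_per_second * 3600
    return instance_cost_per_hour / predictions_per_hour

# e.g. a $0.90/hour GPU instance serving 250 predictions/second:
cost = cost_per_prediction(0.90, 250)
# 0.90 / 900000 = $0.000001 per prediction
```

Comparing this figure across model variants and instance types is what makes a throughput-versus-cost trade-off concrete.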

Shift Live Traffic to Maximize Revenue and Minimize Cost

PipelineAI Traffic Shift Multi-armed Bandit Maximize Revenue Minimize Cost
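The traffic-shifting idea can be sketched as a multi-armed bandit: route most traffic to the model variant with the best observed reward, while still exploring the others. A minimal epsilon-greedy sketch (the variant names and reward numbers are illustrative, not PipelineAI internals):

```python
import random

def choose_variant(avg_reward, epsilon=0.1, rng=random):
    """Epsilon-greedy: explore with probability epsilon, else exploit."""
    if rng.random() < epsilon:
        return rng.choice(list(avg_reward))     # explore a random variant
    return max(avg_reward, key=avg_reward.get)  # exploit the best so far

# Observed average revenue per prediction for two deployed model versions.
avg_reward = {"model-v1": 0.021, "model-v2": 0.034}

rng = random.Random(42)  # seeded for a repeatable demonstration
picks = [choose_variant(avg_reward, epsilon=0.1, rng=rng) for _ in range(1000)]
# Most traffic flows to "model-v2", with occasional exploration of "model-v1".
```

In production the reward averages would be updated continuously from the live prediction stream, so traffic shifts automatically as model performance changes.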

Continuously Fix Borderline Predictions through Crowd Sourcing

Borderline Prediction Fixing and Crowd Sourcing

About

PipelineAI: Real-Time Enterprise AI Platform
