Train and Deploy your ML and AI Models in the Following Environments:
Community and support:
- Slack: https://joinslack.pipeline.ai
- Email: [email protected]
- Web: https://support.pipeline.ai
- YouTube: https://youtube.pipeline.ai
- Slideshare: https://slideshare.pipeline.ai
- Workshop: https://workshop.pipeline.ai
- Troubleshooting Guide
- PipelineAI Monthly Webinar (TensorFlow + Spark + GPUs + TPUs)
- Advanced Spark and TensorFlow Meetup (Global)
Each model is built into its own Docker image with the Python, C++, and Java/Scala runtime libraries it needs for training or prediction.
Using the same Docker image from the local laptop through to production avoids dependency surprises.
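As a rough illustration of that workflow, the sketch below uses the Docker SDK for Python to build a per-model image and run the same tag locally before pushing it. The model path, image tag, registry, and port are assumptions for illustration, not PipelineAI conventions.

```python
# Hedged sketch using the Docker SDK for Python: build one image per model and
# reuse the exact same tag locally and in production. The model path, image tag,
# and port below are illustrative assumptions, not PipelineAI conventions.
import docker

client = docker.from_env()

# Build the per-model image; its Dockerfile pins the runtime libraries the model needs.
image, build_logs = client.images.build(
    path="./models/my_tensorflow_model",          # assumed model directory
    tag="myregistry/my-tensorflow-model:1.0.0",   # assumed image tag
)

# Smoke-test the image on the laptop ...
client.containers.run(
    "myregistry/my-tensorflow-model:1.0.0",
    detach=True,
    ports={"8080/tcp": 8080},
)

# ... then push the identical tag so production pulls the very same image.
client.images.push("myregistry/my-tensorflow-model", tag="1.0.0")
```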
Model samples are available for each of the following; a rough sketch of such a sample follows the list:
- Scikit-Learn
- TensorFlow
- Keras
- Spark ML (the DataFrame-based API of Spark MLlib)
- XGBoost
- PyTorch
- Caffe/Caffe2
- Theano
- MXNet
- PMML/PFA
- Custom Java/Python/C++ Ensembles
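As a minimal, illustrative example of what one of these samples boils down to (not the actual sample layout), the sketch below trains a small Scikit-Learn model, serializes it, and exposes a simple predict entrypoint; the file name and entrypoint signature are assumptions.

```python
# Minimal, illustrative Scikit-Learn "model sample": train, serialize, and expose
# a predict entrypoint. File names and the entrypoint shape are assumptions, not
# the actual layout of the linked samples.
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Serialize the trained model so it can be baked into its own Docker image.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

def predict(features):
    """Load the serialized model and score a single feature vector."""
    with open("model.pkl", "rb") as f:
        loaded = pickle.load(f)
    return loaded.predict([features]).tolist()

print(predict([5.1, 3.5, 1.4, 0.2]))
```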
Supported model runtimes (see the export sketch after this list):
- Python (Scikit-Learn, TensorFlow, etc.)
- Java
- Scala
- Spark ML
- C++
- Caffe2
- Theano
- TensorFlow Serving
- Nvidia TensorRT (TensorFlow, Caffe2)
- MXNet
- CNTK
- ONNX
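To illustrate how one trained model can target several of the runtimes above, the hedged sketch below exports a small PyTorch model to ONNX, a format that runtimes such as ONNX Runtime or Nvidia TensorRT can load. The model, input shape, and file name are illustrative assumptions, not a prescribed workflow.

```python
# Hedged sketch: export a PyTorch model to ONNX so an ONNX-capable serving
# runtime can run it. Shapes and file names are illustrative assumptions.
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 3)   # 4 input features -> 3 class scores

    def forward(self, x):
        return self.linear(x)

model = TinyModel().eval()
dummy_input = torch.randn(1, 4)          # example input used to trace the graph

# Write an ONNX graph that ONNX-compatible runtimes can load.
torch.onnx.export(model, dummy_input, "tiny_model.onnx",
                  input_names=["features"], output_names=["scores"])
```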
Supported streaming engines (see the consumer sketch after this list):
- Kafka
- Kinesis
- Flink
- Spark Streaming
- Heron
- Storm
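As a minimal, assumption-laden sketch of how one of these streaming engines might feed a deployed model, the example below consumes prediction requests from a Kafka topic using kafka-python and scores them with a placeholder model. The topic name, broker address, and message format are all assumptions, not part of the PipelineAI API.

```python
# Hedged sketch: pull prediction requests from a Kafka topic and score them.
# Topic name, broker address, and message format are assumptions for illustration.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "prediction-requests",                       # assumed topic name
    bootstrap_servers="localhost:9092",          # assumed broker address
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

def predict(features):
    # Placeholder for the model loaded inside the serving container.
    return sum(features)

for message in consumer:
    features = message.value["features"]
    print("prediction:", predict(features))
```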