Pipeline

This documentation covers the options available for deploying models from the EvoML platform and extracting them for use elsewhere.

Video Tutorial

Watch the EvoML video tutorial on exploring machine learning models and model deployment: Model Usage: Predictions, Deployment and Insights

Deployment Options

1. EvoML Pipeline

  • Description: Native pipeline deployment within EvoML platform
  • Use Cases:
    • Real-time predictions
    • Batch processing
    • Automated model updates
  • Features:
    • Built-in monitoring
    • Automatic scaling
    • Version control
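Once a pipeline is exported from the platform, serving it locally usually amounts to deserialising it and calling its prediction method. The sketch below assumes the export is a Python pickle; the `ExportedPipeline` class and the file name are stand-ins for illustration, not the actual EvoML export format.

```python
import pickle

# Illustrative stand-in for an exported EvoML pipeline object.
# A real export is the trained pipeline produced by the platform;
# the class and file name here are assumptions for this sketch.
class ExportedPipeline:
    def predict(self, rows):
        # Dummy rule; a real pipeline applies its trained
        # preprocessing steps and model instead.
        return [sum(row) for row in rows]

# Serialise the pipeline, as a platform export might.
with open("evoml_pipeline.pkl", "wb") as f:
    pickle.dump(ExportedPipeline(), f)

# Later, in the serving process: load and predict.
with open("evoml_pipeline.pkl", "rb") as f:
    pipeline = pickle.load(f)

predictions = pipeline.predict([[1, 2], [3, 4]])
```

The same load-once, predict-many pattern covers both real-time and batch use: keep the loaded pipeline in memory for low-latency requests, or map it over a dataset for batch scoring.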

2. REST API Integration

  • Description: Deploy models as REST API endpoints
  • Features:
    • HTTP/HTTPS endpoints
    • Authentication support
    • Swagger documentation
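A deployed endpoint is called like any authenticated JSON-over-HTTP API. The snippet below builds such a request with the standard library; the endpoint URL, token, and payload shape are placeholders, so take the real values and request schema from your deployment's Swagger documentation.

```python
import json
import urllib.request

# Hypothetical endpoint and token -- substitute the values shown in
# your deployment's Swagger documentation.
ENDPOINT = "https://example.com/api/v1/models/my-model/predict"
TOKEN = "YOUR_API_TOKEN"

payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]}).encode("utf-8")
request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# example does not depend on a live deployment.
```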

3. Docker Containers

  • Description: Containerized model deployment
  • Features:
    • Isolated environment
    • Portable deployment
    • Scalable architecture
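A minimal container for a serving process might look like the Dockerfile below. This is a sketch under assumptions: the base image, file names (`evoml_pipeline.pkl`, `serve.py`), and port are illustrative, not an EvoML-provided image.

```dockerfile
# Hypothetical Dockerfile for serving an exported model; file names,
# base image, and serving script are assumptions for this sketch.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY evoml_pipeline.pkl serve.py ./
EXPOSE 8080
CMD ["python", "serve.py"]
```

Built with `docker build -t model-server .` and run with `docker run -p 8080:8080 model-server`, the image gives each model an isolated, portable environment that can be replicated behind a load balancer for scale.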

4. EvoML Client Library

  • Description: Python client library for model deployment
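The general shape of a client-library workflow is sketched below. Treat this as illustrative pseudocode: the module, function, and parameter names are assumptions, and the actual `evoml_client` API should be taken from the library's own documentation.

```python
# Illustrative pseudocode only -- names are assumptions, not the
# documented evoml_client API.
import evoml_client as ec

ec.init(api_url="https://your-evoml-instance",
        username="...", password="...")     # authenticate against the platform
trial = ec.Trial.from_id("TRIAL_ID")        # fetch a finished trial
model = trial.get_best_model()              # pick the best pipeline
model.deploy()                              # or download it for local use
predictions = model.predict(new_data)
```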