Software Engineering for Machine Learning

Software engineering for machine learning covers the techniques and guidelines for building ML applications that do not concern the core ML problem -- e.g. the development of new algorithms -- but rather the surrounding activities like data ingestion, coding, testing, versioning, deployment, quality control, and team collaboration. Good software engineering practices enhance the development, deployment, and maintenance of production-level applications with machine learning components.

⭐ Must-read

🎓 Scientific publication


Based on the literature in this list, we compiled a survey on the adoption of software engineering practices for applications with machine learning components.

Feel free to take the survey, to share it, and to read more!

Broad Overviews

These resources cover all aspects.

  • AI Engineering: 11 Foundational Practices ⭐
  • Best Practices for Machine Learning Applications
  • Engineering Best Practices for Machine Learning ⭐
  • Hidden Technical Debt in Machine Learning Systems 🎓⭐
  • Rules of Machine Learning: Best Practices for ML Engineering ⭐
  • Software Engineering for Machine Learning: A Case Study 🎓⭐

Data Management

How to manage the data sets you use in machine learning.
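
Part of managing data is validating it before it reaches training. Below is a minimal sketch of such a check using Great Expectations (listed under Tooling below), assuming its classic pre-1.0 pandas API; the file and column names are illustrative.

```python
# Minimal data-validation sketch, assuming Great Expectations'
# classic (pre-1.0) pandas API; file and column names are hypothetical.
import great_expectations as ge
import pandas as pd

df = ge.from_pandas(pd.read_csv("training_data.csv"))

# Declare expectations the data must satisfy before training starts.
labels_ok = df.expect_column_values_to_not_be_null("label")
ages_ok = df.expect_column_values_to_be_between("age", min_value=0, max_value=120)

if not (labels_ok.success and ages_ok.success):
    raise ValueError("Training data failed validation; aborting the pipeline.")
```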

Model Training

How to organize your model training experiments.
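
As a concrete illustration, here is a minimal sketch of experiment tracking with MLflow (listed under Tooling below); the experiment name, hyperparameters, and metric value are placeholders.

```python
# Minimal experiment-tracking sketch with MLflow; all names and
# values are illustrative placeholders, not a prescribed setup.
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment name

with mlflow.start_run():
    # Log hyperparameters so runs stay reproducible and comparable.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)

    # ... train and evaluate the model here ...

    # Log evaluation metrics to compare runs in the MLflow UI.
    mlflow.log_metric("val_accuracy", 0.93)
```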

Deployment and Operation

How to deploy and operate your models in a production environment.
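
A common pattern is to wrap a trained model in a small HTTP service. Below is a minimal sketch using FastAPI with a scikit-learn-style model; the artifact path and request schema are assumptions made for illustration.

```python
# Minimal model-serving sketch with FastAPI; the artifact path and
# feature schema are hypothetical assumptions for illustration.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained artifact

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # Delegate to the scikit-learn-style model's predict method.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```

Run it with an ASGI server such as Uvicorn (`uvicorn main:app`) and monitor it like any other service.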

Social Aspects

How to organize teams and projects to ensure effective collaboration and accountability.

Governance

How to govern the use of machine learning in your organization.

Tooling

Tooling can make your life easier.

We only share open-source tools and commercial platforms that offer a substantial free plan for research.

  • Airflow - Programmatically author, schedule and monitor workflows.
  • Archai - Neural architecture search.
  • Data Version Control (DVC) - Version control for data sets and ML experiments.
  • Facets Overview / Facets Dive - Robust visualizations to aid in understanding machine learning datasets.
  • FairLearn - A toolkit to assess and improve the fairness of machine learning models.
  • Git Large File Storage (LFS) - Replaces large files such as datasets with text pointers inside Git.
  • Great Expectations - Data validation and testing with integration in pipelines.
  • HParams - A thoughtful approach to configuration management for machine learning projects.
  • Kubeflow - A platform for data scientists who want to build and experiment with ML pipelines.
  • Label Studio - A multi-type data labeling and annotation tool with standardized output format.
  • LiFT - The LinkedIn Fairness Toolkit.
  • MLflow - Manage the ML lifecycle, including experimentation, deployment, and a central model registry.
  • Model Card Toolkit - Streamlines and automates the generation of model cards; for model documentation.
  • Neptune.ai - Experiment tracking tool bringing organization and collaboration to data science projects.
  • Neuraxle - Sklearn-like framework for hyperparameter tuning and AutoML in deep learning projects.
  • OpenML - An inclusive movement to build an open, organized, online ecosystem for machine learning.
  • Robustness Metrics - Lightweight modules to evaluate the robustness of classification models.
  • Spark Machine Learning - Spark’s ML library consisting of common learning algorithms and utilities.
  • TensorBoard - TensorFlow's Visualization Toolkit.
  • TensorFlow Extended (TFX) - An end-to-end platform for deploying production ML pipelines.
  • Weights & Biases - Experiment tracking, model optimization, and dataset versioning.

Contribute

Contributions are welcome! Please read the contribution guidelines first.