Metadata Management Tool Integrations

In our experience, nearly all AI development efforts, whether at large enterprises or new startups, begin by spending the first 3-6 months building their first ML pipelines from available tools. These custom integrations are time-consuming and expensive to produce, can be fragile, and frequently require drastic changes as project requirements evolve.

Frequently, these custom ML pipelines only support a small set of built-in algorithms or a single ML library and are tied to each company’s existing infrastructure. Users cannot easily leverage new ML libraries, or share their work with a wider community.

Neuro facilitates adoption of robust, adaptable Machine Learning Operations (MLOps) by simplifying resource orchestration, automation and instrumentation at all steps of ML system construction, including integration, testing, deployment, monitoring and infrastructure management.

To maintain agility and avoid the pitfalls of technical debt, Neuro allows for the seamless connection of an ever-expanding universe of ML tools into your workflow.

We cover the entire ML lifecycle from Data Collection to Testing and Interpretation. All resources, processes and permissions are managed through our platform and can be installed and run on virtually any compute infrastructure, be it on-premise or in the cloud of your choice.

Metadata Management

The various components of a machine learning workflow can be split up into independent, reusable, modular parts that can be pipelined together to create, test and deploy models.

Our toolset integrator, Toolbox, contains up-to-date, out-of-the-box integrations with a wide range of open-source and commercial tools required for modern ML/AI development.

For Metadata Management, the Platform provides out-of-the-box integrations with MLflow and Weights & Biases (W&B).


MLflow is an open-source platform, created by Databricks, for managing the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry.

MLflow Tracking is an API and UI for logging parameters, code versions, metrics, and output files when running machine learning code, for later visualization.

MLflow Projects provide a standard format for packaging reusable data science code. Each project is a directory with code or a Git repository, and uses a YAML descriptor file to specify its dependencies and how to run the code. Projects can specify their dependencies through a Conda environment.
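A minimal MLproject descriptor might look like the following (the project name, entry point parameters, and file names are illustrative):

```yaml
name: demo_project
conda_env: conda.yaml
entry_points:
  main:
    parameters:
      learning_rate: {type: float, default: 0.01}
    command: "python train.py --lr {learning_rate}"
```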

MLflow Models is a convention for packaging machine learning models in multiple formats called “flavors”. MLflow offers a variety of tools to help you deploy different flavors of models. Each MLflow Model is saved as a directory containing arbitrary files and an MLmodel descriptor file that lists the flavors it can be used in.
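As an illustration, an MLmodel descriptor for a scikit-learn model might declare two flavors, a generic `python_function` flavor and a library-specific `sklearn` flavor (the versions and paths here are hypothetical):

```yaml
artifact_path: model
flavors:
  python_function:
    loader_module: mlflow.sklearn
    python_version: 3.10.12
  sklearn:
    pickled_model: model.pkl
    sklearn_version: 1.3.0
```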

Weights & Biases:

W&B provides a leading suite of developer tools for machine learning, including metadata management, model management, training, and experiment tracking.

W&B helps ML development teams track their models, visualize model performance and easily automate model training and iterative improvement.

W&B has been called a system of record for your model results: Add a few lines to your script, and each time you train a new version of your model, you’ll see a new experiment stream live to your dashboard.

Within the Platform, W&B can be integrated where needed to handle training, experiment tracking, model management, and metadata management.

Docs:
Hyperparameter Tuning with Weights & Biases
Experiment Tracking with Weights & Biases