Resource Management Tool Integrations

In our experience, nearly all AI development efforts, whether at large enterprises or new startups, begin by spending the first 3-6 months building their first ML pipelines from available tools. These custom integrations are time-consuming and expensive to produce, can be fragile, and frequently require drastic changes as project requirements evolve.

Frequently, these custom ML pipelines support only a small set of built-in algorithms or a single ML library, and are tied to each company’s infrastructure. Users cannot easily adopt new ML libraries or share their work with a wider community.

Neuro facilitates adoption of robust, adaptable Machine Learning Operations (MLOps) by simplifying resource orchestration, automation, and instrumentation at every step of ML system construction, including integration, testing, deployment, monitoring, and infrastructure management.

To maintain agility and avoid the pitfalls of technical debt, Neuro allows for the seamless connection of an ever-expanding universe of ML tools into your workflow.

We cover the entire ML lifecycle, from data collection to testing and interpretation. All resources, processes, and permissions are managed through our platform, which can be installed and run on virtually any compute infrastructure, whether on-premises or in the cloud of your choice.

Resource Management

The various components of a machine learning workflow can be split up into independent, reusable, modular parts that can be pipelined together to create, test and deploy models.

Our toolset integrator, Toolbox, provides up-to-date, out-of-the-box integrations with a wide range of open-source and commercial tools required for modern ML/AI development.

For Resource Management, the Platform provides native functionality for managing Docker environments and lets you use YAML to configure routine tasks such as starting a Jupyter Notebook on the platform, launching a training pipeline, or opening a file browser for remote storage.


YAML is a language commonly used for configuration files and in applications where an object's state is stored or transmitted. YAML targets many of the same communications applications as Extensible Markup Language (XML) but has a minimal syntax that intentionally differs from SGML. It supports both Python-style indentation to indicate nesting and a more compact flow format that uses [...] for lists and {...} for maps, which makes JSON files valid YAML 1.2.
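The two styles can be compared side by side. The key names below are purely illustrative and not part of any platform schema; both documents express the same data:

```yaml
# Block style: Python-like indentation indicates nesting
job:
  name: train
  tags:
    - nlp
    - gpu

# Flow style: JSON-like brackets and braces (any JSON document is valid YAML 1.2)
job: {name: train, tags: [nlp, gpu]}
```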

Users can create YAML files to configure each of these routine tasks. Docs:
Creating a Cluster with YAML Flow
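As a sketch of what such a configuration might look like, consider the following fragment. The schema, including the keys `kind`, `jobs`, `image`, `http_port`, and `volumes`, is an assumption for illustration, not the platform's documented format:

```yaml
# Hypothetical flow configuration; key names are illustrative only
kind: live
jobs:
  jupyter:
    image: jupyter/base-notebook   # container image to run
    http_port: 8888                # expose the notebook server over HTTP
    volumes:
      - storage:data:/data:rw      # mount remote storage read-write at /data
```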

The platform uses Docker containers to run jobs in isolated environments and lets users treat Docker images as templates containing an application and all the dependencies required to run it. With Neuro, you can run jobs from images hosted both on the public Docker registry and on the platform registry. Docs:
Environments (Docker Images)
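The distinction between the two image sources might be expressed in a job configuration like the one below. The YAML keys and the `image:` reference syntax for the platform registry are assumptions for illustration, not a documented schema:

```yaml
# Hypothetical job definitions; key names are illustrative only
jobs:
  from-public-registry:
    image: python:3.11-slim        # pulled from the public Docker registry
  from-platform-registry:
    image: image:my-project/train  # pulled from the platform's own registry
```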