Neuromation Platform

Get Access

The Neuro Platform gives you the tools necessary for rapid prototyping of applied AI solutions in an enterprise environment.
Sign up for our early adopter program and receive GPU credits.

Installing Neuro and Logging In #

pip install -U neuromation
neuro login

Paste these commands into a macOS, Windows, or Linux terminal prompt. They install the Neuro CLI and bring you to the Neuromation Platform login screen.

The Neuromation Platform CLI requires Python 3.7. We suggest installing the Anaconda Python 3.7 distribution. On some Linux distributions you might have to run pip3 install -U neuromation instead.
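
For example, on a system where python3 and pip3 point to a Python 3.7 installation (a fresh Anaconda environment will do), the full install-and-login sequence might look like this:

python3 --version
pip3 install -U neuromation
neuro login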

Understanding the Basics #

The Neuromation Platform puts data, models, training, and tuning at your fingertips. The Platform lets you focus on the model development tasks at hand by managing key aspects of the underlying infrastructure and system integration, including resource allocation, storage and image management, sharing, and secure web and terminal access to running jobs.

The key components of the Platform are:

  • Image. A Docker container image that can be launched on the Platform.
  • Storage. One or more volumes that can be mounted to containers running on the Platform. These volumes may contain datasets or be used to store output.
  • Job. A running container with a certain amount of GPU/CPU/RAM resources allocated, and with certain storage volumes mounted to its filesystem.

Images, storage volumes, and jobs can be published and shared among users.
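
To make this concrete: in the walkthrough below, image://neuromation/fastai is an Image, your home volume storage://~ is Storage, and submitting the image with the volume attached creates a Job. A simplified form of the command used later looks like this:

neuro submit -c 4 -g 1 -m 16G --volume storage://~:/var/storage/home:rw image://neuromation/fastai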

Each time you start a job, the Platform will:

  • Wait for requested resources to become available.
  • Pull a container image and launch it.
  • Attach storage volumes to the container's local mount points.

All your interactions with the Platform happen through neuro, the command-line interface (CLI) tool, in conjunction with the https://neu.ro web interface.

Running Your First Job #

From your terminal or command prompt, with CLI installed and logged in, run:

neuro submit -c 4 -g 1 -m 16G --http 80 --volume storage://~:/var/storage/home:rw --volume storage://neuromation/public:/var/storage/neuromation/public:ro image://neuromation/fastai

This starts the container image with fast.ai's course v3 and Jupyter Notebook, attaching your home volume to the container's /var/storage/home and the Neuromation volume with shared datasets to /var/storage/neuromation/public.

The CLI waits for the job to start, but at any point you can break back to the terminal by pressing ^C. You can see the list of running jobs with neuro ps; for a detailed status report on a particular job, use neuro status JOB-ID.
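
For example, after detaching with ^C you can check on the job later (JOB-ID stands for the identifier printed by neuro submit):

neuro ps
neuro status JOB-ID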

Once the job is running, the Jupyter Notebook becomes available at a job-specific URL under neu.ro. The URL can be found in the neuro status command output.

Uploading Your Own Data #

You can upload your datasets to the Platform using the Neuro CLI, which supports basic file system operations for copying and moving files to and from the Platform storage.

From your terminal or command prompt, change to the directory containing your dataset, and run:

neuro cp dataset.tar.gz storage://~

The storage:// scheme indicates that the destination is Platform storage, and ~ is interpreted as your home volume. In a similar fashion, neuro cp storage://~/dataset.tar.gz . downloads the dataset to your current local directory.

Note: for performance reasons, it is recommended to consolidate data into a single file before uploading or downloading.
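
For example, a local dataset/ directory (the name is purely illustrative) can be packed into a single archive and uploaded in two steps:

tar -czf dataset.tar.gz dataset/
neuro cp dataset.tar.gz storage://~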

You can access your dataset from within a container by passing --volume storage://~:/var/storage/home:rw to neuro submit when starting a new job.
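
Once such a job is running, the archive uploaded above appears inside the container at /var/storage/home/dataset.tar.gz and can be unpacked from a shell in the job (see Connecting to a Running Job below; an illustrative sequence):

cd /var/storage/home
tar -xzf dataset.tar.gz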

To download results into a results directory on your local machine, run:

neuro cp storage://~/results.tar.gz results/
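
The downloaded archive can then be unpacked locally in the usual way, for example:

tar -xzf results/results.tar.gz -C results/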

Connecting to a Running Job #

To work with your dataset from within a container, troubleshoot a model, or get shell access to a GPU instance, you can execute a command shell within a running job in interactive mode.

To do so, copy the job id of a running job (you can run neuro ps to see the list), and run:

neuro exec -t JOB-ID bash

This will start bash within the running job and connect your terminal to it.
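
Inside that shell you can use ordinary Linux tooling; for example, to check the allocated GPU and inspect the mounted home volume (illustrative commands, assuming the image ships the standard CUDA utilities):

nvidia-smi
ls /var/storage/home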

Running Your Own Image #

It is assumed that the reader is familiar with building and pulling Docker images locally. For more details, please refer to Docker's Getting Started Guide.

Assuming you have a Docker image named helloworld built on your local machine, you can push it to the Neuromation Platform by running:

neuro push helloworld

After that, you can start the job by running:

neuro submit image://~/helloworld
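
Putting the whole flow together, a minimal sketch might look like this (the Dockerfile contents and the hello.py script are purely illustrative; any image that runs locally will do). A Dockerfile such as:

FROM python:3.7-slim
COPY hello.py /app/hello.py
CMD ["python", "/app/hello.py"]

followed by building, pushing, and submitting the image:

docker build -t helloworld .
neuro push helloworld
neuro submit image://~/helloworld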

What's Next #