pip install -U neuromation
neuro login
Paste these commands in a macOS, Windows, or Linux terminal prompt. The first installs the Neuro CLI; the second brings you to the login screen of the Neuromation Platform.
The Neuromation Platform CLI requires Python 3.7. We suggest installing the Anaconda Python 3.7 distribution. On some systems you may have to run
pip3 install -U neuromation instead.
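Since the CLI requires Python 3.7, it can help to verify the active interpreter before installing; a minimal check (the assertion message is illustrative):

```shell
# Verify that the active Python meets the CLI's 3.7+ requirement
python3 - <<'EOF'
import sys
assert sys.version_info >= (3, 7), "Python 3.7+ required, found %s" % sys.version
print("Python version OK")
EOF
```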
The Neuromation Platform puts data, models, training, and tuning at your fingertips. The Platform lets you focus on the model development tasks at hand by managing key aspects of the underlying infrastructure and system integration, including resource allocation, storage and image management, sharing, and secure web and terminal access to running jobs.
The key components of the Platform are images, storage volumes, and jobs. All of these can be published and shared among users.
Each time you start a job, the Platform handles resource allocation, storage, and image management for you. All your interactions with the Platform happen through
neuro, the command-line interface (CLI) tool, in conjunction with the https://neu.ro web interface.
From your terminal or command prompt, with CLI installed and logged in, run:
neuro submit -c 4 -g 1 -m 16G --http 80 --volume storage://~:/var/storage/home:rw --volume storage://neuromation/public:/var/storage/neuromation/public:ro image://neuromation/fastai
This initiates a job running the container image with fast.ai's course v3 and Jupyter Notebook, requesting 4 CPU cores, 1 GPU, and 16 GB of memory. It attaches your home volume (read-write) to the container's
/var/storage/home and the Neuromation volume containing shared datasets (read-only) to
/var/storage/neuromation/public.
The CLI will wait for the job to start, but at any point you can return to the terminal by pressing
^C. You can see the list of running jobs using:
neuro ps
To see a detailed status report for a particular job, use
neuro status JOB-ID.
Once running, the Jupyter Notebook will become available at the https:// link shown in the
neuro status command output.
You can upload your datasets to the Platform using the Neuro CLI, which supports basic file system operations for copying and moving files to and from the Platform storage.
From your terminal or command prompt, change to the directory containing your dataset, and run:
neuro cp dataset.tar.gz storage://~
Here,
storage:// indicates that the destination is the Platform storage, and
~ is interpreted as your home volume. In a similar fashion,
neuro cp storage://~/dataset.tar.gz .
downloads the dataset to your current local directory.
Note: for performance reasons, it is recommended to consolidate your data into a single file for upload and download.
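For example, you can pack a dataset directory into one compressed archive with tar before uploading (the directory and file names here are placeholders):

```shell
# Pack a dataset directory into a single compressed archive
mkdir -p dataset
echo "sample record" > dataset/sample.txt   # stand-in for real data files
tar czf dataset.tar.gz dataset/

# List the archive contents to verify the packing
tar tzf dataset.tar.gz
```

You can then upload the single dataset.tar.gz with neuro cp as described above.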
You can access your dataset from within a container by passing
--volume storage://~:/var/storage/home:rw to
neuro submit when starting a new job.
To download results from the Platform into a local results directory, run:
neuro cp storage://~/results.tar.gz results/
To work with your dataset from within a container, troubleshoot a model, or get shell access to a GPU instance, you can execute a command shell inside a running job in interactive mode.
To do so, copy the job ID of a running job (run
neuro ps to see the list), and run:
neuro exec -t JOB-ID bash
This starts bash inside the running job and connects your terminal to it.
It is assumed that the reader is familiar with building and pulling Docker images locally. For more details, please refer to Docker's Getting Started Guide.
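If you do not yet have a local image to push, a minimal hello-world image can be defined like this (the base image and command are hypothetical examples, not part of the Platform):

```shell
# Write a minimal Dockerfile for a hypothetical "helloworld" image
mkdir -p helloworld
cat > helloworld/Dockerfile <<'EOF'
FROM alpine:3
CMD ["echo", "hello from the platform"]
EOF
```

Build it locally with docker build -t helloworld helloworld/ before pushing.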
Assuming you have a Docker image named helloworld built on your local machine, you can push it to the Neuromation Platform by running:
neuro push helloworld
After that, you can start the job by running:
neuro submit image://~/helloworld