All-in-one web-based development environment for machine learning

The ML workspace is an all-in-one web-based IDE specialized for machine learning and data science. It is simple to deploy and gets you started within minutes building ML solutions productively on your own machines. This workspace is the ultimate tool for developers, preloaded with a variety of popular data science libraries (e.g., Tensorflow, PyTorch, Keras, Sklearn) and dev tools (e.g., Jupyter, VS Code, Tensorboard) that are perfectly configured, optimized, and integrated.


  • 💫 Jupyter, JupyterLab, and Visual Studio Code web-based IDEs.
  • 🗃 Pre-installed with many popular data science libraries & tools.
  • 🖥 Full Linux desktop GUI accessible via web browser.
  • 🔀 Seamless Git integration optimized for notebooks.
  • 📈 Integrated hardware & training monitoring via Tensorboard & Netdata.
  • 🚪 Access from anywhere via Web, SSH, or VNC under a single port.
  • 🎛 Usable as remote kernel (Jupyter) or remote machine (VS Code) via SSH.
  • 🐳 Easy to deploy on Mac, Linux, and Windows via Docker.


The workspace is equipped with a selection of best-in-class open-source development tools to help with the machine learning workflow. Many of these tools can be started from the Open Tool menu in Jupyter (the main application of the workspace):

Install Anything

Within your workspace you have full root & sudo privileges to install any library or tool you need via terminal (e.g., pip, apt-get, conda, or npm). You can find more ways to extend the workspace within the Extensibility section.

In [ ]:
!pip install matplotlib-venn
Install Dependencies in Notebooks: It’s a good idea to include cells which install and load any custom libraries or files (which are not pre-installed in the workspace) that your notebook needs.
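One way to make such setup cells idempotent is to install a package only when it is not already importable. The helper below is a minimal sketch of this pattern; the `ensure` function is our own illustration, not a workspace API:

```python
import importlib
import importlib.util
import subprocess
import sys

def ensure(package, import_name=None):
    """Install a pip package at notebook runtime if it is not importable yet."""
    name = import_name or package
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
    return importlib.import_module(name)

# `json` is part of the standard library, so this call triggers no install:
json_mod = ensure("json")
```

Passing `import_name` covers packages whose pip name differs from the import name (e.g., `matplotlib-venn` vs. `matplotlib_venn`).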


Jupyter Notebook is a web-based interactive environment for writing and running code. The main building blocks of Jupyter are the file-browser, the notebook editor, and kernels. The file-browser provides an interactive file manager for all notebooks, files, and folders in the /workspace directory.

A new notebook can be created by clicking on the New drop-down button at the top of the list and selecting the desired language kernel.

You can spawn interactive terminal instances as well by selecting New -> Terminal in the file-browser.

The notebook editor enables users to author documents that include live code, markdown text, shell commands, LaTeX equations, interactive widgets, plots, and images. These notebook documents provide a complete and self-contained record of a computation that can be converted to various formats and shared with others.

This workspace has a variety of third-party Jupyter extensions activated. You can configure these extensions via the nbextensions configurator (the Nbextensions tab in the file browser).

The Notebook allows code to be run in a range of different programming languages. For each notebook document that a user opens, the web application starts a kernel that runs the code for that notebook and returns output. This workspace has Python 3 and Python 2 kernels pre-installed. Additional kernels can be installed to get access to other languages (e.g., R, Scala, Go) or additional computing resources (e.g., GPUs, CPUs, memory).

Python 2 is deprecated and not fully supported. Please only use Python 2 if necessary!

Desktop GUI

This workspace provides HTTP-based VNC access via noVNC, so you can access and work within the workspace through a fully-featured desktop GUI. To access this desktop GUI, go to Open Tool, select VNC, and click the Connect button. If you are asked for a password, use vncpassword.

Once you are connected, you will see a desktop GUI that allows you to install and use full-fledged web browsers or any other tool that is available for Ubuntu. Within the Tools folder on the desktop, you will find a collection of install scripts that make it straightforward to install some of the most commonly used development tools, such as Atom, PyCharm, R-Runtime, R-Studio, or Postman (just double-click on the script).

Clipboard: If you want to share the clipboard between your machine and the workspace, you can use the copy-paste functionality as described below:

Long-running tasks: Use the desktop GUI for long-running Jupyter executions. By running notebooks from the browser of your workspace desktop GUI, all output will be synchronized to the notebook even if you have disconnected your browser from the notebook.

Visual Studio Code

Visual Studio Code (Open Tool -> VS Code) is an open-source, lightweight but powerful code editor with built-in support for a variety of languages and a rich ecosystem of extensions. It combines the simplicity of a source code editor with powerful developer tooling, like IntelliSense code completion and debugging. The workspace integrates VS Code as a web-based application accessible through the browser, based on the awesome code-server project. It allows you to customize every feature to your liking and install any number of third-party extensions.

The workspace also provides a VS Code integration into Jupyter allowing you to open a VS Code instance for any selected folder, as shown below:


JupyterLab (Open Tool -> JupyterLab) is the next-generation user interface for Project Jupyter. It offers all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface. This JupyterLab instance comes pre-installed with a few helpful extensions, such as jupyterlab-toc, jupyterlab-git, and jupyterlab-tensorboard.

Git Integration

Version control is a crucial aspect of productive collaboration. To make this process as smooth as possible, we have integrated a custom-made Jupyter extension specialized in pushing single notebooks, a full-fledged web-based Git client (ungit), a tool to open and edit plain text documents (e.g., .py, .md) as notebooks (jupytext), as well as a notebook merging tool (nbdime). Additionally, JupyterLab and VS Code also provide GUI-based Git clients.

Clone Repository

For cloning repositories via HTTPS, we recommend navigating to the desired root folder and clicking on the git button as shown below:

This might ask for some required settings and subsequently opens ungit, a web-based Git client with a clean and intuitive UI that makes it convenient to sync your code artifacts. Within ungit, you can clone any repository. If authentication is required, you will be asked for your credentials.

Push, Pull, Merge, and Other Git Actions

To commit and push a single notebook to a remote Git repository, we recommend using the Git plugin integrated into Jupyter, as shown below:

For more advanced Git operations, we recommend using ungit. With ungit, you can perform most common Git actions, such as push, pull, merge, branch, tag, and checkout.

Diffing and Merging Notebooks

Jupyter notebooks are great, but they are often huge files with a very specific JSON format. To enable seamless diffing and merging via Git, this workspace comes pre-installed with nbdime. Nbdime understands the structure of notebook documents and, therefore, automatically makes intelligent decisions when diffing and merging notebooks. If you have merge conflicts, nbdime will make sure that the notebook is still readable by Jupyter, as shown below:

Furthermore, the workspace comes pre-installed with jupytext, a Jupyter plugin that reads and writes notebooks as plain text files. This allows you to open, edit, and run scripts or markdown files (e.g., .py, .md) as notebooks within Jupyter. In the following screenshot, we have opened a markdown file via Jupyter:

In combination with Git, jupytext enables a clear diff history and easy merging of version conflicts. With both of those tools, collaborating on Jupyter notebooks with Git becomes straightforward.
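To see why notebooks are painful to diff as plain text, it helps to look at the file format itself. The snippet below builds the JSON skeleton of a minimal nbformat-4 notebook with the standard library; note that cell outputs and execution counts live inline in the same file, which is what pollutes line-based Git diffs:

```python
import json

# Minimal structure of an .ipynb file: top-level metadata plus a list of cells.
minimal_notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": ["# Example\n"]},
        {
            "cell_type": "code",
            "execution_count": None,  # changes on every run, even if code is unchanged
            "metadata": {},
            "outputs": [],  # outputs are stored inline, which bloats diffs
            "source": ["print('hello')\n"],
        },
    ],
}

notebook_json = json.dumps(minimal_notebook, indent=1)
```

Tools like nbdime and jupytext both work around exactly this: nbdime diffs the structure instead of the raw JSON lines, while jupytext keeps a plain-text twin of the notebook.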

File Sharing

The workspace has a feature to share any file or folder with anyone via a token-protected link. To share data via a link, select any file or folder from the Jupyter directory tree and click on the share button as shown in the following screenshot:

This will generate a unique link protected via a token that gives anyone with the link access to view and download the selected data via the Filebrowser UI:

To deactivate or manage (e.g., provide edit permissions) shared links, open the Filebrowser via Open Tool -> Filebrowser and select Settings->User Management.

Access Ports

It is possible to securely access any workspace internal port by selecting Open Tool -> Access Port. With this feature, you are able to access a REST API or web application running inside the workspace directly with your browser. The feature enables developers to build, run, test, and debug REST APIs or web applications directly from the workspace.

If you want to use an HTTP client or share access to a given port, you can select the Get shareable link option. This generates a token-secured link that anyone with access to the link can use to access the specified port.

The HTTP app needs to be resolvable from a relative URL path or be configured with a base path (/tools/PORT/).
Example (click to expand...)

1. Start an HTTP server on port `1234` by running this command in a terminal within the workspace: `python -m http.server 1234`
2. Select `Open Tool -> Access Port`, input port `1234`, and select the `Get shareable link` option.
3. Click `Access`, and you will see the content provided by Python's `http.server`.
4. The opened link can also be shared with other people or called from external applications (e.g., try it in Chrome's Incognito Mode).

SSH Access

SSH provides a powerful set of features that enables you to be more productive with your development tasks. You can easily set up a secure and passwordless SSH connection to a workspace by selecting Open Tool -> SSH. This will generate a secure setup command that can be run on any Linux or Mac machine to configure a passwordless & secure SSH connection to the workspace. Alternatively, you can also download the setup script and run it (instead of using the command).

The setup script only runs on Mac and Linux. Windows is currently not supported.

Just run the setup command or script on the machine from which you want to set up a connection to the workspace and input a name for the connection (e.g., my-workspace). You might also be asked for some additional input during the process, e.g., to install a remote kernel if remote_ikernel is installed. Once the passwordless SSH connection is successfully set up and tested, you can securely connect to the workspace by simply executing ssh my-workspace.
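For orientation, such a passwordless connection typically ends up as an entry in your local `~/.ssh/config`. The fragment below is purely illustrative of the general shape; the actual host, port, user, and key path written by the setup script will differ:

```
Host my-workspace
    HostName <workspace-host>
    Port <workspace-ssh-port>
    User <workspace-user>
    IdentityFile <path-to-generated-key>
```

With such an entry in place, `ssh my-workspace` resolves all connection details from the config file.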

Besides the ability to execute commands on a remote machine, SSH also provides a variety of other features that can improve your development workflow as described in the following sections.

Tunnel Ports (click to expand...)

An SSH connection can be used to tunnel application ports from the remote machine to the local machine, or vice versa. For example, you can expose the workspace-internal port `5901` (VNC server) to the local machine on port `5000` by executing:

```bash
ssh -nNT -L 5000:localhost:5901 my-workspace
```

To expose an application port from your local machine to a workspace, use the `-R` option (instead of `-L`).

After the tunnel is established, you can use your favorite VNC viewer on your local machine and connect to `vnc://localhost:5000` (default password: `vncpassword`). To make the tunnel connection more resilient, we recommend using autossh to automatically restart SSH tunnels in case the connection dies:

```bash
autossh -M 0 -f -nNT -L 5000:localhost:5901 my-workspace
```

Port tunneling is quite useful whenever you have started a server-based tool within the workspace that you would like to make accessible to another machine. In its default setting, the workspace already runs a variety of tools on different ports, such as:

- `8080`: Main workspace port with access to all integrated tools.
- `8090`: Jupyter server.
- `8054`: VS Code server.
- `5901`: VNC server.
- `3389`: RDP server.
- `22`: SSH server.

You can find port information on all the tools in the supervisor configuration.
📖 For more information about port tunneling/forwarding, we recommend this guide.
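Once a tunnel like the ones above is running, a quick sanity check is to test whether the forwarded local port accepts TCP connections. This is a generic check using the Python standard library, not a workspace-specific tool:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds,
    e.g., to verify that an SSH tunnel endpoint is up."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `ssh -nNT -L 5000:localhost:5901 my-workspace` has been started,
# port_open("localhost", 5000) should succeed.
```

The same check works for any forwarded port, e.g., a tunneled Jupyter or VS Code server.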

Copy Data via SCP (click to expand...)

SCP allows files and directories to be securely copied to, from, or between different machines via SSH connections. For example, to copy a local file (`./local-file.txt`) into the `/workspace` folder inside the workspace, execute:

```bash
scp ./local-file.txt my-workspace:/workspace
```

To copy the `/workspace` directory from `my-workspace` to the working directory of the local machine, execute:

```bash
scp -r my-workspace:/workspace .
```
📖 For more information about scp, we recommend this guide.

Sync Data via Rsync (click to expand...)

Rsync is a utility for efficiently transferring and synchronizing files between different machines (e.g., via SSH connections) by comparing the modification times and sizes of files. The rsync command determines which files need to be updated each time it is run, which is far more efficient and convenient than using something like scp or sftp. For example, to sync all content of a local folder (`./local-project-folder/`) into the `/workspace/remote-project-folder/` folder inside the workspace, execute:

```bash
rsync -rlptzvP --delete --exclude=".git" "./local-project-folder/" "my-workspace:/workspace/remote-project-folder/"
```

If you have made changes inside the folder on the workspace, you can sync those changes back to the local folder by swapping the source and destination arguments:

```bash
rsync -rlptzvP --delete --exclude=".git" "my-workspace:/workspace/remote-project-folder/" "./local-project-folder/"
```

You can rerun these commands whenever you want to synchronize the latest copy of your files. Rsync will make sure that only updates are transferred.
📖 You can find more information about rsync on this man page.
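Rsync's efficiency comes from its default "quick check": a file is only transferred when the destination is missing, differs in size, or differs in modification time. The function below is a simplified sketch of that rule, not rsync's actual implementation:

```python
import os

def needs_update(src_path, dst_path, mtime_window=1.0):
    """Simplified version of rsync's quick check: transfer when the
    destination is missing, differs in size, or differs in mtime."""
    if not os.path.exists(dst_path):
        return True
    src, dst = os.stat(src_path), os.stat(dst_path)
    if src.st_size != dst.st_size:
        return True
    # rsync compares modification times, optionally with a tolerance
    # window (cf. the --modify-window option).
    return abs(src.st_mtime - dst.st_mtime) > mtime_window
```

Only files failing this check are then delta-transferred, which is why rerunning rsync on an already-synced tree is nearly instantaneous.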

Mount Folders via SSHFS (click to expand...)

Besides copying and syncing data, an SSH connection can also be used to mount directories from a remote machine into the local filesystem via SSHFS. For example, to mount the `/workspace` directory of `my-workspace` into a local path (e.g., `/local/folder/path`), execute:

```bash
sshfs -o reconnect my-workspace:/workspace /local/folder/path
```

Once the remote directory is mounted, you can interact with the remote filesystem the same way as with any local directory and file.
📖 For more information about sshfs, we recommend this guide.

Remote Development

The workspace can be integrated and used as a remote runtime (also known as remote kernel/machine/interpreter) for a variety of popular development tools and IDEs, such as Jupyter, VS Code, PyCharm, Colab, or Atom Hydrogen. This way, you can connect your favorite development tool running on your local machine to a remote machine for code execution. This enables a local-quality development experience with remote-hosted compute resources.

These integrations usually require a passwordless SSH connection from the local machine to the workspace. To set up an SSH connection, please follow the steps explained in the SSH Access section.

Jupyter - Remote Kernel (click to expand...)

The workspace can be added to a Jupyter instance as a remote kernel by using the remote_ikernel tool. If you have installed remote_ikernel (`pip install remote_ikernel`) on your local machine, the SSH setup script of the workspace will automatically offer you the option to set up a remote kernel connection.
When running kernels on remote machines, the notebooks themselves will be saved onto the local filesystem, but the kernel will only have access to the filesystem of the remote machine running the kernel. If you need to sync data, you can make use of rsync, scp, or sshfs as explained in the SSH Access section.
In case you want to manually set up and manage remote kernels, use the remote_ikernel command-line tool, as shown below:

```bash
# Replace my-workspace with the name of a workspace SSH connection
remote_ikernel manage --add \
    --interface=ssh \
    --kernel_cmd="ipython kernel -f {connection_file}" \
    --name="ml-server Py 3.6" \
    --host="my-workspace"
```

You can use the remote_ikernel command-line functionality to list (`remote_ikernel manage --show`) or delete (`remote_ikernel manage --delete <kernel-name>`) remote kernel connections.
VS Code - Remote Machine (click to expand...)

The Visual Studio Code Remote - SSH extension allows you to open a remote folder on any machine with SSH access and work with it just as you would if the folder were on your own machine. Once connected to a remote machine, you can interact with files and folders anywhere on the remote filesystem and take full advantage of VS Code's feature set (IntelliSense, debugging, and extension support). The extension discovers and works out-of-the-box with passwordless SSH connections as configured by the workspace SSH setup script. To enable your local VS Code application to connect to a workspace:

1. Install the Remote - SSH extension inside your local VS Code.
2. Run the SSH setup script of a selected workspace as explained in the [SSH Access](#ssh-access) section.
3. Open the Remote-SSH panel in your local VS Code. All configured SSH connections should be automatically discovered. Just select any configured workspace connection you would like to connect to, as shown below:
📖 You can find additional features and information about the Remote SSH extension in this guide.


Tensorboard provides a suite of visualization tools to make it easier to understand, debug, and optimize your experiment runs. It includes logging features for scalars, histograms, model structure, embeddings, and text & image visualization. The workspace comes pre-installed with the jupyter_tensorboard extension, which integrates Tensorboard into the Jupyter interface with functionality to start, manage, and stop instances. You can open a new instance for a valid logs directory, as shown below:

If you have opened a Tensorboard instance in a valid log directory, you will see the visualizations of your logged data:

Tensorboard can be used in combination with many other ML frameworks besides Tensorflow. By using the tensorboardX library, you can log from basically any Python-based library. PyTorch also has a direct Tensorboard integration, as described here.

If you prefer to see Tensorboard directly within your notebook, you can make use of the following Jupyter magic:

In [ ]:
%load_ext tensorboard.notebook
%tensorboard --logdir /workspace/path/to/logs

Hardware Monitoring

The workspace provides two pre-installed web-based tools that help developers get insights into everything happening on the system and identify performance bottlenecks during model training and other experimentation tasks.

Netdata (Open Tool -> Netdata) is a real-time hardware and performance monitoring dashboard that visualizes the processes and services on your Linux system. It monitors metrics about CPU, GPU, memory, disks, networks, processes, and more.

Glances (Open Tool -> Glances) is another web-based hardware monitoring dashboard and can be used as an alternative to Netdata.

Netdata and Glances will show you the hardware statistics for the entire machine on which the workspace container is running.

Run as a job

A job is defined as any computational task that runs for a certain time to completion, such as a model training or a data pipeline.

The workspace image can also be used to execute arbitrary Python code without starting any of the pre-installed tools. This provides a seamless way to productize your ML projects since the code that has been developed interactively within the workspace will have the same environment and configuration when run as a job via the same workspace image.

Run Python code as a job via the workspace image (click to expand...)

To run Python code as a job, you need to provide a path or URL to a code directory (or script) via `EXECUTE_CODE`. The code can either be already mounted into the workspace container or downloaded from a version control system (e.g., git or svn) as described in the following sections. The selected code path needs to be Python-executable. In case the selected code is a directory (e.g., whenever you download the code from a VCS), you need to put a `` file at the root of this directory. The `` needs to contain the code that starts your job.

#### Run code from version control system

You can execute code directly from Git, Mercurial, Subversion, or Bazaar by using the pip-vcs format as described in this guide. For example, to execute code from a subdirectory of a git repository, just run:

```bash
docker run --env EXECUTE_CODE="git+" mltooling/ml-workspace:latest
```
📖 For additional information on how to specify branches, commits, or tags please refer to this guide.
#### Run code mounted into the workspace

In the following example, we mount the current working directory (expected to contain our code) into the `/workspace/ml-job/` directory of the workspace and execute it:

```bash
docker run -v "${PWD}:/workspace/ml-job/" --env EXECUTE_CODE="/workspace/ml-job/" mltooling/ml-workspace:latest
```

#### Install Dependencies

In case the pre-installed workspace libraries are not compatible with your code, you can install or change dependencies by just adding one or multiple of the following files to your code directory:

- `requirements.txt`: pip requirements format for pip-installable dependencies.
- `environment.yml`: conda environment file to create a separate Python environment.
- ``: A shell script executed via `/bin/bash`.

The execution order is: 1. `environment.yml` -> 2. `` -> 3. `requirements.txt`

#### Test job in interactive mode

You can test your job code within the workspace (started normally with interactive tools) by executing the following Python script:

```bash
python /resources/scripts/ /path/to/your/job
```

#### Build a custom job image

It is also possible to embed your code directly into a custom job image, as shown below:

```dockerfile
FROM mltooling/ml-workspace:latest

# Add job code to image
COPY ml-job /workspace/ml-job
ENV EXECUTE_CODE=/workspace/ml-job

# Install requirements only
RUN python /resources/scripts/ --requirements-only

# Execute only the code at container startup
CMD ["python", "/resources/", "--code-only"]
```
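As a sketch, the entry file of such a job directory could look like the following. Note that the required entry-file name is left blank in the text above, so the filename and the training logic here are purely illustrative stand-ins:

```python
# Hypothetical entry script for a workspace job directory; a real job would
# import and run its own training or pipeline code here.
import logging
import sys

logging.basicConfig(level=logging.INFO, stream=sys.stdout)
log = logging.getLogger("ml-job")

def run():
    # Stand-in for actual work, e.g., model training or a data pipeline.
    result = sum(i * i for i in range(10))
    log.info("job finished, result=%s", result)
    return result

if __name__ == "__main__":
    run()
```

Because the job runs in the same workspace image used for development, the script sees the same interpreter, libraries, and paths as your interactive notebooks.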

Pre-installed Libraries and Interpreters

The workspace is pre-installed with many popular interpreters, data science libraries, and ubuntu packages:

  • Interpreters: Miniconda 3 (Python 3.6), Java 8, NodeJS 11
  • Python libraries: Tensorflow, Keras, PyTorch, Sklearn, CNTK, XGBoost, Theano, Fastai, and many more

The full list of installed tools can be found within the Dockerfile.

For every minor version release, we run vulnerability, virus, and security checks within the workspace using vuls, safety, and clamav to make sure that the workspace environment is as secure as possible.


The workspace provides a high degree of extensibility. Within the workspace, you have full root & sudo privileges to install any library or tool you need via terminal (e.g., pip, apt-get, conda, or npm). You can open a terminal in one of the following ways:

  • Jupyter: New -> Terminal
  • Desktop VNC: Applications -> Terminal Emulator
  • JupyterLab: File -> New -> Terminal
  • VS Code: Terminal -> New Terminal

Additionally, pre-installed tools such as Jupyter, JupyterLab, and Visual Studio Code each provide their own rich ecosystem of extensions. The workspace also contains a collection of installer scripts for many commonly used development tools or libraries (e.g., PyCharm, Zeppelin, RStudio, Starspace). Those scripts can be either executed from the Desktop VNC (double-click on the script within the Tools folder on the Desktop) or from a terminal (execute any tool script from the /resources/tools/ folder).

Example (click to expand...)

For example, to install the Apache Zeppelin notebook server, simply execute:

```bash
/resources/tools/ --port=1234
```

After installation, refresh the Jupyter website and the Zeppelin tool will be available under `Open Tool -> Zeppelin`. Other tools might only be available within the Desktop VNC (e.g., `atom` or `pycharm`) or do not provide any UI (e.g., `starspace`, `docker-client`).

As an alternative to extending the workspace at runtime, you can also customize the workspace Docker image to create your own flavor as explained in the FAQ section.


The ML Workspace project is maintained by Lukas Masuch and Benjamin Räthlein. Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly so that more people can benefit from it.

  • 🚨 Bug Reports
  • 🎁 Feature Requests
  • 👩‍💻 Usage Questions
  • 🗯 General Discussion

Next Steps