Lightning PyTorch Training Studio App

The Lightning PyTorch Training Studio App is a full-stack AI application built with the Lightning framework. It runs experiments and sweeps with state-of-the-art hyperparameter sampling algorithms, efficient experiment pruning strategies, and more.


Installation

Create a new virtual environment with Python 3.8+.

python -m venv .venv
source .venv/bin/activate

Clone and install lightning-hpo.

git clone https://github.com/Lightning-AI/lightning-hpo && cd lightning-hpo

pip install -e . -r requirements.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --pre

Make sure everything works fine.

python -m lightning run app app.py

Check the documentation to learn more!


Run the Training Studio App locally

In your first terminal, run the Lightning App.

lightning run app app.py

In a second terminal, connect to the Lightning App and download its CLI.

lightning connect localhost
lightning --help

Usage: lightning [OPTIONS] COMMAND [ARGS]...

  --help     Show this message and exit.

Lightning App Commands
  add dataset        Create a dataset association by providing a public S3 bucket and an optional mount point.
                     The contents of the bucket can then be mounted on experiments and sweeps and
                     accessed through the filesystem.
  remove dataset     Delete a dataset association. Note that this will not delete the data itself,
                     it will only make it unavailable to experiments and sweeps.
  delete experiment  Delete an experiment. Note that artifacts will still be available after the operation.
  delete sweep       Delete a sweep. Note that artifacts will still be available after the operation.
  download artifacts Download artifacts for experiments or sweeps.
  run experiment     Run an experiment by providing a script, the cloud compute type and optional
                     data entries to be made available at a given path.
  run sweep          Run a sweep by providing a script, the cloud compute type and optional
                     data entries to be made available at a given path. Hyperparameters can be
                     provided as lists (`model.lr="[0.01, 0.1]"`) or using distributions
                     (`model.lr="uniform(0.01, 0.1)"`, `model.lr="log_uniform(0.01, 0.1)"`).
                     Hydra multirun override syntax is also supported (see the example after this listing).
  show artifacts     Show artifacts for experiments or sweeps, in flat or tree layout.
  show data          List all data associations.
  show experiments   Show experiments and their statuses.
  show logs          Show logs of an experiment or a sweep. Optionally follow logs as they stream.
  show sweeps        Show all sweeps and their statuses, or the experiments for a given sweep.
  stop experiment    Stop an experiment. Note that currently experiments cannot be resumed.
  stop sweep         Stop all experiments in a sweep. Note that currently sweeps cannot be resumed.

You are connected to the local Lightning App. Return to the primary CLI with `lightning disconnect`.
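
For example, the list and distribution syntaxes documented above can be combined in a single sweep. This is a hypothetical invocation for illustration, built only from the syntax shown in the listing; `train.py` is the script used in the next section.

lightning run sweep train.py --model.lr "log_uniform(0.001, 0.1)" --data.batch "[32, 64]"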

Run your first sweep from the sweep_examples/scripts folder.

lightning run sweep train.py --model.lr "[0.001, 0.01, 0.1]" --data.batch "[32, 64]" --algorithm="grid_search" --requirements 'jsonargparse[signatures]>=4.15.2'
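
While the sweep runs, you can monitor it with the `show` commands from the CLI listing above, for example:

lightning show sweeps
lightning show logs

`show sweeps` lists every sweep and its status; `show logs` can optionally follow the logs as they stream.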

Scale by running the Training Studio App in the Cloud

Below, we train a 1B+ parameter LLM across multiple nodes.

lightning run app app.py --cloud

Connect to the App once ready.

lightning connect {APP_NAME}

Run your first multi-node training experiment from the sweep_examples/scripts folder (2 nodes with 4 V100 GPUs each).

lightning run experiment big_model.py --requirements deepspeed lightning-transformers==0.2.5 --num_nodes=2 --cloud_compute=gpu-fast-multi --disk_size=80
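
Once the experiment is running, its artifacts can be inspected and retrieved with the commands from the CLI listing above, for example:

lightning show artifacts
lightning download artifacts

The exact arguments these two commands accept may vary between versions; run `lightning --help` while connected to see the current options.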