# Quickstart

C3 brings GPU compute to academics. Configure your project with a simple `.c3` file, and C3 handles provisioning GPUs from multiple data centers so you get compute when you need it, at competitive prices.

## Install the CLI

```
curl -fsSL https://raw.githubusercontent.com/samleeney/c3/prod/install.sh | sh
```

Then authenticate:

```
c3 login
```

This opens your browser to sign in. Credentials are stored locally in `~/.c3/`.

## Run your first job

Clone the examples repo and run a GPU benchmark:

```
git clone https://github.com/samleeney/c3-examples.git
cd c3-examples/jax-matmul
c3 init
```

This creates a `.c3` config. Open it and fill in the fields:

```
# .c3
project: jax-matmul-example
script: run.sh
gpu: l40
time: "00:10:00"

python:
  project: ./                   # build env from pyproject.toml + uv.lock

output:
  - ./results                   # collected after the job finishes
```

The `script` field points to a bash script with your execution commands:

```
#!/bin/bash
# run.sh
python3 train.py
```

All job configuration (GPU, time limit, datasets, etc.) lives in `.c3` — the script is just the commands to run. The `python.project` setting tells C3 to build and cache your Python environment from `pyproject.toml` + `uv.lock`, so unchanged dependencies are reused instantly across jobs. See [Environment](https://docs.cthree.cloud/environment.md) for details.
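For reference, this is a minimal sketch of the kind of `pyproject.toml` that C3 builds from; the project name, Python version, and dependency are illustrative, not requirements of the examples repo:

```
# pyproject.toml (sketch) — C3 resolves and caches the environment
# declared here, pinned by the accompanying uv.lock
[project]
name = "jax-matmul-example"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "jax",
]
```

Because the lockfile pins exact versions, the cached environment is only rebuilt when `pyproject.toml` or `uv.lock` changes.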

Now deploy:

```
c3 deploy
```

C3 provisions a GPU, installs your Python dependencies, runs your script, and uploads the results. See [Project Configuration](https://docs.cthree.cloud/submission.md) for how each field works.

## Check status and download results

Monitor your job:

```
c3 squeue
```

```
JOB ID                     PROJECT                  STATUS       SUBMITTED            GPU
job_abc123                 jax-matmul-example       RUNNING      2024-01-15 10:30:00  l40
```

You can also view your jobs in the [web dashboard](https://cthree.cloud/dashboard/squeue).

Once complete, pull the results:

```
c3 pull job_abc123
```

Your output files are downloaded to `./job_abc123/results/`.

## Add a dataset

For jobs that need larger data, upload it to C3's storage once and mount it into any job:

```
c3 data cp ./my-data/ /datasets/my-data/
```

Then reference it in your `.c3`:

```
datasets:
  - ref: /datasets/my-data       # the remote path you uploaded to
    mount: /data/my-data         # where it appears on the GPU
```

Your script reads from `/data/my-data` as if the files were local. See [Data Mounting](https://docs.cthree.cloud/data-mounting.md) for details.
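Inside your script, the mount behaves like any local directory, so plain filesystem calls work. A minimal sketch of how a `train.py` might list the mounted files; the function name and default path are illustrative:

```
# Sketch: read the dataset from the mount point declared in `.c3`.
# The default path matches the `datasets.mount` field above; no
# C3-specific API is needed to access the files.
from pathlib import Path

def list_dataset(mount="/data/my-data"):
    """Return dataset filenames, read exactly like a local directory."""
    return sorted(p.name for p in Path(mount).iterdir())
```

Passing paths like this through ordinary file I/O keeps the script runnable locally too: point `mount` at a local copy of the data when developing, and at the C3 mount in production.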

## Start a new project

Create a `.c3` config for your own project:

```
cd my-project
c3 init
```

Edit the `.c3` file to set your GPU, time limit, and script, then deploy with `c3 deploy`.

***

**Next steps:** Learn about [project configuration](https://docs.cthree.cloud/submission.md) for all `.c3` options, [data mounting](https://docs.cthree.cloud/data-mounting.md) for working with datasets, [artifact output](https://docs.cthree.cloud/artifacts.md) for collecting results, [environment](https://docs.cthree.cloud/environment.md) for Python dependencies, or browse the [marketplace](https://docs.cthree.cloud/marketplace.md) to see available GPUs and pricing.
