Overview
This recipe provisions a GPU-ready Kelpie worker with the entire RISC Zero toolchain already installed:
- Stable Rust + `rzup` toolchains (Rust, `cargo-risczero`, `r0vm`)
- CUDA 12.x for accelerated proving
- A clone of the latest `risc0` repo under `/opt/risc0`
- Sample helpers in `/opt/risc0/examples/`
Each job downloads your project from the bucket, runs a `cargo` command (e.g., `cargo run -F cuda --release`), and uploads `/opt/results/` back to the same bucket.
Use this recipe when you need to prove custom guest programs or benchmark RISC Zero locally on Salad GPUs without wiring
up your own infrastructure. You can also use it as a starting point to build more complex proof pipelines of your own.
Prerequisites
- Salad organization + project where you can deploy recipes and create API keys.
- S3-compatible bucket (Cloudflare R2, AWS S3, MinIO, etc.) for inputs and outputs. You will provide the Access Key, Secret Key, Region, and (for R2) Endpoint URL when deploying the recipe.
- (Optional) Queue-based autoscaling – invite the Kelpie service account to your Salad org if you plan to use Kelpie’s queue-aware scaler.
Deploy the Recipe
- Open the SaladCloud Portal, choose organization and project, then click Deploy a Container Group and search for RISC Zero ZKP.
- Fill in the form:
- Container Group Name – unique name of the container group.
- Storage Access Key / Secret Key – credentials for your S3-compatible bucket.
- Storage Region – e.g.,
autofor Cloudflare R2 orus-east-1for AWS. - Storage Endpoint URL – leave blank for AWS; for R2 provide the custom endpoint
(
https://<account>.r2.cloudflarestorage.com).
- Deploy. The container group comes online with 16 vCPUs, 8 GB RAM, and a 24 GB GPU. You can modify those settings by clicking Modify Configuration. Each replica runs one Kelpie worker.
Kelpie Job Flow
Kelpie jobs are simple JSON documents with:
- `command` + `arguments` – what to run (e.g., `bash -lc "RUSTFLAGS=\"-C target-cpu=native\" cargo run -F cuda --release |& tee /opt/results/hello_world.log"`)
- `sync.before` – download your project from S3.
- `sync.after` – upload `/opt/results/` once the job completes.
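Putting those fields together, a minimal job document might look like the sketch below. Only `command`, `arguments`, `sync.before`, and `sync.after` come from the description above; the bucket name, prefixes, and the per-entry field names (`bucket`, `prefix`, `local_path`) are illustrative assumptions, so check them against the bundled submitter before using.

```python
import json

# Sketch of a Kelpie job document. Bucket name, prefixes, and the
# sync-entry field names are placeholders, not the recipe's exact schema.
job = {
    "command": "bash",
    "arguments": [
        "-lc",
        'RUSTFLAGS="-C target-cpu=native" cargo run -F cuda --release '
        "|& tee /opt/results/hello_world.log",
    ],
    "sync": {
        # download the project from S3 before the job runs
        "before": [
            {"bucket": "my-bucket", "prefix": "hello-world/input/", "local_path": "/opt/project"}
        ],
        # upload /opt/results/ once the job completes
        "after": [
            {"bucket": "my-bucket", "prefix": "hello-world/output/", "local_path": "/opt/results/"}
        ],
    },
}

print(json.dumps(job, indent=2))
```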
Helper scripts live in the recipe repository (`salad-recipes/recipes/risc0/`):
- `submit_kelpie_job.py` – example job submitter
- `check_kelpie_job.py` – poll job status
Sample Submission Script
The bundled script already knows how to run the hello-world demo on GPU. Update `build_sync` to download your project prefix under `before`, then run it. The script:
- Creates a job ID (`job-xxxxxx`).
- Builds a Kelpie job that:
  - runs `RUSTFLAGS="-C target-cpu=native" cargo run -F cuda --release` in `/opt/risc0/examples/hello-world` while streaming output to `/opt/results/hello_world.log`
  - uses `sync.after` to upload `/opt/results/` to `s3://<bucket>/hello-world/<job-id>/`
- Posts the job to Kelpie and (optionally) polls until it finishes.
- Prints the Kelpie job ID.
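The job-building part of that flow can be sketched roughly as follows. This is not the real `submit_kelpie_job.py`; the ID format mirrors the `job-xxxxxx` pattern above, and the sync-entry field names are assumptions:

```python
import secrets

def build_job(bucket: str) -> dict:
    """Sketch of a hello-world job like the one submit_kelpie_job.py builds."""
    job_id = f"job-{secrets.token_hex(3)}"  # e.g. job-a1b2c3
    return {
        "id": job_id,
        "command": "bash",
        "arguments": [
            "-lc",
            "cd /opt/risc0/examples/hello-world && "
            'RUSTFLAGS="-C target-cpu=native" cargo run -F cuda --release '
            "|& tee /opt/results/hello_world.log",
        ],
        "sync": {
            # upload /opt/results/ to s3://<bucket>/hello-world/<job-id>/
            "after": [
                {"bucket": bucket, "prefix": f"hello-world/{job_id}/", "local_path": "/opt/results/"}
            ]
        },
    }

job = build_job("my-bucket")
print(job["id"])  # the job ID the script prints
```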
Replace the `cargo run ...` command with your own proof pipeline (e.g., `cargo run --release --bin my-host` or `python scripts/prove.py`). Make sure any artifacts you care about land under `/opt/results/` so Kelpie uploads them.
Sync Patterns
Typical sync for custom projects downloads inputs in `before` and uploads `/opt/results/` in `after`. Use the job ID (`${JOB_ID}`) inside prefixes to keep outputs separated by job.
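One way to build per-job prefixes is sketched below. The `inputs/`/`outputs/` prefix layout and the sync-entry field names are illustrative assumptions; the point is only that embedding the job ID keeps two jobs from overwriting each other:

```python
def sync_for(job_id: str, bucket: str) -> dict:
    """Build before/after sync entries whose S3 prefixes embed the job ID,
    so concurrent jobs never clobber each other's inputs or outputs."""
    return {
        "before": [
            {"bucket": bucket, "prefix": f"inputs/{job_id}/", "local_path": "/opt/project"}
        ],
        "after": [
            {"bucket": bucket, "prefix": f"outputs/{job_id}/", "local_path": "/opt/results/"}
        ],
    }

a = sync_for("job-000001", "my-bucket")
b = sync_for("job-000002", "my-bucket")
# Distinct job IDs yield distinct output prefixes.
assert a["after"][0]["prefix"] != b["after"][0]["prefix"]
```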
Monitor Jobs
Use the helper script or the raw API to check a job; the response includes the job’s `status` and all the other metadata.
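A polling loop in the spirit of `check_kelpie_job.py` might look like this sketch. The status strings and the injected `fetch_status` callable are assumptions; wire it to your actual call against the Kelpie job-status endpoint:

```python
import time
from typing import Callable

def wait_for_job(fetch_status: Callable[[str], str], job_id: str,
                 interval: float = 5.0, timeout: float = 3600.0) -> str:
    """Poll fetch_status(job_id) until the job reaches a terminal state.

    fetch_status should query Kelpie's job-status API and return the job's
    status string; the terminal state names here are an assumption.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"{job_id} did not finish within {timeout}s")

# Demo with a fake fetcher that reports success on the third poll.
statuses = iter(["pending", "running", "succeeded"])
print(wait_for_job(lambda _: next(statuses), "job-abc123", interval=0.0))  # succeeded
```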
Customizing the Worker
- Different Provers: copy your host + guest crates into the bucket and point Kelpie at them via `sync.before`. The base image already has CUDA + Rust.
- External Dependencies: download them inside your `bash -lc` script (e.g., `pip install -r requirements.txt`) or fork the recipe repo and modify it.
- Autoscaling: Kelpie supports queue-aware autoscaling, including scale-to-zero when the queue is empty. Invite the Kelpie service account to your organization, then create a scaling rule with the Create Scaling Rule endpoint.
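Tying the first two points together, a job for a custom prover that installs its Python dependencies inside the `bash -lc` script might be assembled like this. All paths, prefixes, and sync-entry field names are placeholders and assumptions, not the recipe's exact schema:

```python
def custom_prover_job(bucket: str, job_id: str) -> dict:
    """Sketch of a job that fetches a custom project via sync.before,
    installs dependencies in-script, and uploads results via sync.after."""
    script = (
        "cd /opt/project && "
        "pip install -r requirements.txt && "      # external dependencies
        "python scripts/prove.py |& tee /opt/results/prove.log"
    )
    return {
        "command": "bash",
        "arguments": ["-lc", script],
        "sync": {
            "before": [
                {"bucket": bucket, "prefix": f"projects/{job_id}/", "local_path": "/opt/project"}
            ],
            "after": [
                {"bucket": bucket, "prefix": f"results/{job_id}/", "local_path": "/opt/results/"}
            ],
        },
    }
```

Everything the prover writes under `/opt/results/` ends up in the bucket under the job's own prefix.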