Last Updated: February 13, 2026

OpenClaw is an open-source AI automation framework for building personal agents and workflows. This guide walks through the full flow:
  1. Deploy Ollama on SaladCloud.
  2. Install and onboard OpenClaw locally.
  3. Point OpenClaw to your Salad-hosted Ollama endpoint.
  4. Pair and test Telegram.

Why SaladCloud for OpenClaw?

  • Cost-effective: Use flat compute pricing instead of per-token API pricing.
  • Scalable: Scale replicas based on demand.
  • Fast setup: Deploy Ollama with a prebuilt Salad recipe.
  • Flexible: Use any Ollama model that fits your use case.

Prerequisites

  • A SaladCloud account.
  • A local machine for OpenClaw (macOS, Linux, or Windows with WSL2).
  • A Telegram account.

Part 1: Deploy Ollama on SaladCloud

Step 1: Choose a model

Recommended starter models from the Ollama OpenClaw guide:
Model           Description
qwen3-coder     Optimized for coding tasks
glm-4.7-flash   Balanced performance and speed
gpt-oss:20b     Balanced performance and speed
gpt-oss:120b    Improved capability
You can use other Ollama models too, but these are good first choices for OpenClaw workflows.

Step 2: Deploy the Ollama recipe

  1. Open portal.salad.com.
  2. Select your organization and project.
  3. Click Deploy a Container Group.
  4. Choose the Ollama recipe.
  5. Configure your deployment:
    • Container Group Name: a unique name for your deployment
    • Model: for example gpt-oss:20b
    • Replicas: Start with the default, or drop to 1 if you plan to configure fallback models in OpenClaw.
    • Hardware configuration: The default configuration is sufficient for most models; click Modify Configuration if you want to change it.
    • Authentication: To restrict access to your Ollama endpoint, enable authentication. If you do, remember to add your Salad API Key when configuring OpenClaw later.
  6. Click Deploy.
  7. Wait until at least one replica is Running.

Step 3: Copy endpoint and test

Copy the deployment access domain (for example, https://guava-thyme-123.salad.cloud), then test the Ollama-native endpoint. The same command is shown on your deployment page.
curl https://<your-deployment>.salad.cloud/api/chat \
  -X POST \
  -H 'Content-Type: application/json' \
  -H 'Salad-Api-Key: <YOUR_SALAD_API_KEY>' \
  -d '{
    "model": "<your-model-name>",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is deep learning?"}
    ],
    "stream": false
  }'
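With "stream": false, Ollama's /api/chat returns a single JSON object whose reply text sits under message.content. The snippet below parses an illustrative sample response to show where the text lives (the payload is made up for demonstration; real responses include additional fields such as timing metadata):

```shell
# Illustrative, abbreviated sample of a non-streaming /api/chat response.
response='{"model":"gpt-oss:20b","message":{"role":"assistant","content":"Deep learning is a subset of machine learning."},"done":true}'

# Extract just the assistant text. With jq installed you would run:
#   echo "$response" | jq -r '.message.content'
# A grep/cut fallback works on systems without jq:
echo "$response" | grep -o '"content":"[^"]*"' | cut -d'"' -f4
```

If the test curl above returns a response of this shape, the deployment is working and you can move on to installing OpenClaw.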

Part 2: Install and onboard OpenClaw locally

Step 4: Install OpenClaw

Run the following command in your terminal to install OpenClaw CLI:
curl -fsSL https://openclaw.ai/install.sh | bash
Optional Docker path (if you prefer a containerized local OpenClaw for additional security and isolation):
git clone https://github.com/openclaw/openclaw
cd openclaw
./docker-setup.sh
This script builds the gateway image, runs the onboarding wizard, prints optional provider setup hints, starts the gateway via Docker Compose, generates a gateway token, and writes it to .env. For full installation details, see the OpenClaw docs.

Step 5: Run onboarding

If OpenClaw has not started the onboarding wizard automatically, you can start it with:
openclaw onboard --install-daemon
During onboarding, you will be prompted to set up channels and providers. For this guide, we will set up the Telegram channel now and skip the rest:
  1. Accept the local-agent security warning. If you choose No, onboarding exits immediately. Choose Yes only if you understand the agent can execute actions with your local user permissions.
  2. Select the quick start path.
  3. Skip model setup for now.
  4. Select Telegram when OpenClaw asks you to choose channels.
  5. Paste your Telegram bot token when prompted. (See Step 6 below for details.)
  6. Complete or skip the remaining optional setup steps.

Step 6: Telegram connection details (during onboarding)

Use this flow when onboarding asks for Telegram setup. First, create a Telegram bot in the Telegram app and get its token:
  1. Open Telegram and search for @BotFather
  2. Start a chat and run:
/newbot
  3. Set a display name for your bot.
  4. Set a bot username that ends with _bot (for example, salad_openclaw_bot).
  5. Once your bot is created, BotFather sends a summary message with the bot details; find and copy the bot token from that message.
  6. Return to OpenClaw onboarding and paste the token when prompted, then continue with the next onboarding steps.
  7. Go back to your Telegram bot and send it a first message. The bot replies with a pairing code and a command to run; copy that command and paste it into your terminal to complete pairing. It will look something like this:
openclaw pairing approve telegram <CODE>
If you skipped Telegram during onboarding, you can still configure it later by re-running openclaw onboard --install-daemon, or by adding this to ~/.openclaw/openclaw.json:
{
  channels: {
    telegram: {
      enabled: true,
      botToken: '<YOUR_BOT_TOKEN>',
      dmPolicy: 'pairing',
      groups: { '*': { requireMention: true } },
    },
  },
}

Part 3: Configure OpenClaw to use Salad-hosted Ollama

In this part, you will connect OpenClaw to your Salad-hosted Ollama endpoint. You will add a model provider that points to your Salad deployment URL, set the default model OpenClaw should use, and verify that OpenClaw can see the configured model before testing in Telegram.

Step 7: Update OpenClaw config

Edit ~/.openclaw/openclaw.json and add/merge:
{
  models: {
    providers: {
      ollama: {
        baseUrl: 'https://<your-deployment>.salad.cloud/v1',
        apiKey: 'ollama-local',
        api: 'openai-completions',
        headers: {
          'Salad-Api-Key': '<YOUR_SALAD_API_KEY>',
        },
        models: [
          {
            id: '<your-model-name>',
            name: '<your-model-name>',
            reasoning: false,
            input: ['text'],
            cost: {
              input: 0,
              output: 0,
              cacheRead: 0,
              cacheWrite: 0,
            },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
  agents: {
    defaults: {
      model: {
        primary: 'ollama/<your-model-name>',
      },
    },
  },
}
If you enabled authentication on your Salad deployment, keep the custom Salad-Api-Key header; if not, remove the headers section:
{
  models: {
    providers: {
      ollama: {
        headers: {
          'Salad-Api-Key': '<YOUR_SALAD_API_KEY>',
        },
      },
    },
  },
}
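Before restarting, you can sanity-check the base URL this provider config points at. Assuming your Salad-hosted Ollama exposes the standard OpenAI-compatible API under /v1, it also answers GET /v1/models; drop the Salad-Api-Key header if you did not enable authentication:

```shell
curl https://<your-deployment>.salad.cloud/v1/models \
  -H 'Salad-Api-Key: <YOUR_SALAD_API_KEY>'
```

The model id returned here should match the <your-model-name> value you put in the provider config.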

Part 4: Restart and test

Step 8: Restart gateway

openclaw doctor --fix # verify the configuration is correct and fix common issues
openclaw gateway restart

Step 9: Send a test message

  1. Open Telegram and send a direct message to your bot.
  2. Wait for the bot response.
You can also use the local OpenClaw UI at http://127.0.0.1:18789/.

Troubleshooting

OpenClaw cannot connect to Ollama endpoint

  • Confirm the Salad container group is Running.
  • Verify the endpoint URL is correct.
  • Confirm model download has completed (check container logs).
  • Verify provider baseUrl ends with /v1 in OpenClaw config.
  • If auth is enabled, ensure Salad-Api-Key is set in headers.
  • Use the curl command from Step 3 to send a request directly to the endpoint and confirm it responds.
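As a quick liveness probe, a native Ollama server answers GET on its root path with the plain text "Ollama is running". Assuming the Salad gateway forwards that route, this confirms the container is reachable before you dig into OpenClaw configuration:

```shell
curl https://<your-deployment>.salad.cloud/ \
  -H 'Salad-Api-Key: <YOUR_SALAD_API_KEY>'
# A healthy Ollama server typically replies: Ollama is running
```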

Telegram bot is not responding

  • Verify botToken is correct.
  • Re-run pairing approval if needed.
  • Check logs:
openclaw logs --follow

Cost optimization tips

Model selection

  • Start with smaller models for simple tasks.
  • Move to larger models only when your workflow needs stronger reasoning.
  • Compare quality and speed before scaling up.

Scaling strategy

  • Start with 1-3 replicas.
  • Increase replicas only after you observe real load.
  • Use autoscaling for variable demand.
  • Configure OpenClaw model fallbacks for resilience during reallocations.

References