docs: fix Venice AI typo (Venius → Venice)

Co-authored-by: jonisjongithub <jonisjongithub@users.noreply.github.com>

Co-authored-by: Clawdbot <bot@clawd.bot>
jonisjongithub 2026-01-29 15:31:48 -08:00, committed by Gustavo Madeira Santana
parent 8e2b17e0c5
commit 96c9ffdedc
2 changed files with 6 additions and 6 deletions


@@ -13,9 +13,9 @@ default model as `provider/model`.
 Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).
-## Highlight: Venius (Venice AI)
-Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
+## Highlight: Venice (Venice AI)
+Venice is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
 - Default: `venice/llama-3.3-70b`
 - Best overall: `venice/claude-opus-45` (Opus remains the strongest)
@@ -47,7 +47,7 @@ See [Venice AI](/providers/venice).
 - [Xiaomi](/providers/xiaomi)
 - [GLM models](/providers/glm)
 - [MiniMax](/providers/minimax)
-- [Venius (Venice AI, privacy-focused)](/providers/venice)
+- [Venice (Venice AI, privacy-focused)](/providers/venice)
 - [Ollama (local models)](/providers/ollama)
 ## Transcription providers


@@ -11,9 +11,9 @@ title: "Model Provider Quickstart"
 OpenClaw can use many LLM providers. Pick one, authenticate, then set the default
 model as `provider/model`.
-## Highlight: Venius (Venice AI)
-Venius is our recommended Venice AI setup for privacy-first inference with an option to use Opus for the hardest tasks.
+## Highlight: Venice (Venice AI)
+Venice is our recommended Venice AI setup for privacy-first inference with an option to use Opus for the hardest tasks.
 - Default: `venice/llama-3.3-70b`
 - Best overall: `venice/claude-opus-45` (Opus remains the strongest)
@@ -43,7 +43,7 @@ See [Venice AI](/providers/venice).
 - [Z.AI](/providers/zai)
 - [GLM models](/providers/glm)
 - [MiniMax](/providers/minimax)
-- [Venius (Venice AI)](/providers/venice)
+- [Venice (Venice AI)](/providers/venice)
 - [Amazon Bedrock](/bedrock)
 For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration,