mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-25 00:51:20 +00:00
* feat(gateway): skill-aware slash commands, paginated /commands, Telegram 100-cap

  Map active skills to Telegram's slash command menu so users can discover and
  invoke skills directly. Three changes:

  1. The Telegram menu now includes active skill commands alongside built-in
     commands, capped at 100 entries (the Telegram Bot API limit). Overflow
     commands remain callable but are hidden from the picker. A log line is
     emitted at startup when the cap is hit.
  2. A new /commands [page] gateway command allows paginated browsing of all
     commands and skills. /help now shows the first 10 skill commands and
     points to /commands for the full list.
  3. When a user types a slash command that matches a disabled or uninstalled
     skill, they get actionable guidance:
     - Disabled: "Enable it with: hermes skills config"
     - Optional (not installed): "Install with: hermes skills install official/<path>"

  Built on ideas from PR #3921 by @kshitijk4poor.

* chore: move 21 niche skills to optional-skills

  Move specialized/niche skills from built-in (skills/) to optional
  (optional-skills/) to reduce the default skill count. Users can install them
  with: hermes skills install official/<category>/<name>

  Moved skills (21):
  - mlops: accelerate, chroma, faiss, flash-attention, hermes-atropos-environments, huggingface-tokenizers, instructor, lambda-labs, llava, nemo-curator, pinecone, pytorch-lightning, qdrant, saelens, simpo, slime, tensorrt-llm, torchtitan
  - research: domain-intel, duckduckgo-search
  - devops: inference-sh cli

  Built-in skills: 96 → 75
  Optional skills: 22 → 43

* fix: only include repo built-in skills in Telegram menu, not user-installed

  User-installed skills (from the hub or manually added) stay accessible via
  /skills and by typing the command directly, but are not registered in the
  Telegram slash command picker. Only skills whose SKILL.md is under the repo's
  skills/ directory are included in the menu.

  This keeps the Telegram menu focused on the curated built-in set while
  user-installed skills remain discoverable through /skills and /commands.
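The cap-and-overflow behavior described in the first commit can be sketched as a pure function. This is a hypothetical helper for illustration, not the actual hermes-agent code; the function name and command lists are invented:

```python
# Telegram's Bot API allows at most 100 entries in the slash command menu.
TELEGRAM_COMMAND_CAP = 100

def build_telegram_menu(builtin, skills, cap=TELEGRAM_COMMAND_CAP):
    """Merge built-in and skill commands (built-ins first, duplicates dropped)
    and truncate to the cap. Returns the visible menu and the overflow count;
    overflow commands stay callable, they just don't appear in the picker."""
    merged = list(builtin) + [c for c in skills if c not in builtin]
    overflow = max(0, len(merged) - cap)
    return merged[:cap], overflow

# 3 built-ins + 120 skill commands -> 100 visible, 23 hidden
menu, overflow = build_telegram_menu(
    ["help", "commands", "skills"],
    [f"skill{i}" for i in range(120)],
)
```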
70 lines
1.3 KiB
Markdown
# Provider Configuration

Guide to using Instructor with different LLM providers.

## Anthropic Claude

```python
import instructor
from anthropic import Anthropic

# Basic setup
client = instructor.from_anthropic(Anthropic())

# With API key
client = instructor.from_anthropic(
    Anthropic(api_key="your-api-key")
)

# Recommended mode
client = instructor.from_anthropic(
    Anthropic(),
    mode=instructor.Mode.ANTHROPIC_TOOLS
)

# Usage
result = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[{"role": "user", "content": "..."}],
    response_model=YourModel
)
```
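The snippets in this guide pass a `response_model` without defining it. `YourModel` stands for any Pydantic model; as a sketch, a minimal one (the `UserInfo` name and fields here are invented for illustration) looks like:

```python
from pydantic import BaseModel

class UserInfo(BaseModel):
    """Example response model: Instructor validates the LLM's structured
    output against these typed fields."""
    name: str
    age: int

# The same validation Instructor runs on the model's output can be
# exercised locally without an API call:
info = UserInfo.model_validate({"name": "Ada", "age": 36})
```

Passing `response_model=UserInfo` to any of the `create` calls below would then return a validated `UserInfo` instance rather than raw text.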
## OpenAI

```python
import instructor
from openai import OpenAI

client = instructor.from_openai(OpenAI())

result = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=YourModel,
    messages=[{"role": "user", "content": "..."}]
)
```
## Local Models (Ollama)

```python
import instructor
from openai import OpenAI

client = instructor.from_openai(
    OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama"  # Ollama ignores the key, but the client requires one
    ),
    mode=instructor.Mode.JSON
)

result = client.chat.completions.create(
    model="llama3.1",
    response_model=YourModel,
    messages=[...]
)
```
## Modes

- `Mode.ANTHROPIC_TOOLS`: Recommended for Claude
- `Mode.TOOLS`: OpenAI function calling
- `Mode.JSON`: Fallback for unsupported providers
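The mode choice above can be expressed as a small lookup with `JSON` as the default. A sketch (mode names as plain strings here; real code would use the `instructor.Mode` enum members, and the helper name is invented):

```python
# Preferred structured-output mode per provider, per the list above.
PREFERRED_MODES = {
    "anthropic": "ANTHROPIC_TOOLS",  # Claude tool use
    "openai": "TOOLS",               # OpenAI function calling
}

def pick_mode(provider: str) -> str:
    # JSON mode is the documented fallback for providers without
    # native tool calling (e.g. local models served via Ollama).
    return PREFERRED_MODES.get(provider, "JSON")
```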