mirror of
https://github.com/NousResearch/hermes-agent.git
synced 2026-04-25 00:51:20 +00:00
* feat(plugins): pluggable image_gen backends + OpenAI provider
Adds an ImageGenProvider ABC so image generation backends register as
bundled plugins under `plugins/image_gen/<name>/`. The plugin scanner
gains three primitives to make this work generically:
- `kind:` manifest field (`standalone` | `backend` | `exclusive`).
Bundled `kind: backend` plugins auto-load — no `plugins.enabled`
incantation. User-installed backends stay opt-in.
- Path-derived keys: `plugins/image_gen/openai/` gets key
`image_gen/openai`, so a future `tts/openai` cannot collide.
- Depth-2 recursion into category namespaces (parent dirs without a
`plugin.yaml` of their own).
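Under these rules, a bundled backend's manifest might look like the following sketch. Only the `kind` field and its three values are stated in this PR; the other field names here are assumptions for illustration, not the actual scanner schema:

```yaml
# plugins/image_gen/openai/plugin.yaml — hypothetical sketch.
# The scanner derives the key image_gen/openai from the path, so a
# future plugins/tts/openai/ gets tts/openai and cannot collide.
kind: backend        # standalone | backend | exclusive; bundled backends auto-load
name: openai         # assumed field
entry: provider.py   # assumed field
```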
Includes `OpenAIImageGenProvider` as the first consumer (gpt-image-1.5
default, plus gpt-image-1, gpt-image-1-mini, DALL-E 3/2). Base64
responses save to `$HERMES_HOME/cache/images/`; URL responses pass
through.
FAL stays in-tree for this PR — a follow-up ports it into
`plugins/image_gen/fal/` so the in-tree `image_generation_tool.py`
slims down. The dispatch shim in `_handle_image_generate` only fires
when `image_gen.provider` is explicitly set to a non-FAL value, so
existing FAL setups are untouched.
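The dispatch gate described above can be sketched as a small predicate (names are simplified assumptions, not the actual `_handle_image_generate` internals): the shim only routes away from FAL when a non-FAL provider is explicitly configured.

```python
# Hedged sketch of the dispatch gate: fire only when image_gen.provider
# is explicitly set to a non-FAL value, so unset configs and explicit
# FAL configs both keep the existing in-tree FAL path.
def should_dispatch_to_plugin(image_gen_cfg):
    provider = (image_gen_cfg or {}).get("provider")
    return bool(provider) and provider != "fal"
```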
- 41 unit tests (scanner recursion, kind parsing, gate logic,
registry, OpenAI payload shapes)
- E2E smoke verified: bundled plugin autoloads, registers, and
`_handle_image_generate` routes to OpenAI when configured
* fix(image_gen/openai): don't send response_format to gpt-image-*
The live API rejects it: 'Unknown parameter: response_format'
(verified 2026-04-21 with gpt-image-1.5). gpt-image-* models return
b64_json unconditionally, so the parameter was both unnecessary and
actively broken.
* feat(image_gen/openai): gpt-image-2 only, drop legacy catalog
gpt-image-2 is the latest/best OpenAI image model (released 2026-04-21)
and there's no reason to expose the older gpt-image-1.5 / gpt-image-1 /
dall-e-3 / dall-e-2 alongside it — slower, lower quality, or awkward
(dall-e-2 squares only). Trim the catalog down to a single model.
Live-verified end-to-end: landscape 1536x1024 render of a Moog-style
synth matches prompt exactly, 2.4MB PNG saved to cache.
* feat(image_gen/openai): expose gpt-image-2 as three quality tiers
Users pick speed/fidelity via the normal model picker instead of a
hidden quality knob. All three tier IDs resolve to the single underlying
gpt-image-2 API model with a different quality parameter:
gpt-image-2-low ~15s fast iteration
gpt-image-2-medium ~40s default
gpt-image-2-high ~2min highest fidelity
Live-measured on OpenAI's API today: 15.4s / 40.8s / 116.9s for the
same 1024x1024 prompt.
Config:
image_gen.openai.model: gpt-image-2-high
# or
image_gen.model: gpt-image-2-low
# or env var for scripts/tests
OPENAI_IMAGE_MODEL=gpt-image-2-medium
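The precedence among these three settings (plus the built-in default) is first-hit-wins, which can be sketched as follows — the helper name and argument shapes here are simplified assumptions, not the provider's actual code:

```python
# First-hit-wins tier resolution: env var, then image_gen.openai.model,
# then image_gen.model, then the default tier.
TIERS = {"gpt-image-2-low", "gpt-image-2-medium", "gpt-image-2-high"}
DEFAULT_TIER = "gpt-image-2-medium"

def resolve_tier(env, image_gen_cfg):
    """env: os.environ-like mapping; image_gen_cfg: parsed config section."""
    for candidate in (
        env.get("OPENAI_IMAGE_MODEL"),                  # 1. env var escape hatch
        (image_gen_cfg.get("openai") or {}).get("model"),  # 2. image_gen.openai.model
        image_gen_cfg.get("model"),                     # 3. image_gen.model
    ):
        if candidate in TIERS:
            return candidate
    return DEFAULT_TIER                                 # 4. fallback
```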
Live-verified end-to-end with the low tier: 18.8s landscape render of a
golden retriever in wildflowers, vision-confirmed exact match.
* feat(tools_config): plugin image_gen providers inject themselves into picker
'hermes tools' → Image Generation now shows plugin-registered backends
alongside Nous Subscription and FAL.ai without tools_config.py needing
to know about them. OpenAI appears as a third option today; future
backends appear automatically as they're added.
Mechanism:
- ImageGenProvider gains an optional get_setup_schema() hook
(name, badge, tag, env_vars). Default derived from display_name.
- tools_config._plugin_image_gen_providers() pulls the schemas from
every registered non-FAL plugin provider.
- _visible_providers() appends those rows when rendering the Image
Generation category.
- _configure_provider() handles the new image_gen_plugin_name marker:
writes image_gen.provider and routes to the plugin's list_models()
catalog for the model picker.
- _toolset_needs_configuration_prompt('image_gen') stops demanding a
FAL key when any plugin provider reports is_available().
FAL is skipped in the plugin path because it already has hardcoded
TOOL_CATEGORIES rows — when it gets ported to a plugin in a follow-up
PR the hardcoded rows go away and it surfaces through the same path
as OpenAI.
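The mechanism above can be sketched with stubbed shapes (not the real tools_config API — class and function names here are illustrative): a default schema is derived from `display_name`, and picker rows are collected from every registered non-FAL provider.

```python
# Hypothetical sketch of schema collection for the picker.
class DemoProvider:
    def __init__(self, name, display_name, schema=None):
        self.name = name
        self.display_name = display_name
        self._schema = schema

    def get_setup_schema(self):
        # Default derived from display_name when the plugin doesn't override.
        return self._schema or {"name": self.display_name, "env_vars": []}

def plugin_image_gen_rows(providers):
    # FAL is skipped here — it still has hardcoded TOOL_CATEGORIES rows.
    return [p.get_setup_schema() for p in providers if p.name != "fal"]

rows = plugin_image_gen_rows([
    DemoProvider("fal", "FAL.ai"),
    DemoProvider("openai", "OpenAI"),
])
```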
Verified live: picker shows Nous Subscription / FAL.ai / OpenAI.
Picking OpenAI prompts for OPENAI_API_KEY, then shows the
gpt-image-2-low/medium/high model picker sourced from the plugin.
397 tests pass across plugins/, tools_config, registry, and picker.
* fix(image_gen): close final gaps for plugin-backend parity with FAL
Two small places that still hardcoded FAL:
- hermes_cli/setup.py status line: an OpenAI-only setup showed
'Image Generation: missing FAL_KEY'. Now probes plugin providers
and reports '(OpenAI)' when one is_available() — or falls back to
'missing FAL_KEY or OPENAI_API_KEY' if nothing is configured.
- image_generate tool schema description: said 'using FAL.ai, default
FLUX 2 Klein 9B'. Rewrote provider-neutral — 'backend and model are
user-configured' — and notes the 'image' field can be a URL or an
absolute path, which the gateway delivers either way via
extract_local_files().
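The status-line logic described in the first bullet might be sketched like this (assumed names, not the real hermes_cli/setup.py code): plugin providers are probed first, then the legacy FAL key, then the combined "missing" message.

```python
# Hedged sketch of the setup-status probe for Image Generation.
class StubProvider:
    def __init__(self, display_name, available):
        self.display_name = display_name
        self._available = available

    def is_available(self):
        return self._available

def image_gen_status(fal_key_set, plugin_providers):
    for provider in plugin_providers:
        if provider.is_available():
            return f"configured ({provider.display_name})"
    if fal_key_set:
        return "configured (FAL.ai)"
    return "missing FAL_KEY or OPENAI_API_KEY"
```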
303 lines
9.5 KiB
Python
"""OpenAI image generation backend.

Exposes OpenAI's ``gpt-image-2`` model at three quality tiers as an
:class:`ImageGenProvider` implementation. The tiers are implemented as
three virtual model IDs so the ``hermes tools`` model picker and the
``image_gen.model`` config key behave like any other multi-model backend:

    gpt-image-2-low      ~15s    fastest, good for iteration
    gpt-image-2-medium   ~40s    default — balanced
    gpt-image-2-high     ~2min   slowest, highest fidelity

All three hit the same underlying API model (``gpt-image-2``) with a
different ``quality`` parameter. Output is base64 JSON → saved under
``$HERMES_HOME/cache/images/``.

Selection precedence (first hit wins):

1. ``OPENAI_IMAGE_MODEL`` env var (escape hatch for scripts / tests)
2. ``image_gen.openai.model`` in ``config.yaml``
3. ``image_gen.model`` in ``config.yaml`` (when it's one of our tier IDs)
4. :data:`DEFAULT_MODEL` — ``gpt-image-2-medium``
"""

from __future__ import annotations

import logging
import os
from typing import Any, Dict, List, Optional, Tuple

from agent.image_gen_provider import (
    DEFAULT_ASPECT_RATIO,
    ImageGenProvider,
    error_response,
    resolve_aspect_ratio,
    save_b64_image,
    success_response,
)

logger = logging.getLogger(__name__)


# ---------------------------------------------------------------------------
# Model catalog
# ---------------------------------------------------------------------------
#
# All three IDs resolve to the same underlying API model with a different
# ``quality`` setting. ``api_model`` is what gets sent to OpenAI;
# ``quality`` is the knob that changes generation time and output fidelity.

API_MODEL = "gpt-image-2"

_MODELS: Dict[str, Dict[str, Any]] = {
    "gpt-image-2-low": {
        "display": "GPT Image 2 (Low)",
        "speed": "~15s",
        "strengths": "Fast iteration, lowest cost",
        "quality": "low",
    },
    "gpt-image-2-medium": {
        "display": "GPT Image 2 (Medium)",
        "speed": "~40s",
        "strengths": "Balanced — default",
        "quality": "medium",
    },
    "gpt-image-2-high": {
        "display": "GPT Image 2 (High)",
        "speed": "~2min",
        "strengths": "Highest fidelity, strongest prompt adherence",
        "quality": "high",
    },
}

DEFAULT_MODEL = "gpt-image-2-medium"

_SIZES = {
    "landscape": "1536x1024",
    "square": "1024x1024",
    "portrait": "1024x1536",
}


def _load_openai_config() -> Dict[str, Any]:
    """Read ``image_gen`` from config.yaml (returns {} on any failure)."""
    try:
        from hermes_cli.config import load_config

        cfg = load_config()
        section = cfg.get("image_gen") if isinstance(cfg, dict) else None
        return section if isinstance(section, dict) else {}
    except Exception as exc:
        logger.debug("Could not load image_gen config: %s", exc)
        return {}


def _resolve_model() -> Tuple[str, Dict[str, Any]]:
    """Decide which tier to use and return ``(model_id, meta)``."""
    env_override = os.environ.get("OPENAI_IMAGE_MODEL")
    if env_override and env_override in _MODELS:
        return env_override, _MODELS[env_override]

    cfg = _load_openai_config()
    openai_cfg = cfg.get("openai") if isinstance(cfg.get("openai"), dict) else {}
    candidate: Optional[str] = None
    if isinstance(openai_cfg, dict):
        value = openai_cfg.get("model")
        if isinstance(value, str) and value in _MODELS:
            candidate = value
    if candidate is None:
        top = cfg.get("model")
        if isinstance(top, str) and top in _MODELS:
            candidate = top

    if candidate is not None:
        return candidate, _MODELS[candidate]

    return DEFAULT_MODEL, _MODELS[DEFAULT_MODEL]


# ---------------------------------------------------------------------------
# Provider
# ---------------------------------------------------------------------------


class OpenAIImageGenProvider(ImageGenProvider):
    """OpenAI ``images.generate`` backend — gpt-image-2 at low/medium/high."""

    @property
    def name(self) -> str:
        return "openai"

    @property
    def display_name(self) -> str:
        return "OpenAI"

    def is_available(self) -> bool:
        if not os.environ.get("OPENAI_API_KEY"):
            return False
        try:
            import openai  # noqa: F401
        except ImportError:
            return False
        return True

    def list_models(self) -> List[Dict[str, Any]]:
        return [
            {
                "id": model_id,
                "display": meta["display"],
                "speed": meta["speed"],
                "strengths": meta["strengths"],
                "price": "varies",
            }
            for model_id, meta in _MODELS.items()
        ]

    def default_model(self) -> Optional[str]:
        return DEFAULT_MODEL

    def get_setup_schema(self) -> Dict[str, Any]:
        return {
            "name": "OpenAI",
            "badge": "paid",
            "tag": "gpt-image-2 at low/medium/high quality tiers",
            "env_vars": [
                {
                    "key": "OPENAI_API_KEY",
                    "prompt": "OpenAI API key",
                    "url": "https://platform.openai.com/api-keys",
                },
            ],
        }

    def generate(
        self,
        prompt: str,
        aspect_ratio: str = DEFAULT_ASPECT_RATIO,
        **kwargs: Any,
    ) -> Dict[str, Any]:
        prompt = (prompt or "").strip()
        aspect = resolve_aspect_ratio(aspect_ratio)

        if not prompt:
            return error_response(
                error="Prompt is required and must be a non-empty string",
                error_type="invalid_argument",
                provider="openai",
                aspect_ratio=aspect,
            )

        if not os.environ.get("OPENAI_API_KEY"):
            return error_response(
                error=(
                    "OPENAI_API_KEY not set. Run `hermes tools` → Image "
                    "Generation → OpenAI to configure, or `hermes setup` "
                    "to add the key."
                ),
                error_type="auth_required",
                provider="openai",
                aspect_ratio=aspect,
            )

        try:
            import openai
        except ImportError:
            return error_response(
                error="openai Python package not installed (pip install openai)",
                error_type="missing_dependency",
                provider="openai",
                aspect_ratio=aspect,
            )

        tier_id, meta = _resolve_model()
        size = _SIZES.get(aspect, _SIZES["square"])

        # gpt-image-2 returns b64_json unconditionally and REJECTS
        # ``response_format`` as an unknown parameter. Don't send it.
        payload: Dict[str, Any] = {
            "model": API_MODEL,
            "prompt": prompt,
            "size": size,
            "n": 1,
            "quality": meta["quality"],
        }

        try:
            client = openai.OpenAI()
            response = client.images.generate(**payload)
        except Exception as exc:
            logger.debug("OpenAI image generation failed", exc_info=True)
            return error_response(
                error=f"OpenAI image generation failed: {exc}",
                error_type="api_error",
                provider="openai",
                model=tier_id,
                prompt=prompt,
                aspect_ratio=aspect,
            )

        data = getattr(response, "data", None) or []
        if not data:
            return error_response(
                error="OpenAI returned no image data",
                error_type="empty_response",
                provider="openai",
                model=tier_id,
                prompt=prompt,
                aspect_ratio=aspect,
            )

        first = data[0]
        b64 = getattr(first, "b64_json", None)
        url = getattr(first, "url", None)
        revised_prompt = getattr(first, "revised_prompt", None)

        if b64:
            try:
                saved_path = save_b64_image(b64, prefix=f"openai_{tier_id}")
            except Exception as exc:
                return error_response(
                    error=f"Could not save image to cache: {exc}",
                    error_type="io_error",
                    provider="openai",
                    model=tier_id,
                    prompt=prompt,
                    aspect_ratio=aspect,
                )
            image_ref = str(saved_path)
        elif url:
            # Defensive — gpt-image-2 returns b64 today, but fall back
            # gracefully if the API ever changes.
            image_ref = url
        else:
            return error_response(
                error="OpenAI response contained neither b64_json nor URL",
                error_type="empty_response",
                provider="openai",
                model=tier_id,
                prompt=prompt,
                aspect_ratio=aspect,
            )

        extra: Dict[str, Any] = {"size": size, "quality": meta["quality"]}
        if revised_prompt:
            extra["revised_prompt"] = revised_prompt

        return success_response(
            image=image_ref,
            model=tier_id,
            prompt=prompt,
            aspect_ratio=aspect,
            provider="openai",
            extra=extra,
        )


# ---------------------------------------------------------------------------
# Plugin entry point
# ---------------------------------------------------------------------------


def register(ctx) -> None:
    """Plugin entry point — wire ``OpenAIImageGenProvider`` into the registry."""
    ctx.register_image_gen_provider(OpenAIImageGenProvider())