1. 29 Apr, 2026 1 commit
  2. 28 Apr, 2026 2 commits
    • Oganneson
      fix(openai): drop reasoning items from /v1/responses input on OAuth path · 7452fad8
      Oganneson authored
      Closes #1957
      
      The OAuth path forwards client requests to chatgpt.com/backend-api/codex/responses,
      where applyCodexOAuthTransform forces store=false (chatgpt.com's codex backend
      rejects store=true). Reasoning items emitted under store=false are NEVER
      persisted upstream, so any rs_* reference that a client carries forward in a
      subsequent input[] array triggers a guaranteed upstream 404:
      
          Item with id 'rs_...' not found. Items are not persisted when `store` is
          set to false. Try again with `store` set to true, or remove this item
          from your input.
      
      sub2api wraps this as 502 "Upstream request failed" and the conversation
      breaks on every multi-turn /v1/responses request that uses reasoning + tools
      (reproducible with gpt-5.5; gpt-5.4 happens to dodge it because the upstream
      does not emit reasoning items for that model).
      
      Affected clients include any that follow the OpenAI Responses API spec and
      replay prior assistant items verbatim — in practice this hit OpenClaw and
      similar agent harnesses on every turn ≥2 with tool use.
      
      The fix: in filterCodexInput, drop input items with type == "reasoning"
      entirely. The model never reads reasoning summary text from input (only
      encrypted_content can carry reasoning context across turns, and chatgpt.com
      under store=false does not emit it), so this is a no-op for the model itself
      and a clean removal of unreachable upstream lookups.
      
      Scope is intentionally narrow:
        * Only OAuth account requests (account.Type == AccountTypeOAuth) reach
          applyCodexOAuthTransform / filterCodexInput.
        * API-key accounts going to api.openai.com/v1/responses are unaffected
          (store=true works there, rs_* persists, multi-turn already works).
        * Anthropic / Gemini platform groups go through different transforms and
          are unaffected.
        * /v1/chat/completions is unaffected (no reasoning items).
        * item_reference items (different type) are unaffected — only type ==
          "reasoning" is dropped.
      
      Verification:
        * Existing tests pass: go test ./internal/service/ -run 'Codex|Tool|OAuth'
        * New regression test asserts reasoning items are dropped under both
          preserveReferences=true and preserveReferences=false.
        * End-to-end repro on gpt-5.5 multi-turn + tools: pre-patch 502, post-patch
          200. Repro on gpt-5.4 unchanged. Three-turn deep loop on gpt-5.5 passes.
    • ivanvolt
      04b2866f
  3. 27 Apr, 2026 6 commits
  4. 24 Apr, 2026 4 commits
  5. 23 Apr, 2026 4 commits
    • gaoren002
      5f418997
    • erio
      revert: remove fork-only changes from release sync · 67518a59
      erio authored
      Revert payment/wechat, sora/claude-max cleanup, fork-only migrations,
      and cosmetic changes that were brought in by the release sync commit.
      Keep only channel-monitor related improvements:
      - PublicSettingsInjectionPayload named struct with drift test
      - ChannelMonitorRunner graceful shutdown in wire
      - image_output_price in SupportedModelChip
      - Simplified buildSelfNavItems in AppSidebar
      - Gateway WARN logs for 503 branches
    • erio
      sync: bring over remaining release/custom-0.1.115 changes · 748a84d8
      erio authored
      - Extract PublicSettingsInjectionPayload named struct with drift test
      - Add channel_monitor_default_interval_seconds to SSR injection
      - Add image_output_price to SupportedModelChip
      - Simplify AppSidebar buildSelfNavItems (admins see available channels)
      - Add gateway WARN logs for 503 no-available-accounts branches
      - Wire ChannelMonitorRunner into provideCleanup for graceful shutdown
      - Add migrations 130/131 (CC template userid fix + mimicry field cleanup)
      - Clean up fork-only features (sora, claude max simulation, client affinity)
      - Remove ~320 obsolete i18n keys
      - Add codexUsage utility, WechatServiceButton, BulkEditAccountModal
      - Tidy go.sum
    • meteor041
      fix openai image request handling · 00778dca
      meteor041 authored
  6. 20 Apr, 2026 1 commit
    • erio
      fix(openai): remove retired Codex models and fix normalization-fallback side effects · bbc4aed3
      erio authored
      - backend: remove the built-in mappings and DefaultModels entries for gpt-5 / 5.1 / 5.1-codex / 5.1-codex-max / 5.1-codex-mini / 5.2-codex / 5.4-nano
      - backend: change the normalizeCodexModel default fallback from gpt-5.1 to gpt-5.4; gpt-5.3-codex-spark keeps its own standalone mapping
      - backend: fix isOpenAIGPT54Model and shouldAutoInjectPromptCacheKeyForCompat misclassifying claude / gpt-4o models (they previously relied on gpt-5.1 as an implicit sentinel for non-GPT families; after this change an explicit prefix guard is required)
      - backend: clean up the now-unreachable fallback prices and switch branches in billing_service
      - frontend: remove the retired models from the whitelist, OpenCode config, and preset mappings
      - update all affected unit tests accordingly
      
      Refs: #1758, parallels upstream #1759 but adds downstream guard fixes
  7. 14 Apr, 2026 1 commit
    • shuanbao0
      fix(gateway): strip Responses API params unsupported by Codex from the Cursor raw-body passthrough path · e1fab9b3
      shuanbao0 authored and 陈曦 committed
      
      Building on the isResponsesShape short-circuit path added in the previous
      commit, additionally strip the top-level Responses API parameters that
      Cursor's cloud sends along and that the Codex upstream uniformly rejects:
      
        - prompt_cache_retention
        - safety_identifier
        - metadata
        - stream_options
      
      Root cause, continued: to preserve the overall structure of Cursor's input
      array, this raw-body passthrough path no longer goes through
      ChatCompletionsRequest deserialization, so parameters that have no
      corresponding field in the Go structs are forwarded verbatim to the
      upstream, which returns:
          Unsupported parameter: <field>
      The regular Chat Completions conversion path naturally drops unknown fields
      via ChatCompletionsRequest and is unaffected; here we filter explicitly with
      sjson.DeleteBytes inside the isResponsesShape branch only, keeping the scope
      minimal. The strip list is aligned with the semantics of unsupportedFields
      in openai_gateway_service.go:2034.
      
      Also append prompt_cache_retention to the OAuth fallback strip list in
      applyCodexOAuthTransform, as defense in depth for every other OAuth call
      site of that function (currently only the Cursor path's short-circuit has
      already stripped it, but keeping this extra layer is safer).
      
      Tests:
      - TestCursorMixedShape_StripsUnsupportedFields — verifies all 4 fields are stripped
      - TestApplyCodexOAuthTransform_StripsPromptCacheRetention — OAuth fallback path
      Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  8. 11 Apr, 2026 1 commit
    • shuanbao0
      fix(gateway): strip Responses API params unsupported by Codex from the Cursor raw-body passthrough path · 422e25c9
      shuanbao0 authored
      
      Building on the isResponsesShape short-circuit path added in the previous
      commit, additionally strip the top-level Responses API parameters that
      Cursor's cloud sends along and that the Codex upstream uniformly rejects:
      
        - prompt_cache_retention
        - safety_identifier
        - metadata
        - stream_options
      
      Root cause, continued: to preserve the overall structure of Cursor's input
      array, this raw-body passthrough path no longer goes through
      ChatCompletionsRequest deserialization, so parameters that have no
      corresponding field in the Go structs are forwarded verbatim to the
      upstream, which returns:
          Unsupported parameter: <field>
      The regular Chat Completions conversion path naturally drops unknown fields
      via ChatCompletionsRequest and is unaffected; here we filter explicitly with
      sjson.DeleteBytes inside the isResponsesShape branch only, keeping the scope
      minimal. The strip list is aligned with the semantics of unsupportedFields
      in openai_gateway_service.go:2034.
      
      Also append prompt_cache_retention to the OAuth fallback strip list in
      applyCodexOAuthTransform, as defense in depth for every other OAuth call
      site of that function (currently only the Cursor path's short-circuit has
      already stripped it, but keeping this extra layer is safer).
      
      Tests:
      - TestCursorMixedShape_StripsUnsupportedFields — verifies all 4 fields are stripped
      - TestApplyCodexOAuthTransform_StripsPromptCacheRetention — OAuth fallback path
      Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
  9. 03 Apr, 2026 1 commit
  10. 24 Mar, 2026 1 commit
  11. 20 Mar, 2026 1 commit
    • Remx
      feat(openai): add gpt-5.4-mini/nano model support and pricing config · c810cad7
      Remx authored
      - Wire up gpt-5.4-mini/nano model recognition and normalization, and extend the default model list
      - Add gpt-5.4-mini/nano input / cache-hit / output prices and billing fallback logic
      - Sync the frontend model whitelist and OpenCode config
      - Add service tier (priority/flex) billing regression tests
  12. 19 Mar, 2026 1 commit
    • Remx
      feat(openai): add gpt-5.4-mini/nano model support and pricing config · 42d73118
      Remx authored
      - Wire up gpt-5.4-mini/nano model recognition and normalization, and extend the default model list
      - Add gpt-5.4-mini/nano input / cache-hit / output prices and billing fallback logic
      - Sync the frontend model whitelist and OpenCode config
      - Add service tier (priority/flex) billing regression tests
  13. 16 Mar, 2026 1 commit
  14. 14 Mar, 2026 1 commit
  15. 12 Mar, 2026 1 commit
  16. 10 Mar, 2026 1 commit
    • rickylin047
      fix(openai): convert string input to array for Codex OAuth responses endpoint · 9f1f203b
      rickylin047 authored
      The ChatGPT backend-api codex/responses endpoint requires `input` to be
      an array, but the OpenAI Responses API spec allows it to be a plain string.
      When a client sends a string input, sub2api now converts it to the expected
      message array format. Empty/whitespace-only strings become an empty array
      to avoid triggering a 400 "Input must be a list" error.
  17. 07 Mar, 2026 1 commit
  18. 06 Mar, 2026 2 commits
  19. 13 Feb, 2026 1 commit
  20. 07 Feb, 2026 3 commits
    • erio
      feat(antigravity): comprehensive enhancements - model mapping, rate limiting, scheduling & ops · 5e98445b
      erio authored
      Key changes:
      - Upgrade model mapping: Opus 4.5 → Opus 4.6-thinking with precise matching
      - Unified rate limiting: scope-level → model-level with Redis snapshot sync
      - Load-balanced scheduling by call count with smart retry mechanism
      - Force cache billing support
      - Model identity injection in prompts with leak prevention
      - Thinking mode auto-handling (max_tokens/budget_tokens fix)
      - Frontend: whitelist mode toggle, model mapping validation, status indicators
      - Gemini session fallback with Redis Trie O(L) matching
      - Ops: enhanced concurrency monitoring, account availability, retry logic
      - Migration scripts: 049-051 for model mapping unification
    • yangjianbo
      test(codex): remove obsolete opencode cache tests · 4e01126f
      yangjianbo authored
      Remove the setupCodexCache calls and helper functions that are no longer
      needed (the code no longer fetches from origin or reads/writes the cache)
    • yangjianbo
      feat(codex): remove opencode instruction fetching and caching · 55b56328
      yangjianbo authored
      - No longer fetch opencode codex_header.txt from GitHub
      - Remove the ~/.opencode cache and its async refresh logic
      - All instructions now uniformly use the built-in codex_cli_instructions.md
  21. 05 Feb, 2026 1 commit
  22. 03 Feb, 2026 1 commit
    • liuxiongfeng
      fix(openai): unify OAuth instructions handling and fix the Codex CLI 400 error · 9a48b2e9
      liuxiongfeng authored
      - Change the applyCodexOAuthTransform signature to take an isCodexCLI parameter
      - Remove the && !isCodexCLI condition so all OAuth requests are handled uniformly
      - Add the applyInstructions / applyCodexCLIInstructions / applyOpenCodeInstructions helpers
      - Add isInstructionsEmpty to check whether the instructions field is empty
      - Add test cases for both Codex CLI and non-Codex CLI scenarios
      
      Resulting logic:
      - Codex CLI with instructions: left unchanged
      - Codex CLI without instructions: opencode instructions are filled in
      - Non-Codex CLI: overwritten with opencode instructions
  23. 02 Feb, 2026 1 commit
  24. 17 Jan, 2026 1 commit
  25. 14 Jan, 2026 1 commit
    • yangjianbo
      fix(gateway): force store=false on OAuth requests · 3663951d
      yangjianbo authored
      Avoids the upstream error that requires store to be false.
      
      Only writes store back when it is missing or set to true.
      
      Tests: go test ./internal/service -run TestApplyCodexOAuthTransform
      
      Tests: make test-backend (golangci-lint run separately)