- 28 Apr, 2026 2 commits
-
-
Oganneson authored
Closes #1957

The OAuth path forwards client requests to chatgpt.com/backend-api/codex/responses, where applyCodexOAuthTransform forces store=false (chatgpt.com's codex backend rejects store=true). Reasoning items emitted under store=false are never persisted upstream, so any rs_* reference that a client carries forward in a subsequent input[] array triggers a guaranteed upstream 404:

    Item with id 'rs_...' not found. Items are not persisted when `store` is set to false. Try again with `store` set to true, or remove this item from your input.

sub2api wraps this as a 502 "Upstream request failed", and the conversation breaks on every multi-turn /v1/responses request that uses reasoning + tools (reproducible with gpt-5.5; gpt-5.4 happens to dodge it because the upstream does not emit reasoning items for that model). Affected clients include any that follow the OpenAI Responses API spec and replay prior assistant items verbatim — in practice this hit OpenClaw and similar agent harnesses on every turn ≥ 2 with tool use.

The fix: in filterCodexInput, drop input items with type == "reasoning" entirely. The model never reads reasoning summary text from input (only encrypted_content can carry reasoning context across turns, and chatgpt.com under store=false does not emit it), so this is a no-op for the model itself and a clean removal of unreachable upstream lookups.

Scope is intentionally narrow:

* Only OAuth account requests (account.Type == AccountTypeOAuth) reach applyCodexOAuthTransform / filterCodexInput.
* API-key accounts going to api.openai.com/v1/responses are unaffected (store=true works there, rs_* persists, multi-turn already works).
* Anthropic / Gemini platform groups go through different transforms and are unaffected.
* /v1/chat/completions is unaffected (no reasoning items).
* item_reference items (a different type) are unaffected — only type == "reasoning" is dropped.
Verification:

* Existing tests pass: go test ./internal/service/ -run "Codex|Tool|OAuth"
* A new regression test asserts reasoning items are dropped under both preserveReferences=true and preserveReferences=false.
* End-to-end repro on gpt-5.5 multi-turn + tools: pre-patch 502, post-patch 200. Repro on gpt-5.4 unchanged. A three-turn deep loop on gpt-5.5 passes.
ivanvolt authored
-
- 27 Apr, 2026 1 commit
-
-
gaoren002 authored
-
- 24 Apr, 2026 4 commits
- 23 Apr, 2026 5 commits
-
-
gaoren002 authored
-
erio authored
Revert payment/wechat, sora/claude-max cleanup, fork-only migrations, and cosmetic changes that were brought in by the release sync commit. Keep only the channel-monitor related improvements:

- PublicSettingsInjectionPayload named struct with drift test
- ChannelMonitorRunner graceful shutdown in wire
- image_output_price in SupportedModelChip
- Simplified buildSelfNavItems in AppSidebar
- Gateway WARN logs for 503 branches
-
erio authored
- Extract PublicSettingsInjectionPayload named struct with drift test
- Add channel_monitor_default_interval_seconds to SSR injection
- Add image_output_price to SupportedModelChip
- Simplify AppSidebar buildSelfNavItems (admins see available channels)
- Add gateway WARN logs for 503 no-available-accounts branches
- Wire ChannelMonitorRunner into provideCleanup for graceful shutdown
- Add migrations 130/131 (CC template userid fix + mimicry field cleanup)
- Clean up fork-only features (sora, claude max simulation, client affinity)
- Remove ~320 obsolete i18n keys
- Add codexUsage utility, WechatServiceButton, BulkEditAccountModal
- Tidy go.sum
-
shaw authored
-
meteor041 authored
-
- 20 Apr, 2026 1 commit
-
-
erio authored
- backend: remove the built-in mappings and DefaultModels entries for gpt-5 / 5.1 / 5.1-codex / 5.1-codex-max / 5.1-codex-mini / 5.2-codex / 5.4-nano
- backend: change the normalizeCodexModel default fallback from gpt-5.1 to gpt-5.4; gpt-5.3-codex-spark keeps its own mapping
- backend: fix isOpenAIGPT54Model and shouldAutoInjectPromptCacheKeyForCompat misclassifying claude / gpt-4o (they previously relied on gpt-5.1 as an implicit sentinel for the non-GPT family; after this change an explicit prefix guard is required)
- backend: remove the now-unreachable fallback prices and switch branches in billing_service
- frontend: remove the retired models from the whitelist, the OpenCode config, and the preset mappings
- update all affected unit tests accordingly

Refs: #1758; parallels upstream #1759 but adds downstream guard fixes
-
- 11 Apr, 2026 1 commit
-
-
shuanbao0 authored
Building on the isResponsesShape short-circuit path from the previous commit, also strip the top-level Responses API parameters that Cursor's cloud side sends but the Codex upstream uniformly rejects:

- prompt_cache_retention
- safety_identifier
- metadata
- stream_options

Additional root cause: this raw-body passthrough path skips ChatCompletionsRequest deserialization in order to preserve Cursor's input array structure as a whole, so parameters with no corresponding field in the Go structs are forwarded to the upstream verbatim, and the upstream responds with:

    Unsupported parameter: <field>

The regular Chat Completions conversion path naturally drops unknown fields via ChatCompletionsRequest and is unaffected; here we filter explicitly with sjson.DeleteBytes inside the isResponsesShape branch only, keeping the scope minimal. The strip list matches the semantics of unsupportedFields at openai_gateway_service.go:2034.

Also add prompt_cache_retention to the OAuth fallback strip list in applyCodexOAuthTransform, as defense in depth for that function's other OAuth call sites (currently only the Cursor path's short-circuit strips it earlier, but keeping this extra layer is safer).

Tests:
- TestCursorMixedShape_StripsUnsupportedFields: verifies all four fields are stripped
- TestApplyCodexOAuthTransform_StripsPromptCacheRetention: OAuth fallback path

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
-
- 07 Apr, 2026 1 commit
-
-
Alex authored
-
- 24 Mar, 2026 1 commit
-
-
InCerry authored
-
- 20 Mar, 2026 1 commit
-
-
Remx authored
- Add gpt-5.4-mini/nano model recognition and normalization, and extend the default model list
- Add gpt-5.4-mini/nano input / cache-hit / output prices and billing fallback logic
- Sync the frontend model whitelist and OpenCode config
- Add service tier (priority/flex) billing regression tests
-
- 19 Mar, 2026 1 commit
-
-
Remx authored
- Add gpt-5.4-mini/nano model recognition and normalization, and extend the default model list
- Add gpt-5.4-mini/nano input / cache-hit / output prices and billing fallback logic
- Sync the frontend model whitelist and OpenCode config
- Add service tier (priority/flex) billing regression tests
-
- 16 Mar, 2026 1 commit
-
-
Elysia authored
OAuth upstreams (ChatGPT) reject requests containing role:"system" in the input array with HTTP 400 "System messages are not allowed". Extract such items before forwarding and merge their content into the top-level instructions field, prepending it to any existing value.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
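The extract-and-merge step can be sketched as below (a simplification: the real input items are richer structures, and the field access here assumes a flat string content for illustration):

```go
package main

import "strings"

// hoistSystemMessages removes role:"system" items from input and prepends
// their text to instructions, since ChatGPT OAuth upstreams reject system
// roles in the input array with HTTP 400.
func hoistSystemMessages(instructions string, input []map[string]any) (string, []map[string]any) {
	var hoisted []string
	kept := input[:0]
	for _, item := range input {
		if role, _ := item["role"].(string); role == "system" {
			if text, _ := item["content"].(string); text != "" {
				hoisted = append(hoisted, text)
			}
			continue // do not forward system items upstream
		}
		kept = append(kept, item)
	}
	if len(hoisted) > 0 {
		if instructions != "" {
			hoisted = append(hoisted, instructions)
		}
		instructions = strings.Join(hoisted, "\n\n")
	}
	return instructions, kept
}
```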
-
- 14 Mar, 2026 1 commit
-
-
ius authored
-
- 12 Mar, 2026 1 commit
-
-
yexueduxing authored
-
- 11 Mar, 2026 1 commit
-
-
CoolCoolTomato authored
-
- 10 Mar, 2026 1 commit
-
-
rickylin047 authored
The ChatGPT backend-api codex/responses endpoint requires `input` to be an array, but the OpenAI Responses API spec allows it to be a plain string. When a client sends a string input, sub2api now converts it to the expected message array format. Empty/whitespace-only strings become an empty array to avoid triggering a 400 "Input must be a list" error.
-
- 07 Mar, 2026 1 commit
-
-
shaw authored
-
- 06 Mar, 2026 2 commits
-
-
神乐 authored
-
yangjianbo authored
- Add gpt-5.4 model recognition and normalization, and extend the default model list
- Add gpt-5.4 input / cache-hit / output prices and billing fallback logic
- Sync the frontend model whitelist and the OpenCode context windows (1050000/128000)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

(cherry picked from commit 924476dcac6181cd0f3ee731ec7b73672ff03793)
-
- 16 Feb, 2026 1 commit
-
-
0-don authored
-
- 13 Feb, 2026 1 commit
-
-
yangjianbo authored
-
- 07 Feb, 2026 3 commits
-
-
erio authored
Key changes:

- Upgrade model mapping: Opus 4.5 → Opus 4.6-thinking with precise matching
- Unified rate limiting: scope-level → model-level with Redis snapshot sync
- Load-balanced scheduling by call count with smart retry mechanism
- Force cache billing support
- Model identity injection in prompts with leak prevention
- Thinking mode auto-handling (max_tokens/budget_tokens fix)
- Frontend: whitelist mode toggle, model mapping validation, status indicators
- Gemini session fallback with Redis Trie O(L) matching
- Ops: enhanced concurrency monitoring, account availability, retry logic
- Migration scripts: 049-051 for model mapping unification
-
yangjianbo authored
- Stop fetching opencode codex_header.txt from GitHub
- Remove the ~/.opencode cache and the async refresh logic
- All instructions now uniformly use the built-in codex_cli_instructions.md
-
yangjianbo authored
- Codex CLI requests use only the built-in instructions; the opencode cache is no longer read or refetched
- Add gateway.force_codex_cli (environment variable GATEWAY_FORCE_CODEX_CLI)
- When ForceCodexCLI=true, upstream forwarding forces User-Agent=codex_cli_rs/0.0.0
- Update the deploy example configs
-
- 06 Feb, 2026 1 commit
-
-
yangjianbo authored
-
- 05 Feb, 2026 1 commit
-
-
yangjianbo authored
-
- 03 Feb, 2026 1 commit
-
-
liuxiongfeng authored
- Change the applyCodexOAuthTransform signature to take an isCodexCLI parameter
- Remove the && !isCodexCLI condition so all OAuth requests are handled uniformly
- Add the applyInstructions/applyCodexCLIInstructions/applyOpenCodeInstructions helpers
- Add isInstructionsEmpty to check whether the instructions field is empty
- Add test cases for both the Codex CLI and non-Codex-CLI scenarios

Resulting behavior:
- Codex CLI + instructions present: unchanged
- Codex CLI + instructions missing: fill in the opencode instructions
- Non-Codex CLI: overwrite with the opencode instructions
-
- 02 Feb, 2026 1 commit
-
-
song authored
-
- 17 Jan, 2026 1 commit
-
-
IanShaw027 authored
- codex_transform: filter out invalid tools, supporting both Responses-style and ChatCompletions-style formats
- tool_corrector: add a fetch tool mapping; normalize bash/edit parameter naming
-
- 14 Jan, 2026 2 commits
-
-
yangjianbo authored
-
yangjianbo authored
Avoid the upstream error where store must be false; only write store back when it is missing or set to true.

Tests:
- go test ./internal/service -run TestApplyCodexOAuthTransform
- make test-backend (golangci-lint run separately)
-
- 13 Jan, 2026 2 commits
-
-
yangjianbo authored
- Harden function_call_output chain-continuation validation and reference matching
- Force store=true in chain-continuation scenarios; avoid side effects when filtering input
- Add unit tests for chain-continuation detection and filtering

Tests: go test ./...
-
ianshaw authored
## Main changes

1. **Model normalization extended to all accounts**
   - Apply Codex model normalization (e.g. gpt-5-nano → gpt-5.1) to all OpenAI account types
   - No longer limited to OAuth non-CLI requests
   - Fixes model compatibility when Codex CLI is used with ChatGPT accounts

2. **reasoning.effort parameter normalization**
   - Automatically convert `minimal` to `none`
   - Fixes gpt-5.1 not supporting the `minimal` value

3. **Session/Conversation ID fallback mechanism**
   - Extract session_id/conversation_id from multiple request-body fields
   - Priority: prompt_cache_key → session_id → conversation_id → previous_response_id
   - Supports session affinity for Codex CLI

4. **Tool call ID fallback**
   - Use the id field as a fallback when call_id is empty
   - Ensures tool call outputs match correctly
   - Keep item_reference items

5. **Header cleanup**
   - Add conversation_id to the allowed headers
   - Remove the logic that deleted session headers

## Related issues

- See the discussion of item_reference in OpenCode issue #3118
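Two of the smaller pieces above are simple enough to sketch directly (hypothetical helper names; the real code reads these values out of the request body rather than taking them as parameters):

```go
package main

// resolveSessionID picks a session identifier from the request-body fields
// in the fallback priority order:
// prompt_cache_key → session_id → conversation_id → previous_response_id.
func resolveSessionID(promptCacheKey, sessionID, conversationID, previousResponseID string) string {
	for _, v := range []string{promptCacheKey, sessionID, conversationID, previousResponseID} {
		if v != "" {
			return v
		}
	}
	return ""
}

// normalizeReasoningEffort maps "minimal" to "none", the nearest value
// gpt-5.1 accepts; all other effort values pass through unchanged.
func normalizeReasoningEffort(effort string) string {
	if effort == "minimal" {
		return "none"
	}
	return effort
}
```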
-