1. 30 Apr, 2026 5 commits
  2. 29 Apr, 2026 9 commits
    • github-actions[bot]
      8ad099ba
    • shaw
      fix(scheduler): resolve SetSnapshot race conditions and remove usage throttle · 8bf2a7b8
      shaw authored
      Backend: Fix three race conditions in SetSnapshot that caused account
      scheduling anomalies and broken sticky sessions:
      - Use a Lua CAS script for atomic version activation (sketched below),
        preventing version rollback when concurrent goroutines write snapshots
        simultaneously
      - Add UnlockBucket to release the rebuild lock immediately after completion
        instead of waiting for the 30s TTL to expire
      - Replace the immediate DEL of old snapshots with a 60s EXPIRE grace period,
        preventing readers from hitting an empty ZRANGE during version switches
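      A minimal sketch of the CAS activation above, assuming a Redis version key and
      the go-redis client; the key name and helper are illustrative, not the
      project's exact code:

        // activateVersionCAS only ever advances the active snapshot version, so a
        // slow goroutine finishing late cannot roll the pointer back to an older
        // snapshot.
        var activateVersionCAS = redis.NewScript(`
            local current = tonumber(redis.call("GET", KEYS[1]) or "0")
            local candidate = tonumber(ARGV[1])
            if candidate > current then
                redis.call("SET", KEYS[1], candidate)
                return 1
            end
            return 0
        `)

        func activateSnapshot(ctx context.Context, rdb *redis.Client, bucket string, version int64) (bool, error) {
            res, err := activateVersionCAS.Run(ctx, rdb, []string{"snapshot:version:" + bucket}, version).Int()
            return res == 1, err
        }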
      
      Frontend: Remove serial queue throttle (1-2s delay per request) from
      usage loading since backend now uses passive sampling. All usage
      requests execute immediately in parallel.
      8bf2a7b8
    • shaw
      40feb86b
    • shaw
      fix(lint): check type assertion error in codex transform test · 5e54d492
      shaw authored
      The errcheck linter flagged an unchecked type assertion on
      item["type"].(string). Use the two-value form with require.True
      to satisfy the linter and fail clearly on unexpected types.
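      A minimal sketch of the two-value form; the expected value in the final
      assertion is illustrative, not copied from the actual test:

        itemType, ok := item["type"].(string)
        require.True(t, ok, "expected item[\"type\"] to be a string, got %T", item["type"])
        require.Equal(t, "reasoning", itemType) // illustrative expected value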
      5e54d492
    • KnowSky404
      fix: format ingress continuation test · f7c13af1
      KnowSky404 authored
      f7c13af1
    • KnowSky404
    • shaw
      fix(vertex): audit fixes for Vertex Service Account feature (#1977) · 93d91e20
      shaw authored
      - Security: force token_uri to the Google default, preventing SSRF via crafted service account JSON (sketched after this list)
      - Dedup: extract shared getVertexServiceAccountAccessToken() to eliminate ~35 lines of duplication between ClaudeTokenProvider and GeminiTokenProvider
      - Fix: apply model mapping + Vertex model ID normalization in forward_as_responses and forward_as_chat_completions paths
      - Fix: exclude service_account from AI Studio endpoint selection (Vertex cannot serve generativelanguage.googleapis.com)
      - Feature: add model restriction/mapping UI for service_account in EditAccountModal
      - Dedup: extract VERTEX_LOCATION_OPTIONS to shared constants
      - i18n: replace all hardcoded Chinese strings in Vertex UI with translation keys
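      A minimal sketch of the token_uri hardening from the Security bullet, assuming
      the service account JSON is unmarshalled before building the token source; the
      struct and constant here are illustrative, not the project's exact code:

        // Always exchange the JWT at Google's public token endpoint, ignoring any
        // token_uri embedded in the uploaded key, so a crafted JSON cannot redirect
        // the token request to an attacker-controlled or internal URL.
        const googleTokenURL = "https://oauth2.googleapis.com/token"

        var sa struct {
            ClientEmail string `json:"client_email"`
            PrivateKey  string `json:"private_key"`
            TokenURI    string `json:"token_uri"`
        }
        if err := json.Unmarshal(serviceAccountJSON, &sa); err != nil {
            return nil, fmt.Errorf("parse service account: %w", err)
        }
        sa.TokenURI = googleTokenURL // force the Google default regardless of input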
      93d91e20
    • alfadb
      fix(gateway): sanitize stream errors to avoid leaking infrastructure topology · d78478e8
      alfadb authored
      (*net.OpError).Error() concatenates Source/Addr fields, so the previous
      disconnectMsg surfaced internal source IP/port and upstream server address
      to clients via SSE error frames and UpstreamFailoverError.ResponseBody
      (reported by @Wei-Shaw on PR #2066).
      
      - Add sanitizeStreamError that maps known errors (io.ErrUnexpectedEOF,
        context.Canceled, syscall.ECONNRESET/EPIPE/ETIMEDOUT/...) to fixed
        descriptions and falls back to a generic placeholder, with an explicit
        *net.OpError branch that drops Source/Addr fields entirely (sketched
        after this list).
      - Use sanitized message in client-facing disconnectMsg; full ev.err is
        still preserved in the existing operator log line for diagnosis.
      - Tests cover net.OpError redaction, the failover ResponseBody path, and
        every known sanitized error mapping.
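      A minimal sketch of the mapping, assuming only standard-library error types;
      the real sanitizeStreamError covers more cases and its wording differs:

        func sanitizeStreamError(err error) string {
            var opErr *net.OpError
            switch {
            case errors.As(err, &opErr):
                // Keep only the operation name; Source/Addr are dropped entirely.
                return "upstream network error during " + opErr.Op
            case errors.Is(err, io.ErrUnexpectedEOF):
                return "upstream closed the stream unexpectedly"
            case errors.Is(err, context.Canceled):
                return "request canceled"
            case errors.Is(err, syscall.ECONNRESET), errors.Is(err, syscall.EPIPE), errors.Is(err, syscall.ETIMEDOUT):
                return "upstream connection reset or timed out"
            default:
                return "upstream stream error" // generic placeholder, no topology details
            }
        }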
      d78478e8
    • erio
      feat(ops): allow retention days = 0 to wipe table on each scheduled cleanup · 4b6954f9
      erio authored
      Background
      
      The ops cleanup task currently rejects retention days < 1 in both validate
      and normalize, so operators who want minimal-history setups (e.g. high
      churn deployments that prefer near-realtime cleanup) cannot express that
      intent through the UI. The only options are 1+ days, which keeps at least
      24h of history regardless of cron frequency.
      
      Purpose
      
      Let admins set retention days to 0, meaning "every scheduled cleanup
      run wipes the corresponding table(s) entirely". Combined with a more
      frequent cron (e.g. `0 * * * *`) this effectively yields rolling cleanup.
      
      Changes
      
      Backend
      
      - service/ops_settings.go: validate accepts [0, 365]; normalize only
        refills default 30 when value is < 0 (negative is treated as legacy
        bad data, 0 is honoured)
      - service/ops_cleanup_service.go: introduce `opsCleanupPlan(now, days)`
        returning `(cutoff, truncate, ok)` (sketched after this list). days==0
        returns truncate=true and short-circuits to a new `truncateOpsTable`
        helper that uses `TRUNCATE TABLE` (O(1), no WAL, no VACUUM pressure).
        days>0 keeps the existing batched DELETE path unchanged. Empty tables
        skip TRUNCATE to avoid the ACCESS EXCLUSIVE lock entirely
      - Extract `isMissingRelationError` helper to dedupe the "table not
        yet created" tolerance shared by both delete and truncate paths
      - Add unit tests for `opsCleanupPlan` (three branches) and
        `isMissingRelationError`
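      A minimal sketch of the three branches of `opsCleanupPlan`, matching the
      signature above; the negative-days branch is an assumption about how legacy
      bad data is rejected:

        func opsCleanupPlan(now time.Time, days int) (cutoff time.Time, truncate bool, ok bool) {
            switch {
            case days < 0:
                return time.Time{}, false, false // legacy bad data: skip, let normalize refill the default
            case days == 0:
                return time.Time{}, true, true // wipe the table on every scheduled run
            default:
                return now.AddDate(0, 0, -days), false, true // batched DELETE of rows older than cutoff
            }
        }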
      
      Frontend
      
      - OpsSettingsDialog.vue: validation accepts [0, 365]; input min=0
      - i18n (zh/en): hint mentions "0 = wipe all on every cleanup",
        validation message updated to 0-365 range
      
      Trade-offs
      
      - TRUNCATE requires ACCESS EXCLUSIVE lock briefly, but ops tables only
        have the cleanup task as a writer, so the lock is invisible to other
        workloads
      - Empty-table guard avoids the lock when there is nothing to clean
      - Negative values are still treated as legacy bad data and replaced
        with default 30 to preserve compatibility
      4b6954f9
  3. 28 Apr, 2026 8 commits
    • Oganneson
      fix(openai): drop reasoning items from /v1/responses input on OAuth path · 7452fad8
      Oganneson authored
      Closes #1957
      
      The OAuth path forwards client requests to chatgpt.com/backend-api/codex/responses,
      where applyCodexOAuthTransform forces store=false (chatgpt.com's codex backend
      rejects store=true). Reasoning items emitted under store=false are NEVER
      persisted upstream, so any rs_* reference that a client carries forward in a
      subsequent input[] array triggers a guaranteed upstream 404:
      
          Item with id 'rs_...' not found. Items are not persisted when `store` is
          set to false. Try again with `store` set to true, or remove this item
          from your input.
      
      sub2api wraps this as 502 "Upstream request failed" and the conversation
      breaks on every multi-turn /v1/responses request that uses reasoning + tools
      (reproducible with gpt-5.5; gpt-5.4 happens to dodge it because the upstream
      does not emit reasoning items for that model).
      
      Affected clients include any that follow the OpenAI Responses API spec and
      replay prior assistant items verbatim — in practice this hit OpenClaw and
      similar agent harnesses on every turn ≥2 with tool use.
      
      The fix: in filterCodexInput, drop input items with type == "reasoning"
      entirely. The model never reads reasoning summary text from input (only
      encrypted_content can carry reasoning context across turns, and chatgpt.com
      under store=false does not emit it), so this is a no-op for the model itself
      and a clean removal of unreachable upstream lookups.
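      A minimal sketch of the drop, assuming input items decoded as generic maps;
      the helper name is illustrative and the real change lives inside
      filterCodexInput:

        func dropReasoningItems(input []map[string]any) []map[string]any {
            kept := make([]map[string]any, 0, len(input))
            for _, item := range input {
                if t, _ := item["type"].(string); t == "reasoning" {
                    continue // never persisted under store=false; replaying it guarantees an upstream 404
                }
                kept = append(kept, item)
            }
            return kept
        }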
      
      Scope is intentionally narrow:
        * Only OAuth account requests (account.Type == AccountTypeOAuth) reach
          applyCodexOAuthTransform / filterCodexInput.
        * API-key accounts going to api.openai.com/v1/responses are unaffected
          (store=true works there, rs_* persists, multi-turn already works).
        * Anthropic / Gemini platform groups go through different transforms and
          are unaffected.
        * /v1/chat/completions is unaffected (no reasoning items).
        * item_reference items (different type) are unaffected — only type ==
          "reasoning" is dropped.
      
      Verification:
        * Existing tests pass: go test ./internal/service/ -run 'Codex|Tool|OAuth'
        * New regression test asserts reasoning items are dropped under both
          preserveReferences=true and preserveReferences=false.
        * End-to-end repro on gpt-5.5 multi-turn + tools: pre-patch 502, post-patch
          200. Repro on gpt-5.4 unchanged. Three-turn deep loop on gpt-5.5 passes.
      7452fad8
    • alfadb
      fix(gateway): emit Anthropic-standard SSE error events and failover body · 4c474616
      alfadb authored
      Two follow-ups to PR #2066's failover-wrap fix:
      
      1. Failover ResponseBody (`UpstreamFailoverError.ResponseBody`) was encoded
         as `{"error": "<msg>"}` (string field). `ExtractUpstreamErrorMessage`
         probes for `error.message`, `detail`, or top-level `message` only — so
         `handleFailoverExhausted` and downstream passthrough rules saw an empty
         message, losing the EOF root cause in ops logs. Re-encode as the
         Anthropic standard shape `{"type":"error","error":{"type":"upstream_disconnected","message":"..."}}`.
         (Addresses the inline review comment from copilot-pull-request-reviewer
         on Wei-Shaw/sub2api#2066.)
      
      2. The streaming `event: error` SSE frame for `response_too_large`,
         `stream_read_error`, and `stream_timeout` was non-standard
         (`{"error":"<reason>"}`). Anthropic SDKs (and Claude Code) expect
         `{"type":"error","error":{"type":"...","message":"..."}}` and parse
         `error.type`/`error.message` accordingly. Refactor `sendErrorEvent` to
         take both reason and message, and emit the standard frame so client
         SDKs surface a real diagnostic message instead of a generic stream error.
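      A minimal sketch of the standard-shaped frame; the helper name comes from the
      commit, while the writer plumbing here is an assumption:

        func sendErrorEvent(w io.Writer, flusher http.Flusher, reason, message string) {
            payload, _ := json.Marshal(map[string]any{
                "type": "error",
                "error": map[string]string{
                    "type":    reason,  // e.g. "stream_timeout"
                    "message": message, // diagnostic surfaced by client SDKs
                },
            })
            fmt.Fprintf(w, "event: error\ndata: %s\n\n", payload)
            flusher.Flush()
        }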
      
      This does not by itself prevent task interruption on long-stream EOF
      (SSE has no resume; client-side retry remains the only complete fix), but
      it gives both server-side ops logs and client-side error UIs a meaningful
      upstream message so users know the next step is to retry.
      
      Tests updated to assert the new body shape on both branches plus a new
      assertion that `ExtractUpstreamErrorMessage` returns a non-empty string.
      Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
      4c474616
    • alfadb
      fix(gateway): wrap Anthropic stream EOF as failover error before client output · 63275735
      alfadb authored
      Anthropic streaming path (gateway_service.go) returned a plain error on
      upstream SSE read failure, so the handler-level UpstreamFailoverError check
      never fired and the client received a bare `stream_read_error` event,
      breaking long-running tasks even when no bytes had been written yet.
      
      The most common trigger is HTTP/2 GOAWAY from api.anthropic.com edge
      backends doing graceful rotation: Go's http.Transport surfaces this as
      `unexpected EOF` and never auto-retries.
      
      Mirror what the OpenAI and antigravity gateways already do: when the read
      error happens before any byte has reached the client (`!c.Writer.Written()`),
      return `*UpstreamFailoverError{StatusCode: 502, RetryableOnSameAccount: true}`
      so the handler can retry on the same or another account. After client
      output has begun, SSE has no resume protocol — keep the existing passthrough
      behavior.
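      A minimal sketch of the pre-first-byte branch, assuming a gin-style c.Writer
      and the UpstreamFailoverError fields named above:

        if readErr != nil && !c.Writer.Written() {
            // Nothing has reached the client yet, so the handler can safely retry
            // the whole request on the same or another account.
            return &UpstreamFailoverError{
                StatusCode:             http.StatusBadGateway, // 502
                RetryableOnSameAccount: true,
            }
        }
        // After the first byte, SSE has no resume protocol: keep the existing
        // passthrough behavior.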
      
      Tests cover both branches via streamReadCloser-based fixtures.
      Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
      63275735
    • ivanvolt
      04b2866f
    • 陈曦
      trafficapi commercial migration upgrade · 0d997a20
      陈曦 authored
      0d997a20
    • DaydreamCoding
      feat(openai): full OpenAI Fast/Flex Policy implementation (HTTP + WebSocket + Admin) · 30f55a1f
      DaydreamCoding authored
      Mirroring the Claude BetaPolicy fast-mode filtering implementation, add a
      three-state pass / filter / block policy for the OpenAI upstream service_tier
      field (priority / flex, including client-side "fast" → "priority"
      normalization), covering every OpenAI entry point plus the admin
      configuration entry point.
      
      Backend core
      - Add the SettingKeyOpenAIFastPolicySettings, OpenAIFastPolicyRule, and
        OpenAIFastPolicySettings configuration models; each rule carries the
        service_tier × action × scope × model whitelist × fallback action
        dimensions.
      - SettingService.Get/SetOpenAIFastPolicySettings; when the setting is missing,
        return a built-in default policy (priority is filtered for all models, empty
        whitelist, fallback=pass). Rationale: service_tier=fast is a user-level
        switch orthogonal to the model field, so pinning the default to specific
        model slugs would leave a bypass path of "use gpt-4 + fast to pass priority
        through to the upstream". JSON parse failures no longer fall back silently;
        slog.Warn logs the dirty data so operators can locate it.
      - service_tier normalization (trim + ToLower + fast→priority + a priority/flex
        whitelist) and policy evaluation (evaluateOpenAIFastPolicy) are the single
        source of truth shared by HTTP and WS. Extract the pure function
        evaluateOpenAIFastPolicyWithSettings, paired with a ctx-bound settings
        snapshot (withOpenAIFastPolicyContext / openAIFastPolicySettingsFromContext),
        so long-lived WS sessions prefetch once at the entry point and reuse the
        snapshot for every frame instead of hitting settingService per frame
        (normalization sketched after this list).
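      A minimal sketch of the normalization helper described in the last bullet;
      the function name and exact return shape are assumptions:

        func normalizeServiceTier(raw string) (string, bool) {
            tier := strings.ToLower(strings.TrimSpace(raw))
            if tier == "fast" {
                tier = "priority" // client-side alias, folded in before policy evaluation
            }
            switch tier {
            case "priority", "flex":
                return tier, true // governed by the fast/flex policy
            default:
                return "", false
            }
        }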
      
      HTTP entry points (4)
      - Chat Completions, the Anthropic-compatible Messages endpoint (including a
        second hit via BetaFastMode→priority), native Responses, and Passthrough
        Responses all go through applyOpenAIFastPolicyToBody; filter deletes the
        top-level service_tier via sjson (sketched after this list), block returns
        a 403 forbidden_error JSON.
      - All 4 entry points use the model as seen by the upstream (the slug after
        GetMappedModel + normalizeOpenAIModelForUpstream + Codex OAuth
        normalization), so chat/messages/native /responses/passthrough cannot
        diverge on whitelist matching because of differing model dimensions.
      - On the pass path, the client "fast" alias is also normalized to "priority"
        and written back into the body; otherwise the native /responses and
        passthrough entry points would forward "fast" verbatim to the upstream and
        get a 400/rejection (the chat-completions entry point's
        normalizeResponsesBodyServiceTier already behaved this way).
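      A minimal sketch of the filter action on an entry-point body, assuming raw
      JSON bytes and github.com/tidwall/sjson; the helper name is illustrative and
      the real applyOpenAIFastPolicyToBody also handles pass/block and the model
      whitelist:

        func stripServiceTier(body []byte) ([]byte, error) {
            // DeleteBytes drops only the top-level service_tier field and leaves
            // the rest of the request body untouched.
            return sjson.DeleteBytes(body, "service_tier")
        }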
      
      WebSocket entry points
      - Add applyOpenAIFastPolicyToWSResponseCreate: strictly match
        type="response.create" and handle only the top-level service_tier; filter
        deletes the field via sjson, block returns a typed *OpenAIFastBlockedError.
      - The ingress path calls it inside parseClientPayload; on a block hit it first
        writes a Realtime-style error event and then returns
        OpenAIWSClientCloseError (StatusPolicyViolation=1008), relying on the
        underlying WebSocket Conn.Write's synchronous flush to guarantee the error
        frame arrives before the close.
      - The passthrough path applies the policy to firstClientMessage before
        RunEntry, and wraps ReadFrame with openAIWSPolicyEnforcingFrameConn to
        enforce the policy on every client→upstream frame; later frames without a
        model field fall back to capturedSessionModel. The filter closure also
        watches the session.model field of session.update / session.created frames
        to refresh capturedSessionModel, closing the mid-session bypass of "first
        frame model=gpt-4o (pass) → session.update switches to gpt-5.5 → a
        model-less response.create falls back to gpt-4o".
      - Passthrough billing: requestServiceTier is extracted from firstClientMessage
        only after the policy filter, so on a filter hit
        OpenAIForwardResult.ServiceTier reports nil (default tier), consistent with
        the HTTP entry points (reqBody comes from the post-filter map) and WS
        ingress (payload comes from the post-filter bytes).
      - Error event schema: {event_id: "evt_<32hex>", type: "error",
        error: {type: "forbidden_error", code: "policy_violation", message}},
        compatible with how the OpenAI codex client parses error events.
      
      Admin / Frontend
      - dto.SystemSettings / UpdateSettingsRequest gain an
        openai_fast_policy_settings field (omitempty), wired into bulk GET/PUT.
      - The Settings page's Gateway tab gains a Fast/Flex Policy form card with full
        configuration of service_tier × action × scope × model whitelist × fallback
        action.
      - Frontend guard: the openaiFastPolicyLoaded flag only allows write-back once
        a GET actually returns the field, so a rollout or error cannot overwrite the
        default rules with an empty set; the saveSettings write-back loop skips the
        field, which is handled by dedicated refresh logic; error_message is only
        sent when action=block, matching the backend's omitempty behavior.
      
      Tests
      - HTTP path: openai_fast_policy_test.go covers the default configuration
        (whitelist=[], priority filtered for all models) / block with a custom
        error / scope differentiation / filter removing the field / block leaving
        the body unchanged / block short-circuiting the upstream / Anthropic
        BetaFastMode triggering the OpenAI fast policy.
      - WebSocket path: openai_fast_policy_ws_test.go covers helper units (filter /
        fast→priority normalization / flex passthrough / block typed error / bytes
        unchanged when service_tier is absent / non-response.create frames left
        alone / empty-type frames left alone / event_id+code field assertions /
        tolerance of a non-string service_tier) + a pass-path regression for
        fast-alias normalization + ingress end-to-end (upstream body free of
        service_tier after filter / on block the client receives the error event
        before the 1008 close and the upstream sees zero writes) + passthrough
        capturedSessionModel fallback cases (established on the first frame under a
        whitelist policy, fallback hit when model is missing, the leak documented
        when no fallback exists) + the passthrough session.update / session.created
        capturedSessionModel rotation regression for the mid-session bypass +
        passthrough billing post-filter ServiceTier and idempotent-filter
        regressions.
      Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
      30f55a1f
    • Zven
      3d4ca5e8
    • 陈曦
  4. 27 Apr, 2026 18 commits