陈曦 / sub2api · Commits · aa4e37d0

Unverified commit aa4e37d0, authored Mar 14, 2026 by Wesley Liddick; committed by GitHub, Mar 14, 2026.

Merge pull request #966 from GuangYiDing/feat/db-backup-restore

feat: scheduled database backup & restore (S3-compatible storage, supports Cloudflare R2)

Parents: a1dc0089, 1047f973
Changes: 22 files
Dockerfile

```diff
@@ -9,6 +9,7 @@
 ARG NODE_IMAGE=node:24-alpine
 ARG GOLANG_IMAGE=golang:1.26.1-alpine
 ARG ALPINE_IMAGE=alpine:3.21
+ARG POSTGRES_IMAGE=postgres:18-alpine
 ARG GOPROXY=https://goproxy.cn,direct
 ARG GOSUMDB=sum.golang.google.cn
@@ -73,7 +74,12 @@ RUN VERSION_VALUE="${VERSION}" && \
     ./cmd/server

 # -----------------------------------------------------------------------------
-# Stage 3: Final Runtime Image
+# Stage 3: PostgreSQL Client (version-matched with docker-compose)
+# -----------------------------------------------------------------------------
+FROM ${POSTGRES_IMAGE} AS pg-client
+
+# -----------------------------------------------------------------------------
+# Stage 4: Final Runtime Image
 # -----------------------------------------------------------------------------
 FROM ${ALPINE_IMAGE}
@@ -86,8 +92,20 @@ LABEL org.opencontainers.image.source="https://github.com/Wei-Shaw/sub2api"
 RUN apk add --no-cache \
     ca-certificates \
     tzdata \
+    libpq \
+    zstd-libs \
+    lz4-libs \
+    krb5-libs \
+    libldap \
+    libedit \
     && rm -rf /var/cache/apk/*

+# Copy pg_dump and psql from the same postgres image used in docker-compose
+# This ensures version consistency between backup tools and the database server
+COPY --from=pg-client /usr/local/bin/pg_dump /usr/local/bin/pg_dump
+COPY --from=pg-client /usr/local/bin/psql /usr/local/bin/psql
+COPY --from=pg-client /usr/local/lib/libpq.so.5* /usr/local/lib/
+
 # Create non-root user
 RUN addgroup -g 1000 sub2api && \
     adduser -u 1000 -G sub2api -s /bin/sh -D sub2api
```
backend/cmd/server/wire.go

```diff
@@ -94,6 +94,7 @@ func provideCleanup(
 	antigravityOAuth *service.AntigravityOAuthService,
 	openAIGateway *service.OpenAIGatewayService,
 	scheduledTestRunner *service.ScheduledTestRunnerService,
+	backupSvc *service.BackupService,
 ) func() {
 	return func() {
 		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
@@ -230,6 +231,12 @@ func provideCleanup(
 			}
 			return nil
 		}},
+		{"BackupService", func() error {
+			if backupSvc != nil {
+				backupSvc.Stop()
+			}
+			return nil
+		}},
 	}
 	infraSteps := []cleanupStep{
```
backend/cmd/server/wire_gen.go

```diff
@@ -146,6 +146,10 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	adminAnnouncementHandler := admin.NewAnnouncementHandler(announcementService)
 	dataManagementService := service.NewDataManagementService()
 	dataManagementHandler := admin.NewDataManagementHandler(dataManagementService)
+	backupObjectStoreFactory := repository.NewS3BackupStoreFactory()
+	dbDumper := repository.NewPgDumper(configConfig)
+	backupService := service.ProvideBackupService(settingRepository, configConfig, secretEncryptor, backupObjectStoreFactory, dbDumper)
+	backupHandler := admin.NewBackupHandler(backupService, userService)
 	oAuthHandler := admin.NewOAuthHandler(oAuthService)
 	openAIOAuthHandler := admin.NewOpenAIOAuthHandler(openAIOAuthService, adminService)
 	geminiOAuthHandler := admin.NewGeminiOAuthHandler(geminiOAuthService)
@@ -201,7 +205,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	scheduledTestResultRepository := repository.NewScheduledTestResultRepository(db)
 	scheduledTestService := service.ProvideScheduledTestService(scheduledTestPlanRepository, scheduledTestResultRepository)
 	scheduledTestHandler := admin.NewScheduledTestHandler(scheduledTestService)
-	adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, adminAnnouncementHandler, dataManagementHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler, errorPassthroughHandler, adminAPIKeyHandler, scheduledTestHandler)
+	adminHandlers := handler.ProvideAdminHandlers(dashboardHandler, adminUserHandler, groupHandler, accountHandler, adminAnnouncementHandler, dataManagementHandler, backupHandler, oAuthHandler, openAIOAuthHandler, geminiOAuthHandler, antigravityOAuthHandler, proxyHandler, adminRedeemHandler, promoHandler, settingHandler, opsHandler, systemHandler, adminSubscriptionHandler, adminUsageHandler, userAttributeHandler, errorPassthroughHandler, adminAPIKeyHandler, scheduledTestHandler)
 	usageRecordWorkerPool := service.NewUsageRecordWorkerPool(configConfig)
 	userMsgQueueCache := repository.NewUserMsgQueueCache(redisClient)
 	userMessageQueueService := service.ProvideUserMessageQueueService(userMsgQueueCache, rpmCache, configConfig)
@@ -232,7 +236,7 @@ func initializeApplication(buildInfo handler.BuildInfo) (*Application, error) {
 	accountExpiryService := service.ProvideAccountExpiryService(accountRepository)
 	subscriptionExpiryService := service.ProvideSubscriptionExpiryService(userSubscriptionRepository)
 	scheduledTestRunnerService := service.ProvideScheduledTestRunnerService(scheduledTestPlanRepository, scheduledTestService, accountTestService, rateLimitService, configConfig)
-	v := provideCleanup(client, redisClient, opsMetricsCollector, opsAggregationService, opsAlertEvaluatorService, opsCleanupService, opsScheduledReportService, opsSystemLogSink, soraMediaCleanupService, schedulerSnapshotService, tokenRefreshService, accountExpiryService, subscriptionExpiryService, usageCleanupService, idempotencyCleanupService, pricingService, emailQueueService, billingCacheService, usageRecordWorkerPool, subscriptionService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, openAIGatewayService, scheduledTestRunnerService)
+	v := provideCleanup(client, redisClient, opsMetricsCollector, opsAggregationService, opsAlertEvaluatorService, opsCleanupService, opsScheduledReportService, opsSystemLogSink, soraMediaCleanupService, schedulerSnapshotService, tokenRefreshService, accountExpiryService, subscriptionExpiryService, usageCleanupService, idempotencyCleanupService, pricingService, emailQueueService, billingCacheService, usageRecordWorkerPool, subscriptionService, oAuthService, openAIOAuthService, geminiOAuthService, antigravityOAuthService, openAIGatewayService, scheduledTestRunnerService, backupService)
 	application := &Application{
 		Server:  httpServer,
 		Cleanup: v,
@@ -285,6 +289,7 @@ func provideCleanup(
 	antigravityOAuth *service.AntigravityOAuthService,
 	openAIGateway *service.OpenAIGatewayService,
 	scheduledTestRunner *service.ScheduledTestRunnerService,
+	backupSvc *service.BackupService,
 ) func() {
 	return func() {
 		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
@@ -420,6 +425,12 @@ func provideCleanup(
 			}
 			return nil
 		}},
+		{"BackupService", func() error {
+			if backupSvc != nil {
+				backupSvc.Stop()
+			}
+			return nil
+		}},
 	}
 	infraSteps := []cleanupStep{
```
backend/cmd/server/wire_gen_test.go

```diff
@@ -75,6 +75,7 @@ func TestProvideCleanup_WithMinimalDependencies_NoPanic(t *testing.T) {
 		antigravityOAuthSvc,
 		nil, // openAIGateway
 		nil, // scheduledTestRunner
+		nil, // backupSvc
 	)
 	require.NotPanics(t, func() {
```
backend/internal/handler/admin/backup_handler.go (new file, mode 100644)

```go
package admin

import (
	"github.com/Wei-Shaw/sub2api/internal/pkg/response"
	"github.com/Wei-Shaw/sub2api/internal/server/middleware"
	"github.com/Wei-Shaw/sub2api/internal/service"

	"github.com/gin-gonic/gin"
)

type BackupHandler struct {
	backupService *service.BackupService
	userService   *service.UserService
}

func NewBackupHandler(backupService *service.BackupService, userService *service.UserService) *BackupHandler {
	return &BackupHandler{
		backupService: backupService,
		userService:   userService,
	}
}

// ─── S3 configuration ───

func (h *BackupHandler) GetS3Config(c *gin.Context) {
	cfg, err := h.backupService.GetS3Config(c.Request.Context())
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, cfg)
}

func (h *BackupHandler) UpdateS3Config(c *gin.Context) {
	var req service.BackupS3Config
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}
	cfg, err := h.backupService.UpdateS3Config(c.Request.Context(), req)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, cfg)
}

func (h *BackupHandler) TestS3Connection(c *gin.Context) {
	var req service.BackupS3Config
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}
	err := h.backupService.TestS3Connection(c.Request.Context(), req)
	if err != nil {
		response.Success(c, gin.H{"ok": false, "message": err.Error()})
		return
	}
	response.Success(c, gin.H{"ok": true, "message": "connection successful"})
}

// ─── Scheduled backup ───

func (h *BackupHandler) GetSchedule(c *gin.Context) {
	cfg, err := h.backupService.GetSchedule(c.Request.Context())
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, cfg)
}

func (h *BackupHandler) UpdateSchedule(c *gin.Context) {
	var req service.BackupScheduleConfig
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "Invalid request: "+err.Error())
		return
	}
	cfg, err := h.backupService.UpdateSchedule(c.Request.Context(), req)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, cfg)
}

// ─── Backup operations ───

type CreateBackupRequest struct {
	ExpireDays *int `json:"expire_days"` // nil = use the default of 14; 0 = never expires
}

func (h *BackupHandler) CreateBackup(c *gin.Context) {
	var req CreateBackupRequest
	_ = c.ShouldBindJSON(&req) // an empty body is allowed

	expireDays := 14 // default: expire after 14 days
	if req.ExpireDays != nil {
		expireDays = *req.ExpireDays
	}

	record, err := h.backupService.CreateBackup(c.Request.Context(), "manual", expireDays)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, record)
}

func (h *BackupHandler) ListBackups(c *gin.Context) {
	records, err := h.backupService.ListBackups(c.Request.Context())
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	if records == nil {
		records = []service.BackupRecord{}
	}
	response.Success(c, gin.H{"items": records})
}

func (h *BackupHandler) GetBackup(c *gin.Context) {
	backupID := c.Param("id")
	if backupID == "" {
		response.BadRequest(c, "backup ID is required")
		return
	}
	record, err := h.backupService.GetBackupRecord(c.Request.Context(), backupID)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, record)
}

func (h *BackupHandler) DeleteBackup(c *gin.Context) {
	backupID := c.Param("id")
	if backupID == "" {
		response.BadRequest(c, "backup ID is required")
		return
	}
	if err := h.backupService.DeleteBackup(c.Request.Context(), backupID); err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, gin.H{"deleted": true})
}

func (h *BackupHandler) GetDownloadURL(c *gin.Context) {
	backupID := c.Param("id")
	if backupID == "" {
		response.BadRequest(c, "backup ID is required")
		return
	}
	url, err := h.backupService.GetBackupDownloadURL(c.Request.Context(), backupID)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, gin.H{"url": url})
}

// ─── Restore (requires the admin to re-enter their password) ───

type RestoreBackupRequest struct {
	Password string `json:"password" binding:"required"`
}

func (h *BackupHandler) RestoreBackup(c *gin.Context) {
	backupID := c.Param("id")
	if backupID == "" {
		response.BadRequest(c, "backup ID is required")
		return
	}
	var req RestoreBackupRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		response.BadRequest(c, "password is required for restore operation")
		return
	}

	// Get the current admin user ID from the request context
	sub, ok := middleware.GetAuthSubjectFromContext(c)
	if !ok {
		response.Unauthorized(c, "unauthorized")
		return
	}

	// Load the admin user and verify the password
	user, err := h.userService.GetByID(c.Request.Context(), sub.UserID)
	if err != nil {
		response.ErrorFrom(c, err)
		return
	}
	if !user.CheckPassword(req.Password) {
		response.BadRequest(c, "incorrect admin password")
		return
	}

	if err := h.backupService.RestoreBackup(c.Request.Context(), backupID); err != nil {
		response.ErrorFrom(c, err)
		return
	}
	response.Success(c, gin.H{"restored": true})
}
```
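The `CreateBackupRequest` above uses `*int` rather than `int` so the handler can distinguish "field omitted" (nil, fall back to the 14-day default) from an explicit `0` ("never expire"). A small standalone sketch of that decoding rule, using only the standard library (the type and helper names here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors CreateBackupRequest: a pointer field lets JSON decoding
// distinguish an omitted value from an explicit zero.
type createBackupRequest struct {
	ExpireDays *int `json:"expire_days"`
}

// resolveExpireDays applies the handler's rule: nil means the 14-day
// default; any explicit value, including 0 (never expire), wins.
func resolveExpireDays(body string) int {
	var req createBackupRequest
	_ = json.Unmarshal([]byte(body), &req) // empty/invalid body leaves the field nil
	expireDays := 14
	if req.ExpireDays != nil {
		expireDays = *req.ExpireDays
	}
	return expireDays
}

func main() {
	fmt.Println(resolveExpireDays(`{}`))                 // omitted -> 14
	fmt.Println(resolveExpireDays(`{"expire_days":0}`))  // explicit 0 -> 0
	fmt.Println(resolveExpireDays(`{"expire_days":30}`)) // explicit 30 -> 30
}
```

With a plain `int` field the two first cases would be indistinguishable, since Go's zero value for `int` is `0`.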
backend/internal/handler/handler.go

```diff
@@ -12,6 +12,7 @@ type AdminHandlers struct {
 	Account        *admin.AccountHandler
 	Announcement   *admin.AnnouncementHandler
 	DataManagement *admin.DataManagementHandler
+	Backup         *admin.BackupHandler
 	OAuth          *admin.OAuthHandler
 	OpenAIOAuth    *admin.OpenAIOAuthHandler
 	GeminiOAuth    *admin.GeminiOAuthHandler
```
backend/internal/handler/wire.go

```diff
@@ -15,6 +15,7 @@ func ProvideAdminHandlers(
 	accountHandler *admin.AccountHandler,
 	announcementHandler *admin.AnnouncementHandler,
 	dataManagementHandler *admin.DataManagementHandler,
+	backupHandler *admin.BackupHandler,
 	oauthHandler *admin.OAuthHandler,
 	openaiOAuthHandler *admin.OpenAIOAuthHandler,
 	geminiOAuthHandler *admin.GeminiOAuthHandler,
@@ -39,6 +40,7 @@ func ProvideAdminHandlers(
 		Account:        accountHandler,
 		Announcement:   announcementHandler,
 		DataManagement: dataManagementHandler,
+		Backup:         backupHandler,
 		OAuth:          oauthHandler,
 		OpenAIOAuth:    openaiOAuthHandler,
 		GeminiOAuth:    geminiOAuthHandler,
@@ -128,6 +130,7 @@ var ProviderSet = wire.NewSet(
 	admin.NewAccountHandler,
 	admin.NewAnnouncementHandler,
 	admin.NewDataManagementHandler,
+	admin.NewBackupHandler,
 	admin.NewOAuthHandler,
 	admin.NewOpenAIOAuthHandler,
 	admin.NewGeminiOAuthHandler,
```
backend/internal/repository/backup_pg_dumper.go (new file, mode 100644)

```go
package repository

import (
	"context"
	"fmt"
	"io"
	"os/exec"

	"github.com/Wei-Shaw/sub2api/internal/config"
	"github.com/Wei-Shaw/sub2api/internal/service"
)

// PgDumper implements service.DBDumper using pg_dump/psql
type PgDumper struct {
	cfg *config.DatabaseConfig
}

// NewPgDumper creates a new PgDumper
func NewPgDumper(cfg *config.Config) service.DBDumper {
	return &PgDumper{cfg: &cfg.Database}
}

// Dump executes pg_dump and returns a streaming reader of the output
func (d *PgDumper) Dump(ctx context.Context) (io.ReadCloser, error) {
	args := []string{
		"-h", d.cfg.Host,
		"-p", fmt.Sprintf("%d", d.cfg.Port),
		"-U", d.cfg.User,
		"-d", d.cfg.DBName,
		"--no-owner",
		"--no-acl",
		"--clean",
		"--if-exists",
	}
	cmd := exec.CommandContext(ctx, "pg_dump", args...)
	if d.cfg.Password != "" {
		cmd.Env = append(cmd.Environ(), "PGPASSWORD="+d.cfg.Password)
	}
	if d.cfg.SSLMode != "" {
		cmd.Env = append(cmd.Environ(), "PGSSLMODE="+d.cfg.SSLMode)
	}
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, fmt.Errorf("create stdout pipe: %w", err)
	}
	if err := cmd.Start(); err != nil {
		return nil, fmt.Errorf("start pg_dump: %w", err)
	}
	// Return a ReadCloser that reads stdout and waits for the process on Close
	return &cmdReadCloser{ReadCloser: stdout, cmd: cmd}, nil
}

// Restore executes psql to restore from a streaming reader
func (d *PgDumper) Restore(ctx context.Context, data io.Reader) error {
	args := []string{
		"-h", d.cfg.Host,
		"-p", fmt.Sprintf("%d", d.cfg.Port),
		"-U", d.cfg.User,
		"-d", d.cfg.DBName,
		"--single-transaction",
	}
	cmd := exec.CommandContext(ctx, "psql", args...)
	if d.cfg.Password != "" {
		cmd.Env = append(cmd.Environ(), "PGPASSWORD="+d.cfg.Password)
	}
	if d.cfg.SSLMode != "" {
		cmd.Env = append(cmd.Environ(), "PGSSLMODE="+d.cfg.SSLMode)
	}
	cmd.Stdin = data
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, string(output))
	}
	return nil
}

// cmdReadCloser wraps a command stdout pipe and waits for the process on Close
type cmdReadCloser struct {
	io.ReadCloser
	cmd *exec.Cmd
}

func (c *cmdReadCloser) Close() error {
	// Close the pipe first
	_ = c.ReadCloser.Close()
	// Wait for the process to exit
	if err := c.cmd.Wait(); err != nil {
		return fmt.Errorf("pg_dump exited with error: %w", err)
	}
	return nil
}
```
backend/internal/repository/backup_s3_store.go (new file, mode 100644)

```go
package repository

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
	awsconfig "github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"

	"github.com/Wei-Shaw/sub2api/internal/service"
)

// S3BackupStore implements service.BackupObjectStore using AWS S3 compatible storage
type S3BackupStore struct {
	client *s3.Client
	bucket string
}

// NewS3BackupStoreFactory returns a BackupObjectStoreFactory that creates S3-backed stores
func NewS3BackupStoreFactory() service.BackupObjectStoreFactory {
	return func(ctx context.Context, cfg *service.BackupS3Config) (service.BackupObjectStore, error) {
		region := cfg.Region
		if region == "" {
			region = "auto" // default region for Cloudflare R2
		}
		awsCfg, err := awsconfig.LoadDefaultConfig(ctx,
			awsconfig.WithRegion(region),
			awsconfig.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
				cfg.AccessKeyID, cfg.SecretAccessKey, "",
			)),
		)
		if err != nil {
			return nil, fmt.Errorf("load aws config: %w", err)
		}
		client := s3.NewFromConfig(awsCfg, func(o *s3.Options) {
			if cfg.Endpoint != "" {
				o.BaseEndpoint = &cfg.Endpoint
			}
			if cfg.ForcePathStyle {
				o.UsePathStyle = true
			}
			o.APIOptions = append(o.APIOptions, v4.SwapComputePayloadSHA256ForUnsignedPayloadMiddleware)
			o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
		})
		return &S3BackupStore{client: client, bucket: cfg.Bucket}, nil
	}
}

func (s *S3BackupStore) Upload(ctx context.Context, key string, body io.Reader, contentType string) (int64, error) {
	// Read the full content to learn its size (S3 PutObject needs the content length)
	data, err := io.ReadAll(body)
	if err != nil {
		return 0, fmt.Errorf("read body: %w", err)
	}
	_, err = s.client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:      &s.bucket,
		Key:         &key,
		Body:        bytes.NewReader(data),
		ContentType: &contentType,
	})
	if err != nil {
		return 0, fmt.Errorf("S3 PutObject: %w", err)
	}
	return int64(len(data)), nil
}

func (s *S3BackupStore) Download(ctx context.Context, key string) (io.ReadCloser, error) {
	result, err := s.client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: &s.bucket,
		Key:    &key,
	})
	if err != nil {
		return nil, fmt.Errorf("S3 GetObject: %w", err)
	}
	return result.Body, nil
}

func (s *S3BackupStore) Delete(ctx context.Context, key string) error {
	_, err := s.client.DeleteObject(ctx, &s3.DeleteObjectInput{
		Bucket: &s.bucket,
		Key:    &key,
	})
	return err
}

func (s *S3BackupStore) PresignURL(ctx context.Context, key string, expiry time.Duration) (string, error) {
	presignClient := s3.NewPresignClient(s.client)
	result, err := presignClient.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket: &s.bucket,
		Key:    &key,
	}, s3.WithPresignExpires(expiry))
	if err != nil {
		return "", fmt.Errorf("presign url: %w", err)
	}
	return result.URL, nil
}

func (s *S3BackupStore) HeadBucket(ctx context.Context) error {
	_, err := s.client.HeadBucket(ctx, &s3.HeadBucketInput{
		Bucket: &s.bucket,
	})
	if err != nil {
		return fmt.Errorf("S3 HeadBucket failed: %w", err)
	}
	return nil
}
```
backend/internal/repository/wire.go

```diff
@@ -100,6 +100,10 @@ var ProviderSet = wire.NewSet(
 	// Encryptors
 	NewAESEncryptor,

+	// Backup infrastructure
+	NewPgDumper,
+	NewS3BackupStoreFactory,
+
 	// HTTP service ports (DI Strategy A: return interface directly)
 	NewTurnstileVerifier,
 	ProvidePricingRemoteClient,
```
backend/internal/server/routes/admin.go

```diff
@@ -58,6 +58,9 @@ func RegisterAdminRoutes(
 	// Data management
 	registerDataManagementRoutes(admin, h)

+	// Database backup & restore
+	registerBackupRoutes(admin, h)
+
 	// Ops monitoring
 	registerOpsRoutes(admin, h)
@@ -440,6 +443,30 @@ func registerDataManagementRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 	}
 }

+func registerBackupRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
+	backup := admin.Group("/backups")
+	{
+		// S3 storage configuration
+		backup.GET("/s3-config", h.Admin.Backup.GetS3Config)
+		backup.PUT("/s3-config", h.Admin.Backup.UpdateS3Config)
+		backup.POST("/s3-config/test", h.Admin.Backup.TestS3Connection)
+
+		// Scheduled backup configuration
+		backup.GET("/schedule", h.Admin.Backup.GetSchedule)
+		backup.PUT("/schedule", h.Admin.Backup.UpdateSchedule)
+
+		// Backup operations
+		backup.POST("", h.Admin.Backup.CreateBackup)
+		backup.GET("", h.Admin.Backup.ListBackups)
+		backup.GET("/:id", h.Admin.Backup.GetBackup)
+		backup.DELETE("/:id", h.Admin.Backup.DeleteBackup)
+		backup.GET("/:id/download-url", h.Admin.Backup.GetDownloadURL)
+
+		// Restore operation
+		backup.POST("/:id/restore", h.Admin.Backup.RestoreBackup)
+	}
+}
+
 func registerSystemRoutes(admin *gin.RouterGroup, h *handler.Handlers) {
 	system := admin.Group("/system")
 	{
```
backend/internal/service/backup_service.go (new file, mode 100644; diff collapsed in the source view, contents not shown)
backend/internal/service/backup_service_test.go (new file, mode 100644)

```go
//go:build unit

package service

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"strings"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/require"

	"github.com/Wei-Shaw/sub2api/internal/config"
)

// ─── Mocks ───

type mockSettingRepo struct {
	mu   sync.Mutex
	data map[string]string
}

func newMockSettingRepo() *mockSettingRepo {
	return &mockSettingRepo{data: make(map[string]string)}
}

func (m *mockSettingRepo) Get(_ context.Context, key string) (*Setting, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	v, ok := m.data[key]
	if !ok {
		return nil, ErrSettingNotFound
	}
	return &Setting{Key: key, Value: v}, nil
}

func (m *mockSettingRepo) GetValue(_ context.Context, key string) (string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	v, ok := m.data[key]
	if !ok {
		return "", nil
	}
	return v, nil
}

func (m *mockSettingRepo) Set(_ context.Context, key, value string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = value
	return nil
}

func (m *mockSettingRepo) GetMultiple(_ context.Context, keys []string) (map[string]string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	result := make(map[string]string)
	for _, k := range keys {
		if v, ok := m.data[k]; ok {
			result[k] = v
		}
	}
	return result, nil
}

func (m *mockSettingRepo) SetMultiple(_ context.Context, settings map[string]string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	for k, v := range settings {
		m.data[k] = v
	}
	return nil
}

func (m *mockSettingRepo) GetAll(_ context.Context) (map[string]string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	result := make(map[string]string, len(m.data))
	for k, v := range m.data {
		result[k] = v
	}
	return result, nil
}

func (m *mockSettingRepo) Delete(_ context.Context, key string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.data, key)
	return nil
}

// plainEncryptor only wraps values with a marker prefix; used for tests
type plainEncryptor struct{}

func (e *plainEncryptor) Encrypt(plaintext string) (string, error) {
	return "ENC:" + plaintext, nil
}

func (e *plainEncryptor) Decrypt(ciphertext string) (string, error) {
	if strings.HasPrefix(ciphertext, "ENC:") {
		return strings.TrimPrefix(ciphertext, "ENC:"), nil
	}
	return ciphertext, fmt.Errorf("not encrypted")
}

type mockDumper struct {
	dumpData []byte
	dumpErr  error
	restored []byte
	restErr  error
}

func (m *mockDumper) Dump(_ context.Context) (io.ReadCloser, error) {
	if m.dumpErr != nil {
		return nil, m.dumpErr
	}
	return io.NopCloser(bytes.NewReader(m.dumpData)), nil
}

func (m *mockDumper) Restore(_ context.Context, data io.Reader) error {
	if m.restErr != nil {
		return m.restErr
	}
	d, err := io.ReadAll(data)
	if err != nil {
		return err
	}
	m.restored = d
	return nil
}

type mockObjectStore struct {
	objects map[string][]byte
	mu      sync.Mutex
}

func newMockObjectStore() *mockObjectStore {
	return &mockObjectStore{objects: make(map[string][]byte)}
}

func (m *mockObjectStore) Upload(_ context.Context, key string, body io.Reader, _ string) (int64, error) {
	data, err := io.ReadAll(body)
	if err != nil {
		return 0, err
	}
	m.mu.Lock()
	m.objects[key] = data
	m.mu.Unlock()
	return int64(len(data)), nil
}

func (m *mockObjectStore) Download(_ context.Context, key string) (io.ReadCloser, error) {
	m.mu.Lock()
	data, ok := m.objects[key]
	m.mu.Unlock()
	if !ok {
		return nil, fmt.Errorf("not found: %s", key)
	}
	return io.NopCloser(bytes.NewReader(data)), nil
}

func (m *mockObjectStore) Delete(_ context.Context, key string) error {
	m.mu.Lock()
	delete(m.objects, key)
	m.mu.Unlock()
	return nil
}

func (m *mockObjectStore) PresignURL(_ context.Context, key string, _ time.Duration) (string, error) {
	return "https://presigned.example.com/" + key, nil
}

func (m *mockObjectStore) HeadBucket(_ context.Context) error {
	return nil
}

func newTestBackupService(repo *mockSettingRepo, dumper *mockDumper, store *mockObjectStore) *BackupService {
	cfg := &config.Config{
		Database: config.DatabaseConfig{
			Host:   "localhost",
			Port:   5432,
			User:   "test",
			DBName: "testdb",
		},
	}
	factory := func(_ context.Context, _ *BackupS3Config) (BackupObjectStore, error) {
		return store, nil
	}
	return NewBackupService(repo, cfg, &plainEncryptor{}, factory, dumper)
}

func seedS3Config(t *testing.T, repo *mockSettingRepo) {
	t.Helper()
	cfg := BackupS3Config{
		Bucket:          "test-bucket",
		AccessKeyID:     "AKID",
		SecretAccessKey: "ENC:secret123",
		Prefix:          "backups",
	}
	data, _ := json.Marshal(cfg)
	require.NoError(t, repo.Set(context.Background(), settingKeyBackupS3Config, string(data)))
}

// ─── Tests ───

func TestBackupService_S3ConfigEncryption(t *testing.T) {
	repo := newMockSettingRepo()
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	// Saving the config should encrypt SecretAccessKey
	_, err := svc.UpdateS3Config(context.Background(), BackupS3Config{
		Bucket:          "my-bucket",
		AccessKeyID:     "AKID",
		SecretAccessKey: "my-secret",
		Prefix:          "backups",
	})
	require.NoError(t, err)

	// The value stored in the database should be the encrypted form
	raw, _ := repo.GetValue(context.Background(), settingKeyBackupS3Config)
	var stored BackupS3Config
	require.NoError(t, json.Unmarshal([]byte(raw), &stored))
	require.Equal(t, "ENC:my-secret", stored.SecretAccessKey)

	// GetS3Config should redact the secret
	cfg, err := svc.GetS3Config(context.Background())
	require.NoError(t, err)
	require.Empty(t, cfg.SecretAccessKey)
	require.Equal(t, "my-bucket", cfg.Bucket)

	// loadS3Config should decrypt internally
	internal, err := svc.loadS3Config(context.Background())
	require.NoError(t, err)
	require.Equal(t, "my-secret", internal.SecretAccessKey)
}

func TestBackupService_S3ConfigKeepExistingSecret(t *testing.T) {
	repo := newMockSettingRepo()
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	// First save a config that includes a secret
	_, err := svc.UpdateS3Config(context.Background(), BackupS3Config{
		Bucket:          "my-bucket",
		AccessKeyID:     "AKID",
		SecretAccessKey: "original-secret",
	})
	require.NoError(t, err)

	// Updating without a secret should keep the original value
	_, err = svc.UpdateS3Config(context.Background(), BackupS3Config{
		Bucket:      "my-bucket",
		AccessKeyID: "AKID-NEW",
	})
	require.NoError(t, err)

	internal, err := svc.loadS3Config(context.Background())
	require.NoError(t, err)
	require.Equal(t, "original-secret", internal.SecretAccessKey)
	require.Equal(t, "AKID-NEW", internal.AccessKeyID)
}

func TestBackupService_SaveRecordConcurrency(t *testing.T) {
	repo := newMockSettingRepo()
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	var wg sync.WaitGroup
	n := 20
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(idx int) {
			defer wg.Done()
			record := &BackupRecord{
				ID:        fmt.Sprintf("rec-%d", idx),
				Status:    "completed",
				StartedAt: time.Now().Format(time.RFC3339),
			}
			_ = svc.saveRecord(context.Background(), record)
		}(i)
	}
	wg.Wait()

	records, err := svc.loadRecords(context.Background())
	require.NoError(t, err)
	require.Len(t, records, n)
}

func TestBackupService_LoadRecords_Empty(t *testing.T) {
	repo := newMockSettingRepo()
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	records, err := svc.loadRecords(context.Background())
	require.NoError(t, err)
	require.Nil(t, records) // no data returns nil
}

func TestBackupService_LoadRecords_Corrupted(t *testing.T) {
	repo := newMockSettingRepo()
	_ = repo.Set(context.Background(), settingKeyBackupRecords, "not valid json{{{")
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	records, err := svc.loadRecords(context.Background())
	require.Error(t, err) // corrupted data should return an error
	require.Nil(t, records)
}

func TestBackupService_CreateBackup_Streaming(t *testing.T) {
	repo := newMockSettingRepo()
	seedS3Config(t, repo)
	dumpContent := "-- PostgreSQL dump\nCREATE TABLE test (id int);\n"
	dumper := &mockDumper{dumpData: []byte(dumpContent)}
	store := newMockObjectStore()
	svc := newTestBackupService(repo, dumper, store)

	record, err := svc.CreateBackup(context.Background(), "manual", 14)
	require.NoError(t, err)
	require.Equal(t, "completed", record.Status)
	require.Greater(t, record.SizeBytes, int64(0))
	require.NotEmpty(t, record.S3Key)

	// Verify the file actually landed in S3
	store.mu.Lock()
	require.Len(t, store.objects, 1)
	store.mu.Unlock()
}

func TestBackupService_CreateBackup_DumpFailure(t *testing.T) {
	repo := newMockSettingRepo()
	seedS3Config(t, repo)
	dumper := &mockDumper{dumpErr: fmt.Errorf("pg_dump failed")}
	store := newMockObjectStore()
	svc := newTestBackupService(repo, dumper, store)

	record, err := svc.CreateBackup(context.Background(), "manual", 14)
	require.Error(t, err)
	require.Equal(t, "failed", record.Status)
	require.Contains(t, record.ErrorMsg, "pg_dump")
}

func TestBackupService_CreateBackup_NoS3Config(t *testing.T) {
	repo := newMockSettingRepo()
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	_, err := svc.CreateBackup(context.Background(), "manual", 14)
	require.ErrorIs(t, err, ErrBackupS3NotConfigured)
}

func TestBackupService_CreateBackup_ConcurrentBlocked(t *testing.T) {
	repo := newMockSettingRepo()
	seedS3Config(t, repo)
	// Use a slow dumper to simulate a backup in progress
	dumper := &mockDumper{dumpData: []byte("data")}
	store := newMockObjectStore()
	svc := newTestBackupService(repo, dumper, store)

	// Manually set the backingUp flag
	svc.mu.Lock()
	svc.backingUp = true
	svc.mu.Unlock()

	_, err := svc.CreateBackup(context.Background(), "manual", 14)
	require.ErrorIs(t, err, ErrBackupInProgress)
}

func TestBackupService_RestoreBackup_Streaming(t *testing.T) {
	repo := newMockSettingRepo()
	seedS3Config(t, repo)
	dumpContent := "-- PostgreSQL dump\nCREATE TABLE test (id int);\n"
	dumper := &mockDumper{dumpData: []byte(dumpContent)}
	store := newMockObjectStore()
	svc := newTestBackupService(repo, dumper, store)

	// First create a backup
	record, err := svc.CreateBackup(context.Background(), "manual", 14)
	require.NoError(t, err)

	// Restore it
	err = svc.RestoreBackup(context.Background(), record.ID)
	require.NoError(t, err)

	// The data psql received should match the original dump content
	require.Equal(t, dumpContent, string(dumper.restored))
}

func TestBackupService_RestoreBackup_NotCompleted(t *testing.T) {
	repo := newMockSettingRepo()
	seedS3Config(t, repo)
	svc := newTestBackupService(repo, &mockDumper{}, newMockObjectStore())

	// Manually insert a failed record
	_ = svc.saveRecord(context.Background(), &BackupRecord{
		ID:     "fail-1",
		Status: "failed",
	})

	err := svc.RestoreBackup(context.Background(), "fail-1")
	require.Error(t, err)
}

func TestBackupService_DeleteBackup(t *
```
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
seedS3Config
(
t
,
repo
)
dumpContent
:=
"data"
dumper
:=
&
mockDumper
{
dumpData
:
[]
byte
(
dumpContent
)}
store
:=
newMockObjectStore
()
svc
:=
newTestBackupService
(
repo
,
dumper
,
store
)
record
,
err
:=
svc
.
CreateBackup
(
context
.
Background
(),
"manual"
,
14
)
require
.
NoError
(
t
,
err
)
// S3 中应有文件
store
.
mu
.
Lock
()
require
.
Len
(
t
,
store
.
objects
,
1
)
store
.
mu
.
Unlock
()
// 删除
err
=
svc
.
DeleteBackup
(
context
.
Background
(),
record
.
ID
)
require
.
NoError
(
t
,
err
)
// S3 中文件应被删除
store
.
mu
.
Lock
()
require
.
Len
(
t
,
store
.
objects
,
0
)
store
.
mu
.
Unlock
()
// 记录应不存在
_
,
err
=
svc
.
GetBackupRecord
(
context
.
Background
(),
record
.
ID
)
require
.
ErrorIs
(
t
,
err
,
ErrBackupNotFound
)
}
func
TestBackupService_GetDownloadURL
(
t
*
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
seedS3Config
(
t
,
repo
)
dumper
:=
&
mockDumper
{
dumpData
:
[]
byte
(
"data"
)}
store
:=
newMockObjectStore
()
svc
:=
newTestBackupService
(
repo
,
dumper
,
store
)
record
,
err
:=
svc
.
CreateBackup
(
context
.
Background
(),
"manual"
,
14
)
require
.
NoError
(
t
,
err
)
url
,
err
:=
svc
.
GetBackupDownloadURL
(
context
.
Background
(),
record
.
ID
)
require
.
NoError
(
t
,
err
)
require
.
Contains
(
t
,
url
,
"https://presigned.example.com/"
)
}
func
TestBackupService_ListBackups_Sorted
(
t
*
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
svc
:=
newTestBackupService
(
repo
,
&
mockDumper
{},
newMockObjectStore
())
now
:=
time
.
Now
()
for
i
:=
0
;
i
<
3
;
i
++
{
_
=
svc
.
saveRecord
(
context
.
Background
(),
&
BackupRecord
{
ID
:
fmt
.
Sprintf
(
"rec-%d"
,
i
),
Status
:
"completed"
,
StartedAt
:
now
.
Add
(
time
.
Duration
(
i
)
*
time
.
Hour
)
.
Format
(
time
.
RFC3339
),
})
}
records
,
err
:=
svc
.
ListBackups
(
context
.
Background
())
require
.
NoError
(
t
,
err
)
require
.
Len
(
t
,
records
,
3
)
// 最新在前
require
.
Equal
(
t
,
"rec-2"
,
records
[
0
]
.
ID
)
require
.
Equal
(
t
,
"rec-0"
,
records
[
2
]
.
ID
)
}
func
TestBackupService_TestS3Connection
(
t
*
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
store
:=
newMockObjectStore
()
svc
:=
newTestBackupService
(
repo
,
&
mockDumper
{},
store
)
err
:=
svc
.
TestS3Connection
(
context
.
Background
(),
BackupS3Config
{
Bucket
:
"test"
,
AccessKeyID
:
"ak"
,
SecretAccessKey
:
"sk"
,
})
require
.
NoError
(
t
,
err
)
}
func
TestBackupService_TestS3Connection_Incomplete
(
t
*
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
svc
:=
newTestBackupService
(
repo
,
&
mockDumper
{},
newMockObjectStore
())
err
:=
svc
.
TestS3Connection
(
context
.
Background
(),
BackupS3Config
{
Bucket
:
"test"
,
})
require
.
Error
(
t
,
err
)
require
.
Contains
(
t
,
err
.
Error
(),
"incomplete"
)
}
func
TestBackupService_Schedule_CronValidation
(
t
*
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
svc
:=
newTestBackupService
(
repo
,
&
mockDumper
{},
newMockObjectStore
())
svc
.
cronSched
=
nil
// 未初始化 cron
// 启用但 cron 为空
_
,
err
:=
svc
.
UpdateSchedule
(
context
.
Background
(),
BackupScheduleConfig
{
Enabled
:
true
,
CronExpr
:
""
,
})
require
.
Error
(
t
,
err
)
// 无效的 cron 表达式
_
,
err
=
svc
.
UpdateSchedule
(
context
.
Background
(),
BackupScheduleConfig
{
Enabled
:
true
,
CronExpr
:
"invalid"
,
})
require
.
Error
(
t
,
err
)
}
func
TestBackupService_LoadS3Config_Corrupted
(
t
*
testing
.
T
)
{
repo
:=
newMockSettingRepo
()
_
=
repo
.
Set
(
context
.
Background
(),
settingKeyBackupS3Config
,
"not json!!!!"
)
svc
:=
newTestBackupService
(
repo
,
&
mockDumper
{},
newMockObjectStore
())
cfg
,
err
:=
svc
.
loadS3Config
(
context
.
Background
())
require
.
Error
(
t
,
err
)
require
.
Nil
(
t
,
cfg
)
}
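The newest-first ordering asserted in TestBackupService_ListBackups_Sorted can be reproduced with a small stdlib-only sketch. The BackupRecord type and sortNewestFirst helper below are illustrative stand-ins, not the service's actual implementation; the only assumptions are the fields the test itself uses (ID, StartedAt in RFC3339).

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// BackupRecord mirrors the two fields the sorting test relies on.
type BackupRecord struct {
	ID        string
	StartedAt string // RFC3339 timestamp
}

// sortNewestFirst orders records by StartedAt descending — the behavior
// asserted in TestBackupService_ListBackups_Sorted ("newest first").
func sortNewestFirst(records []BackupRecord) {
	sort.Slice(records, func(i, j int) bool {
		ti, _ := time.Parse(time.RFC3339, records[i].StartedAt)
		tj, _ := time.Parse(time.RFC3339, records[j].StartedAt)
		return ti.After(tj)
	})
}

func main() {
	now := time.Now()
	var recs []BackupRecord
	for i := 0; i < 3; i++ {
		recs = append(recs, BackupRecord{
			ID:        fmt.Sprintf("rec-%d", i),
			StartedAt: now.Add(time.Duration(i) * time.Hour).Format(time.RFC3339),
		})
	}
	sortNewestFirst(recs)
	fmt.Println(recs[0].ID, recs[2].ID) // rec-2 rec-0
}
```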
backend/internal/service/wire.go

@@ -322,6 +322,19 @@ func ProvideAPIKeyAuthCacheInvalidator(apiKeyService *APIKeyService) APIKeyAuthC
	return apiKeyService
}

// ProvideBackupService creates and starts BackupService
func ProvideBackupService(
	settingRepo SettingRepository,
	cfg *config.Config,
	encryptor SecretEncryptor,
	storeFactory BackupObjectStoreFactory,
	dumper DBDumper,
) *BackupService {
	svc := NewBackupService(settingRepo, cfg, encryptor, storeFactory, dumper)
	svc.Start()
	return svc
}

// ProvideSettingService wires SettingService with group reader for default subscription validation.
func ProvideSettingService(settingRepo SettingRepository, groupRepo GroupRepository, cfg *config.Config) *SettingService {
	svc := NewSettingService(settingRepo, cfg)
...

@@ -373,6 +386,7 @@ var ProviderSet = wire.NewSet(
	NewAccountTestService,
	ProvideSettingService,
	NewDataManagementService,
	ProvideBackupService,
	ProvideOpsSystemLogSink,
	NewOpsService,
	ProvideOpsMetricsCollector,
...
deploy/docker-compose.dev.yml
0 → 100644

# =============================================================================
# Sub2API Docker Compose - Local Development Build
# =============================================================================
# Build from local source code for testing changes.
#
# Usage:
#   cd deploy
#   docker compose -f docker-compose.dev.yml up --build
# =============================================================================

services:
  sub2api:
    build:
      context: ..
      dockerfile: Dockerfile
    container_name: sub2api-dev
    restart: unless-stopped
    ports:
      - "${BIND_HOST:-127.0.0.1}:${SERVER_PORT:-8080}:8080"
    volumes:
      - ./data:/app/data
    environment:
      - AUTO_SETUP=true
      - SERVER_HOST=0.0.0.0
      - SERVER_PORT=8080
      - SERVER_MODE=debug
      - RUN_MODE=${RUN_MODE:-standard}
      - DATABASE_HOST=postgres
      - DATABASE_PORT=5432
      - DATABASE_USER=${POSTGRES_USER:-sub2api}
      - DATABASE_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
      - DATABASE_DBNAME=${POSTGRES_DB:-sub2api}
      - DATABASE_SSLMODE=disable
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - REDIS_PASSWORD=${REDIS_PASSWORD:-}
      - REDIS_DB=${REDIS_DB:-0}
      - ADMIN_EMAIL=${ADMIN_EMAIL:-admin@sub2api.local}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-}
      - JWT_SECRET=${JWT_SECRET:-}
      - TOTP_ENCRYPTION_KEY=${TOTP_ENCRYPTION_KEY:-}
      - TZ=${TZ:-Asia/Shanghai}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD", "wget", "-q", "-T", "5", "-O", "/dev/null", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

  postgres:
    image: postgres:18-alpine
    container_name: sub2api-postgres-dev
    restart: unless-stopped
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=${POSTGRES_USER:-sub2api}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
      - POSTGRES_DB=${POSTGRES_DB:-sub2api}
      - PGDATA=/var/lib/postgresql/data
      - TZ=${TZ:-Asia/Shanghai}
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-sub2api} -d ${POSTGRES_DB:-sub2api}"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s

  redis:
    image: redis:8-alpine
    container_name: sub2api-redis-dev
    restart: unless-stopped
    volumes:
      - ./redis_data:/data
    command: >
      sh -c '
      redis-server
      --save 60 1
      --appendonly yes
      --appendfsync everysec
      ${REDIS_PASSWORD:+--requirepass "$REDIS_PASSWORD"}'
    environment:
      - TZ=${TZ:-Asia/Shanghai}
      - REDISCLI_AUTH=${REDIS_PASSWORD:-}
    networks:
      - sub2api-network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s

networks:
  sub2api-network:
    driver: bridge
frontend/src/api/admin/backup.ts
0 → 100644

import { apiClient } from '../client'

export interface BackupS3Config {
  endpoint: string
  region: string
  bucket: string
  access_key_id: string
  secret_access_key?: string
  prefix: string
  force_path_style: boolean
}

export interface BackupScheduleConfig {
  enabled: boolean
  cron_expr: string
  retain_days: number
  retain_count: number
}

export interface BackupRecord {
  id: string
  status: 'pending' | 'running' | 'completed' | 'failed'
  backup_type: string
  file_name: string
  s3_key: string
  size_bytes: number
  triggered_by: string
  error_message?: string
  started_at: string
  finished_at?: string
  expires_at?: string
}

export interface CreateBackupRequest {
  expire_days?: number
}

export interface TestS3Response {
  ok: boolean
  message: string
}

// S3 Config
export async function getS3Config(): Promise<BackupS3Config> {
  const { data } = await apiClient.get<BackupS3Config>('/admin/backups/s3-config')
  return data
}

export async function updateS3Config(config: BackupS3Config): Promise<BackupS3Config> {
  const { data } = await apiClient.put<BackupS3Config>('/admin/backups/s3-config', config)
  return data
}

export async function testS3Connection(config: BackupS3Config): Promise<TestS3Response> {
  const { data } = await apiClient.post<TestS3Response>('/admin/backups/s3-config/test', config)
  return data
}

// Schedule
export async function getSchedule(): Promise<BackupScheduleConfig> {
  const { data } = await apiClient.get<BackupScheduleConfig>('/admin/backups/schedule')
  return data
}

export async function updateSchedule(config: BackupScheduleConfig): Promise<BackupScheduleConfig> {
  const { data } = await apiClient.put<BackupScheduleConfig>('/admin/backups/schedule', config)
  return data
}

// Backup operations
export async function createBackup(req?: CreateBackupRequest): Promise<BackupRecord> {
  const { data } = await apiClient.post<BackupRecord>('/admin/backups', req || {}, { timeout: 600000 })
  return data
}

export async function listBackups(): Promise<{ items: BackupRecord[] }> {
  const { data } = await apiClient.get<{ items: BackupRecord[] }>('/admin/backups')
  return data
}

export async function getBackup(id: string): Promise<BackupRecord> {
  const { data } = await apiClient.get<BackupRecord>(`/admin/backups/${id}`)
  return data
}

export async function deleteBackup(id: string): Promise<void> {
  await apiClient.delete(`/admin/backups/${id}`)
}

export async function getDownloadURL(id: string): Promise<{ url: string }> {
  const { data } = await apiClient.get<{ url: string }>(`/admin/backups/${id}/download-url`)
  return data
}

// Restore
export async function restoreBackup(id: string, password: string): Promise<void> {
  await apiClient.post(`/admin/backups/${id}/restore`, { password }, { timeout: 600000 })
}

export const backupAPI = {
  getS3Config,
  updateS3Config,
  testS3Connection,
  getSchedule,
  updateSchedule,
  createBackup,
  listBackups,
  getBackup,
  deleteBackup,
  getDownloadURL,
  restoreBackup,
}

export default backupAPI
frontend/src/api/admin/index.ts

@@ -23,6 +23,7 @@ import errorPassthroughAPI from './errorPassthrough'
import dataManagementAPI from './dataManagement'
import apiKeysAPI from './apiKeys'
import scheduledTestsAPI from './scheduledTests'
+import backupAPI from './backup'

/**
 * Unified admin API object for convenient access
...

@@ -47,7 +48,8 @@ export const adminAPI = {
  errorPassthrough: errorPassthroughAPI,
  dataManagement: dataManagementAPI,
  apiKeys: apiKeysAPI,
-  scheduledTests: scheduledTestsAPI
+  scheduledTests: scheduledTestsAPI,
+  backup: backupAPI
}

export {
...

@@ -70,7 +72,8 @@ export {
  errorPassthroughAPI,
  dataManagementAPI,
  apiKeysAPI,
-  scheduledTestsAPI
+  scheduledTestsAPI,
+  backupAPI
}

export default adminAPI
...
frontend/src/components/layout/AppSidebar.vue

@@ -387,6 +387,21 @@ const DatabaseIcon = {
  )
}

const CloudArrowUpIcon = {
  render: () =>
    h(
      'svg',
      { fill: 'none', viewBox: '0 0 24 24', stroke: 'currentColor', 'stroke-width': '1.5' },
      [
        h('path', {
          'stroke-linecap': 'round',
          'stroke-linejoin': 'round',
          d: 'M12 16.5V9.75m0 0l3 3m-3-3l-3 3M6.75 19.5a4.5 4.5 0 01-1.41-8.775 5.25 5.25 0 0110.233-2.33 3 3 0 013.758 3.848A3.752 3.752 0 0118 19.5H6.75z'
        })
      ]
    )
}

const BellIcon = {
  render: () =>
    h(
...

@@ -611,6 +626,7 @@ const adminNavItems = computed((): NavItem[] => {
  if (authStore.isSimpleMode) {
    const filtered = baseItems.filter(item => !item.hideInSimpleMode)
    filtered.push({ path: '/keys', label: t('nav.apiKeys'), icon: KeyIcon })
    filtered.push({ path: '/admin/backup', label: t('nav.backup'), icon: CloudArrowUpIcon })
    filtered.push({ path: '/admin/data-management', label: t('nav.dataManagement'), icon: DatabaseIcon })
    filtered.push({ path: '/admin/settings', label: t('nav.settings'), icon: CogIcon })
    // Add admin custom menu items after settings
...

@@ -620,6 +636,7 @@ const adminNavItems = computed((): NavItem[] => {
    return filtered
  }
  baseItems.push({ path: '/admin/backup', label: t('nav.backup'), icon: CloudArrowUpIcon })
  baseItems.push({ path: '/admin/data-management', label: t('nav.dataManagement'), icon: DatabaseIcon })
  baseItems.push({ path: '/admin/settings', label: t('nav.settings'), icon: CogIcon })
  // Add admin custom menu items after settings
...
frontend/src/i18n/locales/en.ts

@@ -340,6 +340,7 @@ export default {
    redeemCodes: 'Redeem Codes',
    ops: 'Ops',
    promoCodes: 'Promo Codes',
    backup: 'DB Backup',
    dataManagement: 'Data Management',
    settings: 'Settings',
    myAccount: 'My Account',
...

@@ -978,6 +979,111 @@ export default {
    failedToLoad: 'Failed to load dashboard statistics'
  },
  backup: {
    title: 'Database Backup',
    description: 'Full database backup to S3-compatible storage with scheduled backup and restore',
    s3: {
      title: 'S3 Storage Configuration',
      description: 'Configure S3-compatible storage (supports Cloudflare R2)',
      descriptionPrefix: 'Configure S3-compatible storage (supports ',
      descriptionSuffix: ')',
      enabled: 'Enable S3 Storage',
      endpoint: 'Endpoint',
      region: 'Region',
      bucket: 'Bucket',
      prefix: 'Key Prefix',
      accessKeyId: 'Access Key ID',
      secretAccessKey: 'Secret Access Key',
      secretConfigured: 'Already configured, leave empty to keep',
      forcePathStyle: 'Force Path Style',
      testConnection: 'Test Connection',
      testSuccess: 'S3 connection test successful',
      testFailed: 'S3 connection test failed',
      saved: 'S3 configuration saved'
    },
    schedule: {
      title: 'Scheduled Backup',
      description: 'Configure automatic scheduled backups',
      enabled: 'Enable Scheduled Backup',
      cronExpr: 'Cron Expression',
      cronHint: 'e.g. "0 2 * * *" means every day at 2:00 AM',
      retainDays: 'Backup Expire Days',
      retainDaysHint: 'Backup files auto-delete after this many days, 0 = never expire',
      retainCount: 'Max Retain Count',
      retainCountHint: 'Maximum number of backups to keep, 0 = unlimited',
      saved: 'Schedule configuration saved'
    },
    operations: {
      title: 'Backup Records',
      description: 'Create manual backups and manage existing backup records',
      createBackup: 'Create Backup',
      backing: 'Backing up...',
      backupCreated: 'Backup created successfully',
      expireDays: 'Expire Days'
    },
    columns: {
      status: 'Status',
      fileName: 'File Name',
      size: 'Size',
      expiresAt: 'Expires At',
      triggeredBy: 'Triggered By',
      startedAt: 'Started At',
      actions: 'Actions'
    },
    status: {
      pending: 'Pending',
      running: 'Running',
      completed: 'Completed',
      failed: 'Failed'
    },
    trigger: {
      manual: 'Manual',
      scheduled: 'Scheduled'
    },
    neverExpire: 'Never',
    empty: 'No backup records',
    actions: {
      download: 'Download',
      restore: 'Restore',
      restoreConfirm: 'Are you sure you want to restore from this backup? This will overwrite the current database!',
      restorePasswordPrompt: 'Please enter your admin password to confirm the restore operation',
      restoreSuccess: 'Database restored successfully',
      deleteConfirm: 'Are you sure you want to delete this backup?',
      deleted: 'Backup deleted'
    },
    r2Guide: {
      title: 'Cloudflare R2 Setup Guide',
      intro: 'Cloudflare R2 provides S3-compatible object storage with a free tier of 10GB storage + 1M Class A requests/month, ideal for database backups.',
      step1: {
        title: 'Create an R2 Bucket',
        line1: 'Log in to the Cloudflare Dashboard (dash.cloudflare.com), select "R2 Object Storage" from the sidebar',
        line2: 'Click "Create bucket", enter a name (e.g. sub2api-backups), choose a region',
        line3: 'Click create to finish'
      },
      step2: {
        title: 'Create an API Token',
        line1: 'On the R2 page, click "Manage R2 API Tokens" in the top right',
        line2: 'Click "Create API token", set permission to "Object Read & Write"',
        line3: 'Recommended: restrict to specific bucket for better security',
        line4: 'After creation, you will see the Access Key ID and Secret Access Key',
        warning: 'The Secret Access Key is only shown once — copy and save it immediately!'
      },
      step3: {
        title: 'Get the S3 Endpoint',
        desc: 'Find your Account ID on the R2 overview page (in the URL or the right panel). The endpoint format is:',
        accountId: 'your_account_id'
      },
      step4: {
        title: 'Fill in the Configuration',
        checkEnabled: 'Checked',
        bucketValue: 'Your bucket name',
        fromStep2: 'Value from Step 2',
        unchecked: 'Unchecked'
      },
      freeTier: 'R2 Free Tier: 10GB storage + 1M Class A requests + 10M Class B requests per month — more than enough for database backups.'
    }
  },
  dataManagement: {
    title: 'Data Management',
    description: 'Manage data management agent status, object storage settings, and backup jobs in one place',
...
frontend/src/i18n/locales/zh.ts

@@ -340,6 +340,7 @@ export default {
    redeemCodes: '兑换码',
    ops: '运维监控',
    promoCodes: '优惠码',
    backup: '数据库备份',
    dataManagement: '数据管理',
    settings: '系统设置',
    myAccount: '我的账户',
...

@@ -1000,6 +1001,111 @@ export default {
    failedToLoad: '加载仪表盘数据失败'
  },
  backup: {
    title: '数据库备份',
    description: '全量数据库备份到 S3 兼容存储,支持定时备份与恢复',
    s3: {
      title: 'S3 存储配置',
      description: '配置 S3 兼容存储(支持 Cloudflare R2)',
      descriptionPrefix: '配置 S3 兼容存储(支持 ',
      descriptionSuffix: ')',
      enabled: '启用 S3 存储',
      endpoint: '端点地址',
      region: '区域',
      bucket: '存储桶',
      prefix: 'Key 前缀',
      accessKeyId: 'Access Key ID',
      secretAccessKey: 'Secret Access Key',
      secretConfigured: '已配置,留空保持不变',
      forcePathStyle: '强制路径风格',
      testConnection: '测试连接',
      testSuccess: 'S3 连接测试成功',
      testFailed: 'S3 连接测试失败',
      saved: 'S3 配置已保存'
    },
    schedule: {
      title: '定时备份',
      description: '配置自动定时备份',
      enabled: '启用定时备份',
      cronExpr: 'Cron 表达式',
      cronHint: '例如 "0 2 * * *" 表示每天凌晨 2 点',
      retainDays: '备份过期天数',
      retainDaysHint: '备份文件超过此天数后自动删除,0 = 永不过期',
      retainCount: '最大保留份数',
      retainCountHint: '最多保留的备份数量,0 = 不限制',
      saved: '定时备份配置已保存'
    },
    operations: {
      title: '备份记录',
      description: '创建手动备份和管理已有备份记录',
      createBackup: '创建备份',
      backing: '备份中...',
      backupCreated: '备份创建成功',
      expireDays: '过期天数'
    },
    columns: {
      status: '状态',
      fileName: '文件名',
      size: '大小',
      expiresAt: '过期时间',
      triggeredBy: '触发方式',
      startedAt: '开始时间',
      actions: '操作'
    },
    status: {
      pending: '等待中',
      running: '执行中',
      completed: '已完成',
      failed: '失败'
    },
    trigger: {
      manual: '手动',
      scheduled: '定时'
    },
    neverExpire: '永不过期',
    empty: '暂无备份记录',
    actions: {
      download: '下载',
      restore: '恢复',
      restoreConfirm: '确定要从此备份恢复吗?这将覆盖当前数据库!',
      restorePasswordPrompt: '请输入管理员密码以确认恢复操作',
      restoreSuccess: '数据库恢复成功',
      deleteConfirm: '确定要删除此备份吗?',
      deleted: '备份已删除'
    },
    r2Guide: {
      title: 'Cloudflare R2 配置教程',
      intro: 'Cloudflare R2 提供 S3 兼容的对象存储,免费额度为 10GB 存储 + 每月 100 万次 A 类请求,非常适合数据库备份。',
      step1: {
        title: '创建 R2 存储桶',
        line1: '登录 Cloudflare Dashboard (dash.cloudflare.com),左侧菜单选择「R2 对象存储」',
        line2: '点击「创建存储桶」,输入名称(如 sub2api-backups),选择区域',
        line3: '点击创建完成'
      },
      step2: {
        title: '创建 API 令牌',
        line1: '在 R2 页面,点击右上角「管理 R2 API 令牌」',
        line2: '点击「创建 API 令牌」,权限选择「对象读和写」',
        line3: '建议指定存储桶范围(仅允许访问备份桶,更安全)',
        line4: '创建后会显示 Access Key ID 和 Secret Access Key',
        warning: 'Secret Access Key 只会显示一次,请立即复制保存!'
      },
      step3: {
        title: '获取 S3 端点地址',
        desc: '在 R2 概览页面找到你的账户 ID(在 URL 或右侧面板中),端点格式为:',
        accountId: '你的账户 ID'
      },
      step4: {
        title: '填写以下配置',
        checkEnabled: '勾选',
        bucketValue: '你创建的存储桶名称',
        fromStep2: '第 2 步获取的值',
        unchecked: '不勾选'
      },
      freeTier: 'R2 免费额度:10GB 存储 + 每月 100 万次 A 类请求 + 1000 万次 B 类请求,对数据库备份完全够用。'
    }
  },
  dataManagement: {
    title: '数据管理',
    description: '统一管理数据管理代理状态、对象存储配置和备份任务',
...