Compare commits

29 commits: 0537709576 ... snapshot-r

Commits (SHA1):

- 20af99b244
- 990c114531
- 0c8a9efecc
- 70d25e13b8
- 2bb64620d4
- 7543b3a850
- a558c97088
- 2de3c5f6ab
- 8ece52992b
- 03965e35fb
- ebaf718424
- cb923704db
- 5fdb56e498
- fc64391cfb
- f1448d512c
- 5021d46ba0
- 97659421b5
- 8677820864
- 0c461d0656
- 1eff047fb6
- 8e77130c62
- ec0ba85958
- 5e63ebc9fe
- 419fcb7625
- a5e6197bca
- a08514b4f2
- 8895e296b9
- c962a223c6
- 3aa0169af0
61 CHECK-BACKEND-LOGS.md Normal file
@@ -0,0 +1,61 @@

# How to Check Backend Logs

## Log File Location

Backend logs are written to: `/tmp/backend-api.log`

## Viewing Logs

### 1. View Logs in Real Time (Live)

```bash
tail -f /tmp/backend-api.log
```

### 2. View the Most Recent Logs (last 50 lines)

```bash
tail -50 /tmp/backend-api.log
```

### 3. Filter Logs for ZFS Pool Errors

```bash
tail -100 /tmp/backend-api.log | grep -i "zfs\|pool\|create\|error\|failed"
```
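As an offline sanity check, the filter above can be exercised against a sample log line (the JSON shape here is illustrative, not the backend's exact schema):

```bash
# A log line in the style of /tmp/backend-api.log (illustrative only)
sample='{"level":"error","msg":"failed to create zfs pool","pool":"tank"}'

# The same case-insensitive pattern used above; the line is printed
# because several of the alternatives ("zfs", "pool", "create", ...) match
echo "$sample" | grep -i "zfs\|pool\|create\|error\|failed"
```

Lines matching none of the keywords are dropped, so quiet output right after a failed pool creation usually means the backend never logged the attempt.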
### 4. View Logs as Pretty-Printed JSON

```bash
tail -50 /tmp/backend-api.log | jq '.'
```
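When only a couple of fields matter, `jq` can project them out instead of pretty-printing whole objects. The `level` and `msg` field names below are assumptions about the log schema; substitute whatever keys the backend actually emits:

```bash
# Print just the level and message of each JSON log line
# (field names "level" and "msg" are assumed, not confirmed)
echo '{"level":"error","msg":"pool create failed","ts":"2024-01-01T00:00:00Z"}' \
  | jq -r '"\(.level): \(.msg)"'
# prints: error: pool create failed
```

Against the real log this becomes `tail -50 /tmp/backend-api.log | jq -r '"\(.level): \(.msg)"'`.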
### 5. Monitor Logs in Real Time for ZFS Pool Creation

```bash
tail -f /tmp/backend-api.log | grep -i "zfs\|pool\|create"
```

## Restarting the Backend

The backend must be restarted after code changes so that new routes are loaded:

```bash
# 1. Find the backend process ID
ps aux | grep calypso-api | grep -v grep

# 2. Kill the process (replace <PID> with the process ID you found)
kill <PID>

# 3. Restart the backend
cd /development/calypso/backend
export CALYPSO_DB_PASSWORD="calypso123"
export CALYPSO_JWT_SECRET="test-jwt-secret-key-minimum-32-characters-long"
go run ./cmd/calypso-api -config config.yaml.example > /tmp/backend-api.log 2>&1 &

# 4. Check whether the backend is running
sleep 3
tail -20 /tmp/backend-api.log
```

## Issue Found

The logs show:

- **Status 404** for `POST /api/v1/storage/zfs/pools`
- The route exists in the code, but the backend has not been restarted
- **Solution**: restart the backend to load the new route
37 PERMISSIONS-FIX.md Normal file
@@ -0,0 +1,37 @@

# Permissions Fix - Admin User

## Issue

The admin user was getting 403 Forbidden errors when accessing API endpoints because the admin role didn't have any permissions assigned.

## Solution

All 18 permissions have been assigned to the admin role:

- `audit:read`
- `iam:manage`, `iam:read`, `iam:write`
- `iscsi:manage`, `iscsi:read`, `iscsi:write`
- `monitoring:read`, `monitoring:write`
- `storage:manage`, `storage:read`, `storage:write`
- `system:manage`, `system:read`, `system:write`
- `tape:manage`, `tape:read`, `tape:write`

## Action Required

**You need to log out and log back in** to refresh your authentication token with the updated permissions.

1. Click "Logout" in the sidebar
2. Log back in with:
   - Username: `admin`
   - Password: `admin123`
3. The dashboard should now load all data correctly
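To confirm that a freshly issued token actually carries the new permissions, you can decode its payload locally. This is a sketch: the `permissions` claim name is an assumption about how the backend builds its JWTs, and the demo token below is crafted locally rather than taken from a real login response:

```bash
# Decode the payload (second dot-separated segment) of a JWT.
# Assumes the backend stores the role's permissions in a "permissions" claim.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # Restore the base64 padding that JWT encoding strips
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Demo with a locally crafted token (header and signature are placeholders)
claims='{"permissions":["storage:read","iam:manage"]}'
token="hdr.$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '+/' '-_').sig"
decode_jwt_payload "$token" | jq -r '.permissions[]'
```

Run the same function on the token returned by a real login; all 18 entries from the list above should appear.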
## Verification

After logging back in, you should see:

- ✅ Metrics loading (CPU, RAM, Storage, etc.)
- ✅ Alerts loading
- ✅ Storage repositories loading
- ✅ No more 403 errors in the console

## Status

✅ **FIXED** - All permissions assigned to admin role
45 backend/Makefile Normal file
@@ -0,0 +1,45 @@

.PHONY: build run test test-coverage fmt lint deps clean install-deps build-linux

# Build the application
build:
	go build -o bin/calypso-api ./cmd/calypso-api

# Run the application locally
run:
	go run ./cmd/calypso-api -config config.yaml.example

# Run tests
test:
	go test ./...

# Run tests with coverage
test-coverage:
	go test -coverprofile=coverage.out ./...
	go tool cover -html=coverage.out

# Format code
fmt:
	go fmt ./...

# Lint code
lint:
	golangci-lint run ./...

# Download dependencies
deps:
	go mod download
	go mod tidy

# Clean build artifacts
clean:
	rm -rf bin/
	rm -f coverage.out

# Install development tools
install-deps:
	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

# Build for production (Linux)
build-linux:
	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -ldflags="-w -s" -o bin/calypso-api-linux ./cmd/calypso-api
149 backend/README.md Normal file
@@ -0,0 +1,149 @@

# AtlasOS - Calypso Backend

Enterprise-grade backup appliance platform backend API.

## Prerequisites

- Go 1.24 or later (matches the `go` directive in `go.mod`)
- PostgreSQL 14 or later
- Ubuntu Server 24.04 LTS

## Installation

1. Install system requirements:

```bash
sudo ./scripts/install-requirements.sh
```

2. Create the PostgreSQL database:

```bash
sudo -u postgres createdb calypso
sudo -u postgres createuser calypso
sudo -u postgres psql -c "ALTER USER calypso WITH PASSWORD 'your_password';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE calypso TO calypso;"
```

3. Install Go dependencies:

```bash
cd backend
go mod download
```

4. Configure the application:

```bash
sudo mkdir -p /etc/calypso
sudo cp config.yaml.example /etc/calypso/config.yaml
sudo nano /etc/calypso/config.yaml
```

Set environment variables:

```bash
export CALYPSO_DB_PASSWORD="your_database_password"
export CALYPSO_JWT_SECRET="your_jwt_secret_key_min_32_chars"
```
## Building

```bash
cd backend
go build -o bin/calypso-api ./cmd/calypso-api
```

## Running Locally

```bash
cd backend
export CALYPSO_DB_PASSWORD="your_password"
export CALYPSO_JWT_SECRET="your_jwt_secret"
go run ./cmd/calypso-api -config config.yaml.example
```

The API will be available at `http://localhost:8080`.
## API Endpoints

### Health Check
- `GET /api/v1/health` - System health status

### Authentication
- `POST /api/v1/auth/login` - User login
- `POST /api/v1/auth/logout` - User logout
- `GET /api/v1/auth/me` - Get current user info (requires auth)

### Tasks
- `GET /api/v1/tasks/{id}` - Get task status (requires auth)

### IAM (Admin only)
- `GET /api/v1/iam/users` - List users
- `GET /api/v1/iam/users/{id}` - Get user
- `POST /api/v1/iam/users` - Create user
- `PUT /api/v1/iam/users/{id}` - Update user
- `DELETE /api/v1/iam/users/{id}` - Delete user

## Database Migrations

Migrations are automatically run on startup. They are located in:
- `internal/common/database/migrations/`
## Project Structure

```
backend/
├── cmd/
│   └── calypso-api/        # Main application entry point
├── internal/
│   ├── auth/               # Authentication handlers
│   ├── iam/                # Identity and access management
│   ├── audit/              # Audit logging middleware
│   ├── tasks/              # Async task engine
│   ├── system/             # System management (future)
│   ├── monitoring/         # Monitoring (future)
│   └── common/             # Shared utilities
│       ├── config/         # Configuration management
│       ├── database/       # Database connection and migrations
│       ├── logger/         # Structured logging
│       └── router/         # HTTP router setup
├── db/
│   └── migrations/         # Database migration files
└── config.yaml.example     # Example configuration
```
## Development

### Running Tests

```bash
go test ./...
```

### Code Formatting

```bash
go fmt ./...
```

### Building for Production

```bash
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/calypso-api ./cmd/calypso-api
```
## Systemd Service

To install as a systemd service:

```bash
sudo cp deploy/systemd/calypso-api.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable calypso-api
sudo systemctl start calypso-api
```
## Security Notes

- The JWT secret must be a strong random string (minimum 32 characters)
- Database passwords should be set via environment variables, not config files
- The service runs as the non-root user `calypso`
- All mutating operations are audited
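A quick way to satisfy the 32-character minimum is to generate the secret from a cryptographically secure source, for example with `openssl` (a sketch; any equivalent generator works just as well):

```bash
# 48 random bytes, base64-encoded: a 64-character secret, well above the 32-char minimum
export CALYPSO_JWT_SECRET="$(openssl rand -base64 48)"
echo "${#CALYPSO_JWT_SECRET}"
# prints: 64
```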
## License

Proprietary - AtlasOS Calypso
BIN backend/bin/calypso-api Executable file (binary file not shown)
BIN backend/calypso-api Executable file (binary file not shown)
119 backend/cmd/calypso-api/main.go Normal file
@@ -0,0 +1,119 @@

package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/common/router"
	"golang.org/x/sync/errgroup"
)

var (
	version   = "dev"
	buildTime = "unknown"
	gitCommit = "unknown"
)

func main() {
	var (
		configPath  = flag.String("config", "/etc/calypso/config.yaml", "Path to configuration file")
		showVersion = flag.Bool("version", false, "Show version information")
	)
	flag.Parse()

	if *showVersion {
		fmt.Printf("AtlasOS - Calypso API\n")
		fmt.Printf("Version: %s\n", version)
		fmt.Printf("Build Time: %s\n", buildTime)
		fmt.Printf("Git Commit: %s\n", gitCommit)
		os.Exit(0)
	}

	// Initialize logger
	logger := logger.NewLogger("calypso-api")

	// Load configuration
	cfg, err := config.Load(*configPath)
	if err != nil {
		logger.Fatal("Failed to load configuration", "error", err)
	}

	// Initialize database
	db, err := database.NewConnection(cfg.Database)
	if err != nil {
		logger.Fatal("Failed to connect to database", "error", err)
	}
	defer db.Close()

	// Run migrations
	if err := database.RunMigrations(context.Background(), db); err != nil {
		logger.Fatal("Failed to run database migrations", "error", err)
	}
	logger.Info("Database migrations completed successfully")

	// Initialize router
	r := router.NewRouter(cfg, db, logger)

	// Create HTTP server
	// Note: WriteTimeout should be 0 for WebSocket connections (they handle their own timeouts)
	srv := &http.Server{
		Addr:         fmt.Sprintf(":%d", cfg.Server.Port),
		Handler:      r,
		ReadTimeout:  15 * time.Second,
		WriteTimeout: 0,                 // 0 means no timeout - needed for WebSocket connections
		IdleTimeout:  120 * time.Second, // Increased for WebSocket keep-alive
	}

	// Setup graceful shutdown
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	g, gCtx := errgroup.WithContext(ctx)

	// Start HTTP server
	g.Go(func() error {
		logger.Info("Starting HTTP server", "port", cfg.Server.Port, "address", srv.Addr)
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			return fmt.Errorf("server failed: %w", err)
		}
		return nil
	})

	// Graceful shutdown handler
	g.Go(func() error {
		sigChan := make(chan os.Signal, 1)
		signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
		select {
		case <-sigChan:
			logger.Info("Received shutdown signal, initiating graceful shutdown...")
			cancel()
		case <-gCtx.Done():
			return gCtx.Err()
		}

		shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer shutdownCancel()

		if err := srv.Shutdown(shutdownCtx); err != nil {
			return fmt.Errorf("server shutdown failed: %w", err)
		}
		logger.Info("HTTP server stopped gracefully")
		return nil
	})

	// Wait for all goroutines
	if err := g.Wait(); err != nil {
		log.Fatalf("Server error: %v", err)
	}
}
32 backend/cmd/hash-password/main.go Normal file
@@ -0,0 +1,32 @@

package main

import (
	"fmt"
	"os"

	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/password"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintf(os.Stderr, "Usage: %s <password>\n", os.Args[0])
		os.Exit(1)
	}

	pwd := os.Args[1]
	params := config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	}

	hash, err := password.HashPassword(pwd, params)
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}

	fmt.Println(hash)
}
47 backend/config.yaml.example Normal file
@@ -0,0 +1,47 @@

# AtlasOS - Calypso API Configuration
# Copy this file to /etc/calypso/config.yaml and customize

server:
  port: 8080
  host: "0.0.0.0"
  read_timeout: 15s
  write_timeout: 15s
  idle_timeout: 60s
  # Response caching configuration
  cache:
    enabled: true     # Enable response caching
    default_ttl: 5m   # Default cache TTL (5 minutes)
    max_age: 300      # Cache-Control max-age in seconds (5 minutes)

database:
  host: "localhost"
  port: 5432
  user: "calypso"
  password: ""        # Set via CALYPSO_DB_PASSWORD environment variable
  database: "calypso"
  ssl_mode: "disable"
  # Connection pool sizing:
  # max_connections: size to the expected number of concurrent queries;
  # 25-50 connections is enough for typical workloads
  max_connections: 25
  # max_idle_conns: keep some connections warm for faster responses
  # (roughly 20% of max_connections)
  max_idle_conns: 5
  # conn_max_lifetime: recycle connections to avoid stale ones;
  # 5 minutes works for most workloads
  conn_max_lifetime: 5m

auth:
  jwt_secret: ""      # Set via CALYPSO_JWT_SECRET environment variable (use a strong random string)
  token_lifetime: 24h
  argon2:
    memory: 65536     # 64 MB
    iterations: 3
    parallelism: 4
    salt_length: 16
    key_length: 32

logging:
  level: "info"       # debug, info, warn, error
  format: "json"      # json or text
50 backend/go.mod Normal file
@@ -0,0 +1,50 @@

module github.com/atlasos/calypso

go 1.24.0

toolchain go1.24.11

require (
	github.com/gin-gonic/gin v1.10.0
	github.com/golang-jwt/jwt/v5 v5.2.1
	github.com/google/uuid v1.6.0
	github.com/gorilla/websocket v1.5.3
	github.com/lib/pq v1.10.9
	github.com/stretchr/testify v1.11.1
	go.uber.org/zap v1.27.0
	golang.org/x/crypto v0.23.0
	golang.org/x/sync v0.7.0
	golang.org/x/time v0.14.0
	gopkg.in/yaml.v3 v3.0.1
)

require (
	github.com/bytedance/sonic v1.11.6 // indirect
	github.com/bytedance/sonic/loader v0.1.1 // indirect
	github.com/cloudwego/base64x v0.1.4 // indirect
	github.com/cloudwego/iasm v0.2.0 // indirect
	github.com/creack/pty v1.1.24 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/gabriel-vasile/mimetype v1.4.3 // indirect
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
	github.com/go-playground/validator/v10 v10.20.0 // indirect
	github.com/goccy/go-json v0.10.2 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/cpuid/v2 v2.2.7 // indirect
	github.com/leodido/go-urn v1.4.0 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/pelletier/go-toml/v2 v2.2.2 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
	github.com/ugorji/go/codec v1.2.12 // indirect
	go.uber.org/multierr v1.10.0 // indirect
	golang.org/x/arch v0.8.0 // indirect
	golang.org/x/net v0.25.0 // indirect
	golang.org/x/sys v0.20.0 // indirect
	golang.org/x/text v0.15.0 // indirect
	google.golang.org/protobuf v1.34.1 // indirect
)
110 backend/go.sum Normal file
@@ -0,0 +1,110 @@

github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc0=
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg=
github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
nullprogram.com/x/optparse v1.0.0/go.mod h1:KdyPE+Igbe0jQUrVfMqDMeJQIJZEuyV7pjYmp6pbG50=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
148
backend/internal/audit/middleware.go
Normal file
148
backend/internal/audit/middleware.go
Normal file
@@ -0,0 +1,148 @@
|
||||
package audit
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"strings"
|
||||
|
||||
"github.com/atlasos/calypso/internal/common/database"
|
||||
"github.com/atlasos/calypso/internal/common/logger"
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
// Middleware provides audit logging functionality
|
||||
type Middleware struct {
|
||||
db *database.DB
|
||||
logger *logger.Logger
|
||||
}
|
||||
|
||||
// NewMiddleware creates a new audit middleware
|
||||
func NewMiddleware(db *database.DB, log *logger.Logger) *Middleware {
|
||||
return &Middleware{
|
||||
db: db,
|
||||
logger: log,
|
||||
}
|
||||
}
|
||||
|
||||
// LogRequest creates middleware that logs all mutating requests
|
||||
func (m *Middleware) LogRequest() gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
// Only log mutating methods
|
||||
		method := c.Request.Method
		if method == "GET" || method == "HEAD" || method == "OPTIONS" {
			c.Next()
			return
		}

		// Capture request body
		var bodyBytes []byte
		if c.Request.Body != nil {
			bodyBytes, _ = io.ReadAll(c.Request.Body)
			c.Request.Body = io.NopCloser(bytes.NewBuffer(bodyBytes))
		}

		// Process request
		c.Next()

		// Get user information
		userID, _ := c.Get("user_id")
		username, _ := c.Get("username")

		// Capture response status
		status := c.Writer.Status()

		// Log to database
		go m.logAuditEvent(
			userID,
			username,
			method,
			c.Request.URL.Path,
			c.ClientIP(),
			c.GetHeader("User-Agent"),
			bodyBytes,
			status,
		)
	}
}

// logAuditEvent logs an audit event to the database
func (m *Middleware) logAuditEvent(
	userID interface{},
	username interface{},
	method, path, ipAddress, userAgent string,
	requestBody []byte,
	responseStatus int,
) {
	var userIDStr, usernameStr string
	if userID != nil {
		userIDStr, _ = userID.(string)
	}
	if username != nil {
		usernameStr, _ = username.(string)
	}

	// Determine action and resource from path
	action, resourceType, resourceID := parsePath(path)
	// Override action with HTTP method
	action = strings.ToLower(method)

	// Truncate request body if too large
	bodyJSON := string(requestBody)
	if len(bodyJSON) > 10000 {
		bodyJSON = bodyJSON[:10000] + "... (truncated)"
	}

	query := `
		INSERT INTO audit_log (
			user_id, username, action, resource_type, resource_id,
			method, path, ip_address, user_agent,
			request_body, response_status, created_at
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, NOW())
	`

	var bodyJSONPtr *string
	if len(bodyJSON) > 0 {
		bodyJSONPtr = &bodyJSON
	}

	_, err := m.db.Exec(query,
		userIDStr, usernameStr, action, resourceType, resourceID,
		method, path, ipAddress, userAgent,
		bodyJSONPtr, responseStatus,
	)
	if err != nil {
		m.logger.Error("Failed to log audit event", "error", err)
	}
}

// parsePath extracts action, resource type, and resource ID from a path
func parsePath(path string) (action, resourceType, resourceID string) {
	// Example: /api/v1/iam/users/123 -> action=update, resourceType=user, resourceID=123
	if len(path) < 8 || path[:8] != "/api/v1/" {
		return "unknown", "unknown", ""
	}

	remaining := path[8:]
	parts := strings.Split(remaining, "/")
	if len(parts) == 0 {
		return "unknown", "unknown", ""
	}

	// First part is usually the resource type (e.g., "iam", "tasks")
	resourceType = parts[0]

	// Determine action from HTTP method (will be set by caller)
	action = "unknown"

	// Last part might be resource ID if it's a UUID or number
	if len(parts) > 1 {
		lastPart := parts[len(parts)-1]
		// Check if it looks like a UUID or ID
		if len(lastPart) > 10 {
			resourceID = lastPart
		}
	}

	return action, resourceType, resourceID
}
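The path-parsing heuristic above is easy to exercise in isolation. A minimal standalone sketch (the function body is copied from `parsePath` above so the snippet runs on its own; the sample paths are illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// parsePath mirrors the heuristic above: strip the /api/v1/ prefix,
// treat the first segment as the resource type, and treat a long
// trailing segment as a resource ID.
func parsePath(path string) (action, resourceType, resourceID string) {
	if len(path) < 8 || path[:8] != "/api/v1/" {
		return "unknown", "unknown", ""
	}
	parts := strings.Split(path[8:], "/")
	if len(parts) == 0 {
		return "unknown", "unknown", ""
	}
	resourceType = parts[0]
	action = "unknown" // overwritten by the caller with the HTTP method
	if len(parts) > 1 {
		last := parts[len(parts)-1]
		if len(last) > 10 { // looks like a UUID or long ID
			resourceID = last
		}
	}
	return action, resourceType, resourceID
}

func main() {
	_, rt, id := parsePath("/api/v1/iam/users/3f2504e0-4f89-11d3-9a0c-0305e82c3301")
	fmt.Println(rt, id)
	_, rt, id = parsePath("/api/v1/tasks")
	fmt.Printf("%s %q\n", rt, id)
}
```

Note the heuristic only records an ID when the last segment is longer than 10 characters, so short numeric IDs like `/users/123` are not captured.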
259 backend/internal/auth/handler.go Normal file
@@ -0,0 +1,259 @@
package auth

import (
	"net/http"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/common/password"
	"github.com/atlasos/calypso/internal/iam"
	"github.com/gin-gonic/gin"
	"github.com/golang-jwt/jwt/v5"
)

// Handler handles authentication requests
type Handler struct {
	db     *database.DB
	config *config.Config
	logger *logger.Logger
}

// NewHandler creates a new auth handler
func NewHandler(db *database.DB, cfg *config.Config, log *logger.Logger) *Handler {
	return &Handler{
		db:     db,
		config: cfg,
		logger: log,
	}
}

// LoginRequest represents a login request
type LoginRequest struct {
	Username string `json:"username" binding:"required"`
	Password string `json:"password" binding:"required"`
}

// LoginResponse represents a login response
type LoginResponse struct {
	Token     string    `json:"token"`
	ExpiresAt time.Time `json:"expires_at"`
	User      UserInfo  `json:"user"`
}

// UserInfo represents user information in auth responses
type UserInfo struct {
	ID       string   `json:"id"`
	Username string   `json:"username"`
	Email    string   `json:"email"`
	FullName string   `json:"full_name"`
	Roles    []string `json:"roles"`
}

// Login handles user login
func (h *Handler) Login(c *gin.Context) {
	var req LoginRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	// Get user from database
	user, err := iam.GetUserByUsername(h.db, req.Username)
	if err != nil {
		h.logger.Warn("Login attempt failed", "username", req.Username, "error", "user not found")
		c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid credentials"})
		return
	}

	// Check if user is active
	if !user.IsActive {
		c.JSON(http.StatusForbidden, gin.H{"error": "account is disabled"})
		return
	}

	// Verify password
	if !h.verifyPassword(req.Password, user.PasswordHash) {
		h.logger.Warn("Login attempt failed", "username", req.Username, "error", "invalid password")
		c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid credentials"})
		return
	}

	// Generate JWT token
	token, expiresAt, err := h.generateToken(user)
	if err != nil {
		h.logger.Error("Failed to generate token", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to generate token"})
		return
	}

	// Create session
	if err := h.createSession(user.ID, token, c.ClientIP(), c.GetHeader("User-Agent"), expiresAt); err != nil {
		h.logger.Error("Failed to create session", "error", err)
		// Continue anyway, token is still valid
	}

	// Update last login
	if err := h.updateLastLogin(user.ID); err != nil {
		h.logger.Warn("Failed to update last login", "error", err)
	}

	// Get user roles
	roles, err := iam.GetUserRoles(h.db, user.ID)
	if err != nil {
		h.logger.Warn("Failed to get user roles", "error", err)
		roles = []string{}
	}

	h.logger.Info("User logged in successfully", "username", req.Username, "user_id", user.ID)

	c.JSON(http.StatusOK, LoginResponse{
		Token:     token,
		ExpiresAt: expiresAt,
		User: UserInfo{
			ID:       user.ID,
			Username: user.Username,
			Email:    user.Email,
			FullName: user.FullName,
			Roles:    roles,
		},
	})
}

// Logout handles user logout
func (h *Handler) Logout(c *gin.Context) {
	// Extract token
	authHeader := c.GetHeader("Authorization")
	if authHeader != "" {
		parts := strings.SplitN(authHeader, " ", 2)
		if len(parts) == 2 && parts[0] == "Bearer" {
			// Invalidate session (token hash would be stored)
			// For now, just return success
		}
	}

	c.JSON(http.StatusOK, gin.H{"message": "logged out successfully"})
}

// Me returns current user information
func (h *Handler) Me(c *gin.Context) {
	user, exists := c.Get("user")
	if !exists {
		c.JSON(http.StatusUnauthorized, gin.H{"error": "authentication required"})
		return
	}

	authUser, ok := user.(*iam.User)
	if !ok {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "invalid user context"})
		return
	}

	roles, err := iam.GetUserRoles(h.db, authUser.ID)
	if err != nil {
		h.logger.Warn("Failed to get user roles", "error", err)
		roles = []string{}
	}

	c.JSON(http.StatusOK, UserInfo{
		ID:       authUser.ID,
		Username: authUser.Username,
		Email:    authUser.Email,
		FullName: authUser.FullName,
		Roles:    roles,
	})
}

// ValidateToken validates a JWT token and returns the user
func (h *Handler) ValidateToken(tokenString string) (*iam.User, error) {
	token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, jwt.ErrSignatureInvalid
		}
		return []byte(h.config.Auth.JWTSecret), nil
	})

	if err != nil {
		return nil, err
	}

	if !token.Valid {
		return nil, jwt.ErrSignatureInvalid
	}

	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok {
		return nil, jwt.ErrInvalidKey
	}

	userID, ok := claims["user_id"].(string)
	if !ok {
		return nil, jwt.ErrInvalidKey
	}

	// Get user from database
	user, err := iam.GetUserByID(h.db, userID)
	if err != nil {
		return nil, err
	}

	if !user.IsActive {
		return nil, jwt.ErrInvalidKey
	}

	return user, nil
}

// verifyPassword verifies a password against an Argon2id hash
func (h *Handler) verifyPassword(pwd, hash string) bool {
	valid, err := password.VerifyPassword(pwd, hash)
	if err != nil {
		h.logger.Warn("Password verification error", "error", err)
		return false
	}
	return valid
}

// generateToken generates a JWT token for a user
func (h *Handler) generateToken(user *iam.User) (string, time.Time, error) {
	expiresAt := time.Now().Add(h.config.Auth.TokenLifetime)

	claims := jwt.MapClaims{
		"user_id":  user.ID,
		"username": user.Username,
		"exp":      expiresAt.Unix(),
		"iat":      time.Now().Unix(),
	}

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	tokenString, err := token.SignedString([]byte(h.config.Auth.JWTSecret))
	if err != nil {
		return "", time.Time{}, err
	}

	return tokenString, expiresAt, nil
}

// createSession creates a session record in the database
func (h *Handler) createSession(userID, token, ipAddress, userAgent string, expiresAt time.Time) error {
	// Hash the token for storage using SHA-256
	tokenHash := HashToken(token)

	query := `
		INSERT INTO sessions (user_id, token_hash, ip_address, user_agent, expires_at)
		VALUES ($1, $2, $3, $4, $5)
	`
	_, err := h.db.Exec(query, userID, tokenHash, ipAddress, userAgent, expiresAt)
	return err
}

// updateLastLogin updates the user's last login timestamp
func (h *Handler) updateLastLogin(userID string) error {
	query := `UPDATE users SET last_login_at = NOW() WHERE id = $1`
	_, err := h.db.Exec(query, userID)
	return err
}
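generateToken above delegates signing to golang-jwt's `SigningMethodHS256`. Under the hood, HS256 is simply HMAC-SHA256 over `base64url(header) + "." + base64url(claims)`. A stdlib-only sketch of that signing step (this is an illustration of the mechanism, not the library's actual code; the header/claims/secret values are made up):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// signHS256 shows what jwt.SigningMethodHS256 computes: an HMAC-SHA256
// over "<header>.<payload>" (both base64url-encoded without padding),
// keyed with the shared secret, appended as a third dot-separated part.
func signHS256(headerJSON, claimsJSON, secret string) string {
	enc := base64.RawURLEncoding
	signingInput := enc.EncodeToString([]byte(headerJSON)) + "." + enc.EncodeToString([]byte(claimsJSON))
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(signingInput))
	return signingInput + "." + enc.EncodeToString(mac.Sum(nil))
}

func main() {
	// Hypothetical claims; a real token also carries exp/iat as above.
	token := signHS256(`{"alg":"HS256","typ":"JWT"}`, `{"user_id":"42"}`, "dev-secret")
	fmt.Println(token)
}
```

Because the signature depends only on the input and the secret, anyone holding `JWTSecret` can both mint and verify tokens, which is why the secret must never be exposed to clients.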
107 backend/internal/auth/password.go Normal file
@@ -0,0 +1,107 @@
package auth

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"errors"
	"fmt"
	"strings"

	"github.com/atlasos/calypso/internal/common/config"
	"golang.org/x/crypto/argon2"
)

// HashPassword hashes a password using Argon2id
func HashPassword(password string, params config.Argon2Params) (string, error) {
	// Generate a random salt
	salt := make([]byte, params.SaltLength)
	if _, err := rand.Read(salt); err != nil {
		return "", fmt.Errorf("failed to generate salt: %w", err)
	}

	// Hash the password
	hash := argon2.IDKey(
		[]byte(password),
		salt,
		params.Iterations,
		params.Memory,
		params.Parallelism,
		params.KeyLength,
	)

	// Encode the hash and salt in the standard format
	// Format: $argon2id$v=<version>$m=<memory>,t=<iterations>,p=<parallelism>$<salt>$<hash>
	b64Salt := base64.RawStdEncoding.EncodeToString(salt)
	b64Hash := base64.RawStdEncoding.EncodeToString(hash)

	encodedHash := fmt.Sprintf(
		"$argon2id$v=%d$m=%d,t=%d,p=%d$%s$%s",
		argon2.Version,
		params.Memory,
		params.Iterations,
		params.Parallelism,
		b64Salt,
		b64Hash,
	)

	return encodedHash, nil
}

// VerifyPassword verifies a password against an Argon2id hash
func VerifyPassword(password, encodedHash string) (bool, error) {
	// Parse the encoded hash
	parts := strings.Split(encodedHash, "$")
	if len(parts) != 6 {
		return false, errors.New("invalid hash format")
	}

	if parts[1] != "argon2id" {
		return false, errors.New("unsupported hash algorithm")
	}

	// Parse version
	var version int
	if _, err := fmt.Sscanf(parts[2], "v=%d", &version); err != nil {
		return false, fmt.Errorf("failed to parse version: %w", err)
	}
	if version != argon2.Version {
		return false, errors.New("incompatible version")
	}

	// Parse parameters
	var memory, iterations uint32
	var parallelism uint8
	if _, err := fmt.Sscanf(parts[3], "m=%d,t=%d,p=%d", &memory, &iterations, &parallelism); err != nil {
		return false, fmt.Errorf("failed to parse parameters: %w", err)
	}

	// Decode salt and hash
	salt, err := base64.RawStdEncoding.DecodeString(parts[4])
	if err != nil {
		return false, fmt.Errorf("failed to decode salt: %w", err)
	}

	hash, err := base64.RawStdEncoding.DecodeString(parts[5])
	if err != nil {
		return false, fmt.Errorf("failed to decode hash: %w", err)
	}

	// Compute the hash of the provided password
	otherHash := argon2.IDKey(
		[]byte(password),
		salt,
		iterations,
		memory,
		parallelism,
		uint32(len(hash)),
	)

	// Compare hashes using constant-time comparison
	if subtle.ConstantTimeCompare(hash, otherHash) == 1 {
		return true, nil
	}

	return false, nil
}
20 backend/internal/auth/token.go Normal file
@@ -0,0 +1,20 @@
package auth

import (
	"crypto/sha256"
	"encoding/hex"
)

// HashToken creates a cryptographic hash of the token for storage
// Uses SHA-256 to hash the token before storing in the database
func HashToken(token string) string {
	hash := sha256.Sum256([]byte(token))
	return hex.EncodeToString(hash[:])
}

// VerifyTokenHash verifies if a token matches a stored hash
func VerifyTokenHash(token, storedHash string) bool {
	computedHash := HashToken(token)
	return computedHash == storedHash
}
70 backend/internal/auth/token_test.go Normal file
@@ -0,0 +1,70 @@
package auth

import (
	"testing"
)

func TestHashToken(t *testing.T) {
	token := "test-jwt-token-string-12345"
	hash := HashToken(token)

	// Verify hash is not empty
	if hash == "" {
		t.Error("HashToken returned empty string")
	}

	// Verify hash length (SHA-256 produces 64 hex characters)
	if len(hash) != 64 {
		t.Errorf("HashToken returned hash of length %d, expected 64", len(hash))
	}

	// Verify hash is deterministic (same token produces same hash)
	hash2 := HashToken(token)
	if hash != hash2 {
		t.Error("HashToken returned different hashes for same token")
	}
}

func TestHashToken_DifferentTokens(t *testing.T) {
	token1 := "token1"
	token2 := "token2"

	hash1 := HashToken(token1)
	hash2 := HashToken(token2)

	// Different tokens should produce different hashes
	if hash1 == hash2 {
		t.Error("Different tokens produced same hash")
	}
}

func TestVerifyTokenHash(t *testing.T) {
	token := "test-jwt-token-string-12345"
	storedHash := HashToken(token)

	// Test correct token
	if !VerifyTokenHash(token, storedHash) {
		t.Error("VerifyTokenHash returned false for correct token")
	}

	// Test wrong token
	if VerifyTokenHash("wrong-token", storedHash) {
		t.Error("VerifyTokenHash returned true for wrong token")
	}

	// Test empty token
	if VerifyTokenHash("", storedHash) {
		t.Error("VerifyTokenHash returned true for empty token")
	}
}

func TestHashToken_EmptyToken(t *testing.T) {
	hash := HashToken("")
	if hash == "" {
		t.Error("HashToken should return hash even for empty token")
	}
	if len(hash) != 64 {
		t.Errorf("HashToken returned hash of length %d for empty token, expected 64", len(hash))
	}
}
383 backend/internal/backup/handler.go Normal file
@@ -0,0 +1,383 @@
package backup

import (
	"fmt"
	"net/http"

	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/gin-gonic/gin"
)

// Handler handles backup-related API requests
type Handler struct {
	service *Service
	logger  *logger.Logger
}

// NewHandler creates a new backup handler
func NewHandler(service *Service, log *logger.Logger) *Handler {
	return &Handler{
		service: service,
		logger:  log,
	}
}

// ListJobs lists backup jobs with optional filters
func (h *Handler) ListJobs(c *gin.Context) {
	opts := ListJobsOptions{
		Status:     c.Query("status"),
		JobType:    c.Query("job_type"),
		ClientName: c.Query("client_name"),
		JobName:    c.Query("job_name"),
	}

	// Parse pagination
	var limit, offset int
	if limitStr := c.Query("limit"); limitStr != "" {
		if _, err := fmt.Sscanf(limitStr, "%d", &limit); err == nil {
			opts.Limit = limit
		}
	}
	if offsetStr := c.Query("offset"); offsetStr != "" {
		if _, err := fmt.Sscanf(offsetStr, "%d", &offset); err == nil {
			opts.Offset = offset
		}
	}

	jobs, totalCount, err := h.service.ListJobs(c.Request.Context(), opts)
	if err != nil {
		h.logger.Error("Failed to list jobs", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list jobs"})
		return
	}

	if jobs == nil {
		jobs = []Job{}
	}

	c.JSON(http.StatusOK, gin.H{
		"jobs":   jobs,
		"total":  totalCount,
		"limit":  opts.Limit,
		"offset": opts.Offset,
	})
}

// GetJob retrieves a job by ID
func (h *Handler) GetJob(c *gin.Context) {
	id := c.Param("id")

	job, err := h.service.GetJob(c.Request.Context(), id)
	if err != nil {
		if err.Error() == "job not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "job not found"})
			return
		}
		h.logger.Error("Failed to get job", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get job"})
		return
	}

	c.JSON(http.StatusOK, job)
}

// CreateJob creates a new backup job
func (h *Handler) CreateJob(c *gin.Context) {
	var req CreateJobRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	// Validate job type
	validJobTypes := map[string]bool{
		"Backup": true, "Restore": true, "Verify": true, "Copy": true, "Migrate": true,
	}
	if !validJobTypes[req.JobType] {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid job_type"})
		return
	}

	// Validate job level
	validJobLevels := map[string]bool{
		"Full": true, "Incremental": true, "Differential": true, "Since": true,
	}
	if !validJobLevels[req.JobLevel] {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid job_level"})
		return
	}

	job, err := h.service.CreateJob(c.Request.Context(), req)
	if err != nil {
		h.logger.Error("Failed to create job", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create job"})
		return
	}

	c.JSON(http.StatusCreated, job)
}

// ExecuteBconsoleCommand executes a bconsole command
func (h *Handler) ExecuteBconsoleCommand(c *gin.Context) {
	var req struct {
		Command string `json:"command" binding:"required"`
	}
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "command is required"})
		return
	}

	output, err := h.service.ExecuteBconsoleCommand(c.Request.Context(), req.Command)
	if err != nil {
		h.logger.Error("Failed to execute bconsole command", "error", err, "command", req.Command)
		c.JSON(http.StatusInternalServerError, gin.H{
			"error":   "failed to execute command",
			"output":  output,
			"details": err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"output": output,
	})
}

// ListClients lists all backup clients with optional filters
func (h *Handler) ListClients(c *gin.Context) {
	opts := ListClientsOptions{}

	// Parse enabled filter
	if enabledStr := c.Query("enabled"); enabledStr != "" {
		enabled := enabledStr == "true"
		opts.Enabled = &enabled
	}

	// Parse search query
	opts.Search = c.Query("search")

	clients, err := h.service.ListClients(c.Request.Context(), opts)
	if err != nil {
		h.logger.Error("Failed to list clients", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"error":   "failed to list clients",
			"details": err.Error(),
		})
		return
	}

	if clients == nil {
		clients = []Client{}
	}

	c.JSON(http.StatusOK, gin.H{
		"clients": clients,
		"total":   len(clients),
	})
}

// GetDashboardStats returns dashboard statistics
func (h *Handler) GetDashboardStats(c *gin.Context) {
	stats, err := h.service.GetDashboardStats(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to get dashboard stats", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get dashboard stats"})
		return
	}

	c.JSON(http.StatusOK, stats)
}

// ListStoragePools lists all storage pools
func (h *Handler) ListStoragePools(c *gin.Context) {
	pools, err := h.service.ListStoragePools(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list storage pools", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage pools"})
		return
	}

	if pools == nil {
		pools = []StoragePool{}
	}

	h.logger.Info("Listed storage pools", "count", len(pools))
	c.JSON(http.StatusOK, gin.H{
		"pools": pools,
		"total": len(pools),
	})
}

// ListStorageVolumes lists all storage volumes
func (h *Handler) ListStorageVolumes(c *gin.Context) {
	poolName := c.Query("pool_name")

	volumes, err := h.service.ListStorageVolumes(c.Request.Context(), poolName)
	if err != nil {
		h.logger.Error("Failed to list storage volumes", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage volumes"})
		return
	}

	if volumes == nil {
		volumes = []StorageVolume{}
	}

	c.JSON(http.StatusOK, gin.H{
		"volumes": volumes,
		"total":   len(volumes),
	})
}

// ListStorageDaemons lists all storage daemons
func (h *Handler) ListStorageDaemons(c *gin.Context) {
	daemons, err := h.service.ListStorageDaemons(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list storage daemons", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage daemons"})
		return
	}

	if daemons == nil {
		daemons = []StorageDaemon{}
	}

	c.JSON(http.StatusOK, gin.H{
		"daemons": daemons,
		"total":   len(daemons),
	})
}

// CreateStoragePool creates a new storage pool
func (h *Handler) CreateStoragePool(c *gin.Context) {
	var req CreatePoolRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	pool, err := h.service.CreateStoragePool(c.Request.Context(), req)
	if err != nil {
		h.logger.Error("Failed to create storage pool", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, pool)
}

// DeleteStoragePool deletes a storage pool
func (h *Handler) DeleteStoragePool(c *gin.Context) {
	idStr := c.Param("id")
	if idStr == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "pool ID is required"})
		return
	}

	var poolID int
	if _, err := fmt.Sscanf(idStr, "%d", &poolID); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pool ID"})
		return
	}

	err := h.service.DeleteStoragePool(c.Request.Context(), poolID)
	if err != nil {
		h.logger.Error("Failed to delete storage pool", "error", err, "pool_id", poolID)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "pool deleted successfully"})
}

// CreateStorageVolume creates a new storage volume
func (h *Handler) CreateStorageVolume(c *gin.Context) {
	var req CreateVolumeRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	volume, err := h.service.CreateStorageVolume(c.Request.Context(), req)
	if err != nil {
		h.logger.Error("Failed to create storage volume", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, volume)
}

// UpdateStorageVolume updates a storage volume
func (h *Handler) UpdateStorageVolume(c *gin.Context) {
	idStr := c.Param("id")
	if idStr == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "volume ID is required"})
		return
	}

	var volumeID int
	if _, err := fmt.Sscanf(idStr, "%d", &volumeID); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid volume ID"})
		return
	}

	var req UpdateVolumeRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	volume, err := h.service.UpdateStorageVolume(c.Request.Context(), volumeID, req)
	if err != nil {
		h.logger.Error("Failed to update storage volume", "error", err, "volume_id", volumeID)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, volume)
}

// DeleteStorageVolume deletes a storage volume
func (h *Handler) DeleteStorageVolume(c *gin.Context) {
	idStr := c.Param("id")
	if idStr == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "volume ID is required"})
		return
	}

	var volumeID int
	if _, err := fmt.Sscanf(idStr, "%d", &volumeID); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid volume ID"})
		return
	}

	err := h.service.DeleteStorageVolume(c.Request.Context(), volumeID)
	if err != nil {
		h.logger.Error("Failed to delete storage volume", "error", err, "volume_id", volumeID)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "volume deleted successfully"})
}

// ListMedia lists all media from bconsole "list media" command
func (h *Handler) ListMedia(c *gin.Context) {
	media, err := h.service.ListMedia(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list media", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	if media == nil {
		media = []Media{}
	}

	h.logger.Info("Listed media", "count", len(media))
	c.JSON(http.StatusOK, gin.H{
		"media": media,
		"total": len(media),
	})
}
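The handlers above parse numeric IDs and pagination parameters with `fmt.Sscanf("%d")`. One behavioural detail worth knowing: `Sscanf` stops once the format is satisfied, so input with trailing garbage (e.g. `"10abc"`) still parses successfully, whereas `strconv.Atoi` rejects it. A small sketch contrasting the two (illustrative only, not part of the handlers):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseLimit contrasts the Sscanf-based parsing used above with
// strconv.Atoi. Sscanf("%d") accepts "10abc" (it reads 10 and ignores
// the rest), while Atoi rejects any non-numeric suffix.
func parseLimit(s string) (sscanfVal int, sscanfOK bool, atoiVal int, atoiOK bool) {
	if _, err := fmt.Sscanf(s, "%d", &sscanfVal); err == nil {
		sscanfOK = true
	}
	if v, err := strconv.Atoi(s); err == nil {
		atoiVal, atoiOK = v, true
	}
	return
}

func main() {
	fmt.Println(parseLimit("25"))    // both parsers succeed
	fmt.Println(parseLimit("10abc")) // Sscanf succeeds, Atoi fails
}
```

For strict validation of user-supplied IDs, `strconv.Atoi` (or `strconv.ParseInt`) is the more common idiom; the Sscanf form is more permissive.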
2322 backend/internal/backup/service.go Normal file
File diff suppressed because it is too large
143 backend/internal/common/cache/cache.go vendored Normal file
@@ -0,0 +1,143 @@
package cache

import (
	"crypto/sha256"
	"encoding/hex"
	"sync"
	"time"
)

// CacheEntry represents a cached value with expiration
type CacheEntry struct {
	Value     interface{}
	ExpiresAt time.Time
	CreatedAt time.Time
}

// IsExpired checks if the cache entry has expired
func (e *CacheEntry) IsExpired() bool {
	return time.Now().After(e.ExpiresAt)
}

// Cache provides an in-memory cache with TTL support
type Cache struct {
	entries map[string]*CacheEntry
	mu      sync.RWMutex
	ttl     time.Duration
}

// NewCache creates a new cache with a default TTL
func NewCache(defaultTTL time.Duration) *Cache {
	c := &Cache{
		entries: make(map[string]*CacheEntry),
		ttl:     defaultTTL,
	}

	// Start background cleanup goroutine
	go c.cleanup()

	return c
}

// Get retrieves a value from the cache
func (c *Cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()

	entry, exists := c.entries[key]
	if !exists {
		return nil, false
	}

	if entry.IsExpired() {
		// Don't delete here, let cleanup handle it
		return nil, false
	}

	return entry.Value, true
}

// Set stores a value in the cache with the default TTL
func (c *Cache) Set(key string, value interface{}) {
	c.SetWithTTL(key, value, c.ttl)
}

// SetWithTTL stores a value in the cache with a custom TTL
func (c *Cache) SetWithTTL(key string, value interface{}, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()

	c.entries[key] = &CacheEntry{
		Value:     value,
		ExpiresAt: time.Now().Add(ttl),
		CreatedAt: time.Now(),
	}
}

// Delete removes a value from the cache
func (c *Cache) Delete(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.entries, key)
}

// Clear removes all entries from the cache
func (c *Cache) Clear() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries = make(map[string]*CacheEntry)
}

// cleanup periodically removes expired entries
func (c *Cache) cleanup() {
	ticker := time.NewTicker(1 * time.Minute)
	defer ticker.Stop()

	for range ticker.C {
		c.mu.Lock()
		for key, entry := range c.entries {
			if entry.IsExpired() {
				delete(c.entries, key)
			}
		}
		c.mu.Unlock()
	}
}

// Stats returns cache statistics
func (c *Cache) Stats() map[string]interface{} {
	c.mu.RLock()
	defer c.mu.RUnlock()

	total := len(c.entries)
	expired := 0
	for _, entry := range c.entries {
		if entry.IsExpired() {
			expired++
		}
	}

	return map[string]interface{}{
		"total_entries":       total,
		"active_entries":      total - expired,
		"expired_entries":     expired,
		"default_ttl_seconds": int(c.ttl.Seconds()),
	}
}

// GenerateKey generates a cache key from a string
func GenerateKey(prefix string, parts ...string) string {
	key := prefix
	for _, part := range parts {
		key += ":" + part
	}

	// Hash long keys to keep them manageable
	if len(key) > 200 {
		hash := sha256.Sum256([]byte(key))
		return prefix + ":" + hex.EncodeToString(hash[:])
	}

	return key
}
backend/internal/common/config/config.go (209 lines, new file)
@@ -0,0 +1,209 @@
```go
package config

import (
	"fmt"
	"os"
	"time"

	"gopkg.in/yaml.v3"
)

// Config represents the application configuration
type Config struct {
	Server   ServerConfig   `yaml:"server"`
	Database DatabaseConfig `yaml:"database"`
	Auth     AuthConfig     `yaml:"auth"`
	Logging  LoggingConfig  `yaml:"logging"`
	Security SecurityConfig `yaml:"security"`
}

// ServerConfig holds HTTP server configuration
type ServerConfig struct {
	Port         int           `yaml:"port"`
	Host         string        `yaml:"host"`
	ReadTimeout  time.Duration `yaml:"read_timeout"`
	WriteTimeout time.Duration `yaml:"write_timeout"`
	IdleTimeout  time.Duration `yaml:"idle_timeout"`
	Cache        CacheConfig   `yaml:"cache"`
}

// CacheConfig holds response caching configuration
type CacheConfig struct {
	Enabled    bool          `yaml:"enabled"`
	DefaultTTL time.Duration `yaml:"default_ttl"`
	MaxAge     int           `yaml:"max_age"` // seconds for the Cache-Control header
}

// DatabaseConfig holds PostgreSQL connection configuration
type DatabaseConfig struct {
	Host            string        `yaml:"host"`
	Port            int           `yaml:"port"`
	User            string        `yaml:"user"`
	Password        string        `yaml:"password"`
	Database        string        `yaml:"database"`
	SSLMode         string        `yaml:"ssl_mode"`
	MaxConnections  int           `yaml:"max_connections"`
	MaxIdleConns    int           `yaml:"max_idle_conns"`
	ConnMaxLifetime time.Duration `yaml:"conn_max_lifetime"`
}

// AuthConfig holds authentication configuration
type AuthConfig struct {
	JWTSecret     string        `yaml:"jwt_secret"`
	TokenLifetime time.Duration `yaml:"token_lifetime"`
	Argon2Params  Argon2Params  `yaml:"argon2"`
}

// Argon2Params holds Argon2id password hashing parameters
type Argon2Params struct {
	Memory      uint32 `yaml:"memory"`
	Iterations  uint32 `yaml:"iterations"`
	Parallelism uint8  `yaml:"parallelism"`
	SaltLength  uint32 `yaml:"salt_length"`
	KeyLength   uint32 `yaml:"key_length"`
}

// LoggingConfig holds logging configuration
type LoggingConfig struct {
	Level  string `yaml:"level"`
	Format string `yaml:"format"` // json or text
}

// SecurityConfig holds security-related configuration
type SecurityConfig struct {
	CORS            CORSConfig            `yaml:"cors"`
	RateLimit       RateLimitConfig       `yaml:"rate_limit"`
	SecurityHeaders SecurityHeadersConfig `yaml:"security_headers"`
}

// CORSConfig holds CORS configuration
type CORSConfig struct {
	AllowedOrigins   []string `yaml:"allowed_origins"`
	AllowedMethods   []string `yaml:"allowed_methods"`
	AllowedHeaders   []string `yaml:"allowed_headers"`
	AllowCredentials bool     `yaml:"allow_credentials"`
}

// RateLimitConfig holds rate limiting configuration
type RateLimitConfig struct {
	Enabled           bool    `yaml:"enabled"`
	RequestsPerSecond float64 `yaml:"requests_per_second"`
	BurstSize         int     `yaml:"burst_size"`
}

// SecurityHeadersConfig holds security headers configuration
type SecurityHeadersConfig struct {
	Enabled bool `yaml:"enabled"`
}

// Load reads configuration from a file and environment variables
func Load(path string) (*Config, error) {
	cfg := DefaultConfig()

	// Read from file if it exists
	if _, err := os.Stat(path); err == nil {
		data, err := os.ReadFile(path)
		if err != nil {
			return nil, fmt.Errorf("failed to read config file: %w", err)
		}

		if err := yaml.Unmarshal(data, cfg); err != nil {
			return nil, fmt.Errorf("failed to parse config file: %w", err)
		}
	}

	// Override with environment variables
	overrideFromEnv(cfg)

	return cfg, nil
}

// DefaultConfig returns a configuration with sensible defaults
func DefaultConfig() *Config {
	return &Config{
		Server: ServerConfig{
			Port:         8080,
			Host:         "0.0.0.0",
			ReadTimeout:  15 * time.Second,
			WriteTimeout: 15 * time.Second,
			IdleTimeout:  60 * time.Second,
		},
		Database: DatabaseConfig{
			Host:            getEnv("CALYPSO_DB_HOST", "localhost"),
			Port:            getEnvInt("CALYPSO_DB_PORT", 5432),
			User:            getEnv("CALYPSO_DB_USER", "calypso"),
			Password:        getEnv("CALYPSO_DB_PASSWORD", ""),
			Database:        getEnv("CALYPSO_DB_NAME", "calypso"),
			SSLMode:         getEnv("CALYPSO_DB_SSLMODE", "disable"),
			MaxConnections:  25,
			MaxIdleConns:    5,
			ConnMaxLifetime: 5 * time.Minute,
		},
		Auth: AuthConfig{
			JWTSecret:     getEnv("CALYPSO_JWT_SECRET", "change-me-in-production"),
			TokenLifetime: 24 * time.Hour,
			Argon2Params: Argon2Params{
				Memory:      64 * 1024, // 64 MB
				Iterations:  3,
				Parallelism: 4,
				SaltLength:  16,
				KeyLength:   32,
			},
		},
		Logging: LoggingConfig{
			Level:  getEnv("CALYPSO_LOG_LEVEL", "info"),
			Format: getEnv("CALYPSO_LOG_FORMAT", "json"),
		},
		Security: SecurityConfig{
			CORS: CORSConfig{
				AllowedOrigins:   []string{"*"}, // default: allow all (should be restricted in production)
				AllowedMethods:   []string{"GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"},
				AllowedHeaders:   []string{"Content-Type", "Authorization", "Accept", "Origin"},
				AllowCredentials: true,
			},
			RateLimit: RateLimitConfig{
				Enabled:           true,
				RequestsPerSecond: 100.0,
				BurstSize:         50,
			},
			SecurityHeaders: SecurityHeadersConfig{
				Enabled: true,
			},
		},
	}
}

// overrideFromEnv applies environment variable overrides
func overrideFromEnv(cfg *Config) {
	if v := os.Getenv("CALYPSO_SERVER_PORT"); v != "" {
		cfg.Server.Port = getEnvInt("CALYPSO_SERVER_PORT", cfg.Server.Port)
	}
	if v := os.Getenv("CALYPSO_DB_HOST"); v != "" {
		cfg.Database.Host = v
	}
	if v := os.Getenv("CALYPSO_DB_PASSWORD"); v != "" {
		cfg.Database.Password = v
	}
	if v := os.Getenv("CALYPSO_JWT_SECRET"); v != "" {
		cfg.Auth.JWTSecret = v
	}
}

// Helper functions

func getEnv(key, defaultValue string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return defaultValue
}

func getEnvInt(key string, defaultValue int) int {
	if v := os.Getenv(key); v != "" {
		var result int
		if _, err := fmt.Sscanf(v, "%d", &result); err == nil {
			return result
		}
	}
	return defaultValue
}
```
backend/internal/common/database/database.go (57 lines, new file)
@@ -0,0 +1,57 @@
```go
package database

import (
	"context"
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq"

	"github.com/atlasos/calypso/internal/common/config"
)

// DB wraps sql.DB with additional methods
type DB struct {
	*sql.DB
}

// NewConnection creates a new database connection
func NewConnection(cfg config.DatabaseConfig) (*DB, error) {
	dsn := fmt.Sprintf(
		"host=%s port=%d user=%s password=%s dbname=%s sslmode=%s",
		cfg.Host, cfg.Port, cfg.User, cfg.Password, cfg.Database, cfg.SSLMode,
	)

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, fmt.Errorf("failed to open database connection: %w", err)
	}

	// Configure the connection pool
	db.SetMaxOpenConns(cfg.MaxConnections)
	db.SetMaxIdleConns(cfg.MaxIdleConns)
	db.SetConnMaxLifetime(cfg.ConnMaxLifetime)

	// Test the connection
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if err := db.PingContext(ctx); err != nil {
		return nil, fmt.Errorf("failed to ping database: %w", err)
	}

	return &DB{db}, nil
}

// Close closes the database connection
func (db *DB) Close() error {
	return db.DB.Close()
}

// Ping checks the database connection
func (db *DB) Ping() error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return db.PingContext(ctx)
}
```
backend/internal/common/database/migrations.go (167 lines, new file)
@@ -0,0 +1,167 @@
```go
package database

import (
	"context"
	"embed"
	"fmt"
	"io/fs"
	"sort"
	"strconv"
	"strings"

	"github.com/atlasos/calypso/internal/common/logger"
)

//go:embed migrations/*.sql
var migrationsFS embed.FS

// RunMigrations executes all pending database migrations
func RunMigrations(ctx context.Context, db *DB) error {
	log := logger.NewLogger("migrations")

	// Create the migrations table if it doesn't exist
	if err := createMigrationsTable(ctx, db); err != nil {
		return fmt.Errorf("failed to create migrations table: %w", err)
	}

	// Get all migration files
	migrations, err := getMigrationFiles()
	if err != nil {
		return fmt.Errorf("failed to read migration files: %w", err)
	}

	// Get applied migrations
	applied, err := getAppliedMigrations(ctx, db)
	if err != nil {
		return fmt.Errorf("failed to get applied migrations: %w", err)
	}

	// Apply pending migrations
	for _, migration := range migrations {
		if applied[migration.Version] {
			log.Debug("Migration already applied", "version", migration.Version)
			continue
		}

		log.Info("Applying migration", "version", migration.Version, "name", migration.Name)

		// Read the migration SQL
		sql, err := migrationsFS.ReadFile(fmt.Sprintf("migrations/%s", migration.Filename))
		if err != nil {
			return fmt.Errorf("failed to read migration file %s: %w", migration.Filename, err)
		}

		// Execute the migration in a transaction
		tx, err := db.BeginTx(ctx, nil)
		if err != nil {
			return fmt.Errorf("failed to begin transaction: %w", err)
		}

		if _, err := tx.ExecContext(ctx, string(sql)); err != nil {
			tx.Rollback()
			return fmt.Errorf("failed to execute migration %d: %w", migration.Version, err)
		}

		// Record the migration
		if _, err := tx.ExecContext(ctx,
			"INSERT INTO schema_migrations (version, applied_at) VALUES ($1, NOW())",
			migration.Version,
		); err != nil {
			tx.Rollback()
			return fmt.Errorf("failed to record migration %d: %w", migration.Version, err)
		}

		if err := tx.Commit(); err != nil {
			return fmt.Errorf("failed to commit migration %d: %w", migration.Version, err)
		}

		log.Info("Migration applied successfully", "version", migration.Version)
	}

	return nil
}

// Migration represents a database migration
type Migration struct {
	Version  int
	Name     string
	Filename string
}

// getMigrationFiles returns all migration files sorted by version
func getMigrationFiles() ([]Migration, error) {
	entries, err := fs.ReadDir(migrationsFS, "migrations")
	if err != nil {
		return nil, err
	}

	var migrations []Migration
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}

		filename := entry.Name()
		if !strings.HasSuffix(filename, ".sql") {
			continue
		}

		// Parse the version from the filename: 001_initial_schema.sql
		parts := strings.SplitN(filename, "_", 2)
		if len(parts) < 2 {
			continue
		}

		version, err := strconv.Atoi(parts[0])
		if err != nil {
			continue
		}

		name := strings.TrimSuffix(parts[1], ".sql")
		migrations = append(migrations, Migration{
			Version:  version,
			Name:     name,
			Filename: filename,
		})
	}

	// Sort by version
	sort.Slice(migrations, func(i, j int) bool {
		return migrations[i].Version < migrations[j].Version
	})

	return migrations, nil
}

// createMigrationsTable creates the schema_migrations table
func createMigrationsTable(ctx context.Context, db *DB) error {
	query := `
		CREATE TABLE IF NOT EXISTS schema_migrations (
			version INTEGER PRIMARY KEY,
			applied_at TIMESTAMP NOT NULL DEFAULT NOW()
		)
	`
	_, err := db.ExecContext(ctx, query)
	return err
}

// getAppliedMigrations returns a map of applied migration versions
func getAppliedMigrations(ctx context.Context, db *DB) (map[int]bool, error) {
	rows, err := db.QueryContext(ctx, "SELECT version FROM schema_migrations ORDER BY version")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	applied := make(map[int]bool)
	for rows.Next() {
		var version int
		if err := rows.Scan(&version); err != nil {
			return nil, err
		}
		applied[version] = true
	}

	return applied, rows.Err()
}
```
@@ -0,0 +1,213 @@
```sql
-- AtlasOS - Calypso
-- Initial Database Schema
-- Version: 1.0

-- Users table
CREATE TABLE IF NOT EXISTS users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    username VARCHAR(255) NOT NULL UNIQUE,
    email VARCHAR(255) NOT NULL UNIQUE,
    password_hash VARCHAR(255) NOT NULL,
    full_name VARCHAR(255),
    is_active BOOLEAN NOT NULL DEFAULT true,
    is_system BOOLEAN NOT NULL DEFAULT false,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    last_login_at TIMESTAMP
);

-- Roles table
CREATE TABLE IF NOT EXISTS roles (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(100) NOT NULL UNIQUE,
    description TEXT,
    is_system BOOLEAN NOT NULL DEFAULT false,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Permissions table
CREATE TABLE IF NOT EXISTS permissions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    resource VARCHAR(100) NOT NULL,
    action VARCHAR(100) NOT NULL,
    description TEXT,
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- User roles junction table
CREATE TABLE IF NOT EXISTS user_roles (
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    role_id UUID NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
    assigned_at TIMESTAMP NOT NULL DEFAULT NOW(),
    assigned_by UUID REFERENCES users(id),
    PRIMARY KEY (user_id, role_id)
);

-- Role permissions junction table
CREATE TABLE IF NOT EXISTS role_permissions (
    role_id UUID NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
    permission_id UUID NOT NULL REFERENCES permissions(id) ON DELETE CASCADE,
    granted_at TIMESTAMP NOT NULL DEFAULT NOW(),
    PRIMARY KEY (role_id, permission_id)
);

-- Sessions table
CREATE TABLE IF NOT EXISTS sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    token_hash VARCHAR(255) NOT NULL UNIQUE,
    ip_address INET,
    user_agent TEXT,
    expires_at TIMESTAMP NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    last_activity_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Audit log table
CREATE TABLE IF NOT EXISTS audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id),
    username VARCHAR(255),
    action VARCHAR(100) NOT NULL,
    resource_type VARCHAR(100) NOT NULL,
    resource_id VARCHAR(255),
    method VARCHAR(10),
    path TEXT,
    ip_address INET,
    user_agent TEXT,
    request_body JSONB,
    response_status INTEGER,
    error_message TEXT,
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Tasks table (for async operations)
CREATE TABLE IF NOT EXISTS tasks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    type VARCHAR(100) NOT NULL,
    status VARCHAR(50) NOT NULL DEFAULT 'pending',
    progress INTEGER NOT NULL DEFAULT 0,
    message TEXT,
    error_message TEXT,
    created_by UUID REFERENCES users(id),
    started_at TIMESTAMP,
    completed_at TIMESTAMP,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    metadata JSONB
);

-- Alerts table
CREATE TABLE IF NOT EXISTS alerts (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    severity VARCHAR(20) NOT NULL,
    source VARCHAR(100) NOT NULL,
    title VARCHAR(255) NOT NULL,
    message TEXT NOT NULL,
    resource_type VARCHAR(100),
    resource_id VARCHAR(255),
    is_acknowledged BOOLEAN NOT NULL DEFAULT false,
    acknowledged_by UUID REFERENCES users(id),
    acknowledged_at TIMESTAMP,
    resolved_at TIMESTAMP,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    metadata JSONB
);

-- System configuration table
CREATE TABLE IF NOT EXISTS system_config (
    key VARCHAR(255) PRIMARY KEY,
    value TEXT NOT NULL,
    description TEXT,
    is_encrypted BOOLEAN NOT NULL DEFAULT false,
    updated_by UUID REFERENCES users(id),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
CREATE INDEX IF NOT EXISTS idx_users_active ON users(is_active);
CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
CREATE INDEX IF NOT EXISTS idx_sessions_token_hash ON sessions(token_hash);
CREATE INDEX IF NOT EXISTS idx_sessions_expires_at ON sessions(expires_at);
CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id);
CREATE INDEX IF NOT EXISTS idx_audit_log_created_at ON audit_log(created_at);
CREATE INDEX IF NOT EXISTS idx_audit_log_resource ON audit_log(resource_type, resource_id);
CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status);
CREATE INDEX IF NOT EXISTS idx_tasks_type ON tasks(type);
CREATE INDEX IF NOT EXISTS idx_tasks_created_by ON tasks(created_by);
CREATE INDEX IF NOT EXISTS idx_alerts_severity ON alerts(severity);
CREATE INDEX IF NOT EXISTS idx_alerts_acknowledged ON alerts(is_acknowledged);
CREATE INDEX IF NOT EXISTS idx_alerts_created_at ON alerts(created_at);

-- Insert default system roles
INSERT INTO roles (name, description, is_system) VALUES
    ('admin', 'Full system access and configuration', true),
    ('operator', 'Day-to-day operations and monitoring', true),
    ('readonly', 'Read-only access for monitoring and reporting', true)
ON CONFLICT (name) DO NOTHING;

-- Insert default permissions
INSERT INTO permissions (name, resource, action, description) VALUES
    -- System permissions
    ('system:read', 'system', 'read', 'View system information'),
    ('system:write', 'system', 'write', 'Modify system configuration'),
    ('system:manage', 'system', 'manage', 'Full system management'),

    -- Storage permissions
    ('storage:read', 'storage', 'read', 'View storage information'),
    ('storage:write', 'storage', 'write', 'Modify storage configuration'),
    ('storage:manage', 'storage', 'manage', 'Full storage management'),

    -- Tape permissions
    ('tape:read', 'tape', 'read', 'View tape library information'),
    ('tape:write', 'tape', 'write', 'Perform tape operations'),
    ('tape:manage', 'tape', 'manage', 'Full tape management'),

    -- iSCSI permissions
    ('iscsi:read', 'iscsi', 'read', 'View iSCSI configuration'),
    ('iscsi:write', 'iscsi', 'write', 'Modify iSCSI configuration'),
    ('iscsi:manage', 'iscsi', 'manage', 'Full iSCSI management'),

    -- IAM permissions
    ('iam:read', 'iam', 'read', 'View users and roles'),
    ('iam:write', 'iam', 'write', 'Modify users and roles'),
    ('iam:manage', 'iam', 'manage', 'Full IAM management'),

    -- Audit permissions
    ('audit:read', 'audit', 'read', 'View audit logs'),

    -- Monitoring permissions
    ('monitoring:read', 'monitoring', 'read', 'View monitoring data'),
    ('monitoring:write', 'monitoring', 'write', 'Acknowledge alerts')
ON CONFLICT (name) DO NOTHING;

-- Assign permissions to roles
-- Admin gets all permissions
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'admin'
ON CONFLICT DO NOTHING;

-- Operator gets read and write (but not manage) for most resources
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'operator'
  AND p.action IN ('read', 'write')
  AND p.resource IN ('storage', 'tape', 'iscsi', 'monitoring')
ON CONFLICT DO NOTHING;

-- ReadOnly gets only read permissions
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'readonly'
  AND p.action = 'read'
ON CONFLICT DO NOTHING;
```
@@ -0,0 +1,227 @@
```sql
-- AtlasOS - Calypso
-- Storage and Tape Component Schema
-- Version: 2.0

-- ZFS pools table
CREATE TABLE IF NOT EXISTS zfs_pools (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    description TEXT,
    raid_level VARCHAR(50) NOT NULL, -- stripe, mirror, raidz, raidz2, raidz3
    disks TEXT[] NOT NULL, -- array of device paths
    size_bytes BIGINT NOT NULL,
    used_bytes BIGINT NOT NULL DEFAULT 0,
    compression VARCHAR(50) NOT NULL DEFAULT 'lz4', -- off, lz4, zstd, gzip
    deduplication BOOLEAN NOT NULL DEFAULT false,
    auto_expand BOOLEAN NOT NULL DEFAULT false,
    scrub_interval INTEGER NOT NULL DEFAULT 30, -- days
    is_active BOOLEAN NOT NULL DEFAULT true,
    health_status VARCHAR(50) NOT NULL DEFAULT 'online', -- online, degraded, faulted, offline
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id)
);

-- Disk repositories table
CREATE TABLE IF NOT EXISTS disk_repositories (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    description TEXT,
    volume_group VARCHAR(255) NOT NULL,
    logical_volume VARCHAR(255) NOT NULL,
    size_bytes BIGINT NOT NULL,
    used_bytes BIGINT NOT NULL DEFAULT 0,
    filesystem_type VARCHAR(50),
    mount_point TEXT,
    is_active BOOLEAN NOT NULL DEFAULT true,
    warning_threshold_percent INTEGER NOT NULL DEFAULT 80,
    critical_threshold_percent INTEGER NOT NULL DEFAULT 90,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id)
);

-- Physical disks table
CREATE TABLE IF NOT EXISTS physical_disks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    device_path VARCHAR(255) NOT NULL UNIQUE,
    vendor VARCHAR(255),
    model VARCHAR(255),
    serial_number VARCHAR(255),
    size_bytes BIGINT NOT NULL,
    sector_size INTEGER,
    is_ssd BOOLEAN NOT NULL DEFAULT false,
    health_status VARCHAR(50) NOT NULL DEFAULT 'unknown',
    health_details JSONB,
    is_used BOOLEAN NOT NULL DEFAULT false,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Volume groups table
CREATE TABLE IF NOT EXISTS volume_groups (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    size_bytes BIGINT NOT NULL,
    free_bytes BIGINT NOT NULL,
    physical_volumes TEXT[],
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- SCST iSCSI targets table
CREATE TABLE IF NOT EXISTS scst_targets (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    iqn VARCHAR(512) NOT NULL UNIQUE,
    target_type VARCHAR(50) NOT NULL, -- 'disk', 'vtl', 'physical_tape'
    name VARCHAR(255) NOT NULL,
    description TEXT,
    is_active BOOLEAN NOT NULL DEFAULT true,
    single_initiator_only BOOLEAN NOT NULL DEFAULT false,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id)
);

-- SCST LUN mappings table
CREATE TABLE IF NOT EXISTS scst_luns (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    target_id UUID NOT NULL REFERENCES scst_targets(id) ON DELETE CASCADE,
    lun_number INTEGER NOT NULL,
    device_name VARCHAR(255) NOT NULL,
    device_path VARCHAR(512) NOT NULL,
    handler_type VARCHAR(50) NOT NULL, -- 'vdisk', 'sg', 'tape'
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(target_id, lun_number)
);

-- SCST initiator groups table
CREATE TABLE IF NOT EXISTS scst_initiator_groups (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    target_id UUID NOT NULL REFERENCES scst_targets(id) ON DELETE CASCADE,
    group_name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(target_id, group_name)
);

-- SCST initiators table
CREATE TABLE IF NOT EXISTS scst_initiators (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    group_id UUID NOT NULL REFERENCES scst_initiator_groups(id) ON DELETE CASCADE,
    iqn VARCHAR(512) NOT NULL,
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(group_id, iqn)
);

-- Physical tape libraries table
CREATE TABLE IF NOT EXISTS physical_tape_libraries (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    serial_number VARCHAR(255),
    vendor VARCHAR(255),
    model VARCHAR(255),
    changer_device_path VARCHAR(512),
    changer_stable_path VARCHAR(512),
    slot_count INTEGER,
    drive_count INTEGER,
    is_active BOOLEAN NOT NULL DEFAULT true,
    discovered_at TIMESTAMP NOT NULL DEFAULT NOW(),
    last_inventory_at TIMESTAMP,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Physical tape drives table
CREATE TABLE IF NOT EXISTS physical_tape_drives (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    library_id UUID NOT NULL REFERENCES physical_tape_libraries(id) ON DELETE CASCADE,
    drive_number INTEGER NOT NULL,
    device_path VARCHAR(512),
    stable_path VARCHAR(512),
    vendor VARCHAR(255),
    model VARCHAR(255),
    serial_number VARCHAR(255),
    drive_type VARCHAR(50), -- 'LTO-8', 'LTO-9', etc.
    status VARCHAR(50) NOT NULL DEFAULT 'unknown', -- 'idle', 'loading', 'ready', 'error'
    current_tape_barcode VARCHAR(255),
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(library_id, drive_number)
);

-- Physical tape slots table
CREATE TABLE IF NOT EXISTS physical_tape_slots (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    library_id UUID NOT NULL REFERENCES physical_tape_libraries(id) ON DELETE CASCADE,
    slot_number INTEGER NOT NULL,
    barcode VARCHAR(255),
    tape_present BOOLEAN NOT NULL DEFAULT false,
    tape_type VARCHAR(50),
    last_updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(library_id, slot_number)
);

-- Virtual tape libraries table
CREATE TABLE IF NOT EXISTS virtual_tape_libraries (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    description TEXT,
    mhvtl_library_id INTEGER,
    backing_store_path TEXT NOT NULL,
    slot_count INTEGER NOT NULL DEFAULT 10,
    drive_count INTEGER NOT NULL DEFAULT 2,
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id)
);

-- Virtual tape drives table
CREATE TABLE IF NOT EXISTS virtual_tape_drives (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    library_id UUID NOT NULL REFERENCES virtual_tape_libraries(id) ON DELETE CASCADE,
    drive_number INTEGER NOT NULL,
    device_path VARCHAR(512),
    stable_path VARCHAR(512),
    status VARCHAR(50) NOT NULL DEFAULT 'idle',
    current_tape_id UUID,
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(library_id, drive_number)
);

-- Virtual tapes table
CREATE TABLE IF NOT EXISTS virtual_tapes (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    library_id UUID NOT NULL REFERENCES virtual_tape_libraries(id) ON DELETE CASCADE,
    barcode VARCHAR(255) NOT NULL,
    slot_number INTEGER,
    image_file_path TEXT NOT NULL,
    size_bytes BIGINT NOT NULL DEFAULT 0,
    used_bytes BIGINT NOT NULL DEFAULT 0,
    tape_type VARCHAR(50) NOT NULL DEFAULT 'LTO-8',
    status VARCHAR(50) NOT NULL DEFAULT 'idle', -- 'idle', 'in_drive', 'exported'
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    UNIQUE(library_id, barcode)
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_disk_repositories_name ON disk_repositories(name);
CREATE INDEX IF NOT EXISTS idx_disk_repositories_active ON disk_repositories(is_active);
CREATE INDEX IF NOT EXISTS idx_physical_disks_device_path ON physical_disks(device_path);
CREATE INDEX IF NOT EXISTS idx_scst_targets_iqn ON scst_targets(iqn);
CREATE INDEX IF NOT EXISTS idx_scst_targets_type ON scst_targets(target_type);
CREATE INDEX IF NOT EXISTS idx_scst_luns_target_id ON scst_luns(target_id);
CREATE INDEX IF NOT EXISTS idx_scst_initiators_group_id ON scst_initiators(group_id);
CREATE INDEX IF NOT EXISTS idx_physical_tape_libraries_name ON physical_tape_libraries(name);
CREATE INDEX IF NOT EXISTS idx_physical_tape_drives_library_id ON physical_tape_drives(library_id);
CREATE INDEX IF NOT EXISTS idx_physical_tape_slots_library_id ON physical_tape_slots(library_id);
CREATE INDEX IF NOT EXISTS idx_virtual_tape_libraries_name ON virtual_tape_libraries(name);
CREATE INDEX IF NOT EXISTS idx_virtual_tape_drives_library_id ON virtual_tape_drives(library_id);
CREATE INDEX IF NOT EXISTS idx_virtual_tapes_library_id ON virtual_tapes(library_id);
CREATE INDEX IF NOT EXISTS idx_virtual_tapes_barcode ON virtual_tapes(barcode);
```
@@ -0,0 +1,174 @@
-- AtlasOS - Calypso
-- Performance Optimization: Database Indexes
-- Version: 3.0
-- Description: Adds indexes for frequently queried columns to improve query performance

-- ============================================================================
-- Authentication & Authorization Indexes
-- ============================================================================

-- Users table indexes
-- Username is frequently queried during login
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
-- Email lookups
CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
-- Active user lookups
CREATE INDEX IF NOT EXISTS idx_users_is_active ON users(is_active) WHERE is_active = true;

-- Sessions table indexes
-- Token hash lookups are very frequent (every authenticated request)
CREATE INDEX IF NOT EXISTS idx_sessions_token_hash ON sessions(token_hash);
-- User session lookups
CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
-- Expired session cleanup (index on expires_at for efficient cleanup queries)
CREATE INDEX IF NOT EXISTS idx_sessions_expires_at ON sessions(expires_at);

-- User roles junction table
-- Lookup roles for a user (frequent during permission checks)
CREATE INDEX IF NOT EXISTS idx_user_roles_user_id ON user_roles(user_id);
-- Lookup users with a role
CREATE INDEX IF NOT EXISTS idx_user_roles_role_id ON user_roles(role_id);

-- Role permissions junction table
-- Lookup permissions for a role
CREATE INDEX IF NOT EXISTS idx_role_permissions_role_id ON role_permissions(role_id);
-- Lookup roles with a permission
CREATE INDEX IF NOT EXISTS idx_role_permissions_permission_id ON role_permissions(permission_id);

-- ============================================================================
-- Audit & Monitoring Indexes
-- ============================================================================

-- Audit log indexes
-- Time-based queries (most common audit log access pattern)
CREATE INDEX IF NOT EXISTS idx_audit_log_created_at ON audit_log(created_at DESC);
-- User activity queries
CREATE INDEX IF NOT EXISTS idx_audit_log_user_id ON audit_log(user_id);
-- Resource-based queries
CREATE INDEX IF NOT EXISTS idx_audit_log_resource ON audit_log(resource_type, resource_id);

-- Alerts table indexes
-- Time-based ordering (default ordering in ListAlerts)
CREATE INDEX IF NOT EXISTS idx_alerts_created_at ON alerts(created_at DESC);
-- Severity filtering
CREATE INDEX IF NOT EXISTS idx_alerts_severity ON alerts(severity);
-- Source filtering
CREATE INDEX IF NOT EXISTS idx_alerts_source ON alerts(source);
-- Acknowledgment status
CREATE INDEX IF NOT EXISTS idx_alerts_acknowledged ON alerts(is_acknowledged) WHERE is_acknowledged = false;
-- Resource-based queries
CREATE INDEX IF NOT EXISTS idx_alerts_resource ON alerts(resource_type, resource_id);
-- Composite index for common filter combinations
CREATE INDEX IF NOT EXISTS idx_alerts_severity_acknowledged ON alerts(severity, is_acknowledged, created_at DESC);

-- ============================================================================
-- Task Management Indexes
-- ============================================================================

-- Tasks table indexes
-- Status filtering (very common in task queries)
CREATE INDEX IF NOT EXISTS idx_tasks_status ON tasks(status);
-- Created by user
CREATE INDEX IF NOT EXISTS idx_tasks_created_by ON tasks(created_by);
-- Time-based queries
CREATE INDEX IF NOT EXISTS idx_tasks_created_at ON tasks(created_at DESC);
-- Status + time composite (common query pattern)
CREATE INDEX IF NOT EXISTS idx_tasks_status_created_at ON tasks(status, created_at DESC);
-- Failed tasks lookup (for alerting)
CREATE INDEX IF NOT EXISTS idx_tasks_failed_recent ON tasks(status, created_at DESC) WHERE status = 'failed';

-- ============================================================================
-- Storage Indexes
-- ============================================================================

-- Disk repositories indexes
-- Active repository lookups
CREATE INDEX IF NOT EXISTS idx_disk_repositories_is_active ON disk_repositories(is_active) WHERE is_active = true;
-- Name lookups
CREATE INDEX IF NOT EXISTS idx_disk_repositories_name ON disk_repositories(name);
-- Volume group lookups
CREATE INDEX IF NOT EXISTS idx_disk_repositories_vg ON disk_repositories(volume_group);

-- Physical disks indexes
-- Device path lookups (for sync operations)
CREATE INDEX IF NOT EXISTS idx_physical_disks_device_path ON physical_disks(device_path);

-- ============================================================================
-- SCST Indexes
-- ============================================================================

-- SCST targets indexes
-- IQN lookups (frequent during target operations)
CREATE INDEX IF NOT EXISTS idx_scst_targets_iqn ON scst_targets(iqn);
-- Active target lookups
CREATE INDEX IF NOT EXISTS idx_scst_targets_is_active ON scst_targets(is_active) WHERE is_active = true;

-- SCST LUNs indexes
-- Target + LUN lookups (very frequent)
CREATE INDEX IF NOT EXISTS idx_scst_luns_target_lun ON scst_luns(target_id, lun_number);
-- Device path lookups
CREATE INDEX IF NOT EXISTS idx_scst_luns_device_path ON scst_luns(device_path);

-- SCST initiator groups indexes
-- Target + group name lookups
CREATE INDEX IF NOT EXISTS idx_scst_initiator_groups_target ON scst_initiator_groups(target_id);
-- Group name lookups
CREATE INDEX IF NOT EXISTS idx_scst_initiator_groups_name ON scst_initiator_groups(group_name);

-- SCST initiators indexes
-- Group + IQN lookups
CREATE INDEX IF NOT EXISTS idx_scst_initiators_group_iqn ON scst_initiators(group_id, iqn);
-- Active initiator lookups
CREATE INDEX IF NOT EXISTS idx_scst_initiators_is_active ON scst_initiators(is_active) WHERE is_active = true;

-- ============================================================================
-- Tape Library Indexes
-- ============================================================================

-- Physical tape libraries indexes
-- Serial number lookups (for discovery)
CREATE INDEX IF NOT EXISTS idx_physical_tape_libraries_serial ON physical_tape_libraries(serial_number);
-- Active library lookups
CREATE INDEX IF NOT EXISTS idx_physical_tape_libraries_is_active ON physical_tape_libraries(is_active) WHERE is_active = true;

-- Physical tape drives indexes
-- Library + drive number lookups (very frequent)
CREATE INDEX IF NOT EXISTS idx_physical_tape_drives_library_drive ON physical_tape_drives(library_id, drive_number);
-- Status filtering
CREATE INDEX IF NOT EXISTS idx_physical_tape_drives_status ON physical_tape_drives(status);

-- Physical tape slots indexes
-- Library + slot number lookups
CREATE INDEX IF NOT EXISTS idx_physical_tape_slots_library_slot ON physical_tape_slots(library_id, slot_number);
-- Barcode lookups
CREATE INDEX IF NOT EXISTS idx_physical_tape_slots_barcode ON physical_tape_slots(barcode) WHERE barcode IS NOT NULL;

-- Virtual tape libraries indexes
-- MHVTL library ID lookups
CREATE INDEX IF NOT EXISTS idx_virtual_tape_libraries_mhvtl_id ON virtual_tape_libraries(mhvtl_library_id);
-- Active library lookups
CREATE INDEX IF NOT EXISTS idx_virtual_tape_libraries_is_active ON virtual_tape_libraries(is_active) WHERE is_active = true;

-- Virtual tape drives indexes
-- Library + drive number lookups (very frequent)
CREATE INDEX IF NOT EXISTS idx_virtual_tape_drives_library_drive ON virtual_tape_drives(library_id, drive_number);
-- Status filtering
CREATE INDEX IF NOT EXISTS idx_virtual_tape_drives_status ON virtual_tape_drives(status);
-- Current tape lookups
CREATE INDEX IF NOT EXISTS idx_virtual_tape_drives_current_tape ON virtual_tape_drives(current_tape_id) WHERE current_tape_id IS NOT NULL;

-- Virtual tapes indexes
-- Library + slot number lookups
CREATE INDEX IF NOT EXISTS idx_virtual_tapes_library_slot ON virtual_tapes(library_id, slot_number);
-- Barcode lookups
CREATE INDEX IF NOT EXISTS idx_virtual_tapes_barcode ON virtual_tapes(barcode) WHERE barcode IS NOT NULL;
-- Status filtering
CREATE INDEX IF NOT EXISTS idx_virtual_tapes_status ON virtual_tapes(status);

-- ============================================================================
-- Statistics Update
-- ============================================================================

-- Update table statistics for query planner
ANALYZE;
@@ -0,0 +1,31 @@
-- AtlasOS - Calypso
-- Add ZFS Pools Table
-- Version: 4.0
-- Note: This migration adds the zfs_pools table that was added to migration 002
-- but may not have been applied if migration 002 was run before the table was added

-- ZFS pools table
CREATE TABLE IF NOT EXISTS zfs_pools (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    description TEXT,
    raid_level VARCHAR(50) NOT NULL, -- stripe, mirror, raidz, raidz2, raidz3
    disks TEXT[] NOT NULL, -- array of device paths
    size_bytes BIGINT NOT NULL,
    used_bytes BIGINT NOT NULL DEFAULT 0,
    compression VARCHAR(50) NOT NULL DEFAULT 'lz4', -- off, lz4, zstd, gzip
    deduplication BOOLEAN NOT NULL DEFAULT false,
    auto_expand BOOLEAN NOT NULL DEFAULT false,
    scrub_interval INTEGER NOT NULL DEFAULT 30, -- days
    is_active BOOLEAN NOT NULL DEFAULT true,
    health_status VARCHAR(50) NOT NULL DEFAULT 'online', -- online, degraded, faulted, offline
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id)
);

-- Create indexes for faster lookups
CREATE INDEX IF NOT EXISTS idx_zfs_pools_name ON zfs_pools(name);
CREATE INDEX IF NOT EXISTS idx_zfs_pools_created_by ON zfs_pools(created_by);
CREATE INDEX IF NOT EXISTS idx_zfs_pools_health_status ON zfs_pools(health_status);
@@ -0,0 +1,35 @@
-- AtlasOS - Calypso
-- Add ZFS Datasets Table
-- Version: 5.0
-- Description: Stores ZFS dataset metadata in database for faster queries and consistency

-- ZFS datasets table
CREATE TABLE IF NOT EXISTS zfs_datasets (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(512) NOT NULL UNIQUE, -- Full dataset name (e.g., pool/dataset)
    pool_id UUID NOT NULL REFERENCES zfs_pools(id) ON DELETE CASCADE,
    pool_name VARCHAR(255) NOT NULL, -- Denormalized for faster queries
    type VARCHAR(50) NOT NULL, -- filesystem, volume, snapshot
    mount_point TEXT, -- Mount point path (null for volumes)
    used_bytes BIGINT NOT NULL DEFAULT 0,
    available_bytes BIGINT NOT NULL DEFAULT 0,
    referenced_bytes BIGINT NOT NULL DEFAULT 0,
    compression VARCHAR(50) NOT NULL DEFAULT 'lz4', -- off, lz4, zstd, gzip
    deduplication VARCHAR(50) NOT NULL DEFAULT 'off', -- off, on, verify
    quota BIGINT DEFAULT -1, -- -1 for unlimited, bytes otherwise
    reservation BIGINT NOT NULL DEFAULT 0, -- Reserved space in bytes
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id)
);

-- Create indexes for faster lookups
CREATE INDEX IF NOT EXISTS idx_zfs_datasets_pool_id ON zfs_datasets(pool_id);
CREATE INDEX IF NOT EXISTS idx_zfs_datasets_pool_name ON zfs_datasets(pool_name);
CREATE INDEX IF NOT EXISTS idx_zfs_datasets_name ON zfs_datasets(name);
CREATE INDEX IF NOT EXISTS idx_zfs_datasets_type ON zfs_datasets(type);
CREATE INDEX IF NOT EXISTS idx_zfs_datasets_created_by ON zfs_datasets(created_by);

-- Composite index for common queries (list datasets by pool)
CREATE INDEX IF NOT EXISTS idx_zfs_datasets_pool_type ON zfs_datasets(pool_id, type);
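The `quota` column above uses `-1` as a sentinel for "unlimited", so any display layer has to special-case it before formatting bytes. A minimal sketch of such a helper (a hypothetical formatter, not taken from the Calypso backend):

```python
def format_quota(quota_bytes: int) -> str:
    """Render the zfs_datasets.quota convention: -1 means unlimited, otherwise bytes."""
    if quota_bytes == -1:
        return "unlimited"
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(quota_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= 1024

print(format_quota(-1))           # unlimited
print(format_quota(10737418240))  # 10.0 GiB
```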
@@ -0,0 +1,50 @@
-- AtlasOS - Calypso
-- Add ZFS Shares and iSCSI Export Tables
-- Version: 6.0
-- Description: Separate tables for filesystem shares (NFS/SMB) and volume iSCSI exports

-- ZFS Filesystem Shares Table (for NFS/SMB)
CREATE TABLE IF NOT EXISTS zfs_shares (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    dataset_id UUID NOT NULL REFERENCES zfs_datasets(id) ON DELETE CASCADE,
    share_type VARCHAR(50) NOT NULL, -- 'nfs', 'smb', 'both'
    nfs_enabled BOOLEAN NOT NULL DEFAULT false,
    nfs_options TEXT, -- e.g., "rw,sync,no_subtree_check"
    nfs_clients TEXT[], -- Allowed client IPs/networks
    smb_enabled BOOLEAN NOT NULL DEFAULT false,
    smb_share_name VARCHAR(255), -- SMB share name
    smb_path TEXT, -- SMB share path (usually same as mount_point)
    smb_comment TEXT,
    smb_guest_ok BOOLEAN NOT NULL DEFAULT false,
    smb_read_only BOOLEAN NOT NULL DEFAULT false,
    smb_browseable BOOLEAN NOT NULL DEFAULT true,
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id),
    UNIQUE(dataset_id) -- One share config per dataset
);

-- ZFS Volume iSCSI Exports Table
CREATE TABLE IF NOT EXISTS zfs_iscsi_exports (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    dataset_id UUID NOT NULL REFERENCES zfs_datasets(id) ON DELETE CASCADE,
    target_id UUID REFERENCES scst_targets(id) ON DELETE SET NULL, -- Link to SCST target
    lun_number INTEGER, -- LUN number in the target
    device_path TEXT, -- /dev/zvol/pool/volume path
    is_active BOOLEAN NOT NULL DEFAULT true,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW(),
    created_by UUID REFERENCES users(id),
    UNIQUE(dataset_id) -- One iSCSI export per volume
);

-- Create indexes
CREATE INDEX IF NOT EXISTS idx_zfs_shares_dataset_id ON zfs_shares(dataset_id);
CREATE INDEX IF NOT EXISTS idx_zfs_shares_type ON zfs_shares(share_type);
CREATE INDEX IF NOT EXISTS idx_zfs_shares_active ON zfs_shares(is_active);

CREATE INDEX IF NOT EXISTS idx_zfs_iscsi_exports_dataset_id ON zfs_iscsi_exports(dataset_id);
CREATE INDEX IF NOT EXISTS idx_zfs_iscsi_exports_target_id ON zfs_iscsi_exports(target_id);
CREATE INDEX IF NOT EXISTS idx_zfs_iscsi_exports_active ON zfs_iscsi_exports(is_active);
@@ -0,0 +1,3 @@
-- Add vendor column to virtual_tape_libraries table
ALTER TABLE virtual_tape_libraries ADD COLUMN IF NOT EXISTS vendor VARCHAR(255);
@@ -0,0 +1,45 @@
-- Add user groups feature

-- Groups table
CREATE TABLE IF NOT EXISTS groups (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL UNIQUE,
    description TEXT,
    is_system BOOLEAN NOT NULL DEFAULT false,
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- User groups junction table
CREATE TABLE IF NOT EXISTS user_groups (
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    group_id UUID NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
    assigned_at TIMESTAMP NOT NULL DEFAULT NOW(),
    assigned_by UUID REFERENCES users(id),
    PRIMARY KEY (user_id, group_id)
);

-- Group roles junction table (groups can have roles)
CREATE TABLE IF NOT EXISTS group_roles (
    group_id UUID NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
    role_id UUID NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
    granted_at TIMESTAMP NOT NULL DEFAULT NOW(),
    PRIMARY KEY (group_id, role_id)
);

-- Indexes
CREATE INDEX IF NOT EXISTS idx_groups_name ON groups(name);
CREATE INDEX IF NOT EXISTS idx_user_groups_user_id ON user_groups(user_id);
CREATE INDEX IF NOT EXISTS idx_user_groups_group_id ON user_groups(group_id);
CREATE INDEX IF NOT EXISTS idx_group_roles_group_id ON group_roles(group_id);
CREATE INDEX IF NOT EXISTS idx_group_roles_role_id ON group_roles(role_id);

-- Insert default system groups
INSERT INTO groups (name, description, is_system) VALUES
    ('wheel', 'System administrators group', true),
    ('operators', 'System operators group', true),
    ('backup', 'Backup operators group', true),
    ('auditors', 'Auditors group', true),
    ('storage_admins', 'Storage administrators group', true),
    ('services', 'Service accounts group', true)
ON CONFLICT (name) DO NOTHING;
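With `user_roles`, `user_groups`, and `group_roles` in place, a user's effective roles are the union of directly assigned roles and roles inherited through group membership. A minimal in-memory sketch of that resolution, using hypothetical data (the Calypso backend presumably does this with a SQL join instead):

```python
# Hypothetical in-memory mirrors of user_roles, user_groups, and group_roles.
user_roles = {"alice": {"operator"}}
user_groups = {"alice": {"wheel", "backup"}}
group_roles = {"wheel": {"admin"}, "backup": {"operator"}}

def effective_roles(user: str) -> set[str]:
    """Union of direct roles and roles granted via group membership."""
    roles = set(user_roles.get(user, set()))
    for group in user_groups.get(user, set()):
        roles |= group_roles.get(group, set())
    return roles

print(sorted(effective_roles("alice")))  # ['admin', 'operator']
```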
@@ -0,0 +1,34 @@
-- AtlasOS - Calypso
-- Backup Jobs Schema
-- Version: 9.0

-- Backup jobs table
CREATE TABLE IF NOT EXISTS backup_jobs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    job_id INTEGER NOT NULL UNIQUE, -- Bareos job ID
    job_name VARCHAR(255) NOT NULL,
    client_name VARCHAR(255) NOT NULL,
    job_type VARCHAR(50) NOT NULL, -- 'Backup', 'Restore', 'Verify', 'Copy', 'Migrate'
    job_level VARCHAR(50) NOT NULL, -- 'Full', 'Incremental', 'Differential', 'Since'
    status VARCHAR(50) NOT NULL, -- 'Running', 'Completed', 'Failed', 'Canceled', 'Waiting'
    bytes_written BIGINT NOT NULL DEFAULT 0,
    files_written INTEGER NOT NULL DEFAULT 0,
    duration_seconds INTEGER,
    started_at TIMESTAMP,
    ended_at TIMESTAMP,
    error_message TEXT,
    storage_name VARCHAR(255),
    pool_name VARCHAR(255),
    volume_name VARCHAR(255),
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_id ON backup_jobs(job_id);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_name ON backup_jobs(job_name);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_client_name ON backup_jobs(client_name);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_status ON backup_jobs(status);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_started_at ON backup_jobs(started_at DESC);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_type ON backup_jobs(job_type);
@@ -0,0 +1,39 @@
-- AtlasOS - Calypso
-- Add Backup Permissions
-- Version: 10.0

-- Insert backup permissions
INSERT INTO permissions (name, resource, action, description) VALUES
    ('backup:read', 'backup', 'read', 'View backup jobs and history'),
    ('backup:write', 'backup', 'write', 'Create and manage backup jobs'),
    ('backup:manage', 'backup', 'manage', 'Full backup management')
ON CONFLICT (name) DO NOTHING;

-- Assign backup permissions to roles

-- Admin gets all backup permissions (explicitly assign since admin query in 001 only runs once)
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'admin'
  AND p.resource = 'backup'
ON CONFLICT DO NOTHING;

-- Operator gets read and write permissions for backup
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'operator'
  AND p.resource = 'backup'
  AND p.action IN ('read', 'write')
ON CONFLICT DO NOTHING;

-- ReadOnly gets only read permission for backup
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'readonly'
  AND p.resource = 'backup'
  AND p.action = 'read'
ON CONFLICT DO NOTHING;
@@ -0,0 +1,209 @@
-- AtlasOS - Calypso
-- PostgreSQL Function to Sync Jobs from Bacula to Calypso
-- Version: 11.0
--
-- This function syncs jobs from the Bacula database (Job table) to the Calypso database (backup_jobs table).
-- It uses the dblink extension to query the Bacula database from the Calypso database.
--
-- Prerequisites:
-- 1. dblink extension must be installed: CREATE EXTENSION IF NOT EXISTS dblink;
-- 2. User must have access to both databases
-- 3. Connection parameters must be configured in the function

-- Create function to sync jobs from Bacula to Calypso
CREATE OR REPLACE FUNCTION sync_bacula_jobs(
    bacula_db_name TEXT DEFAULT 'bacula',
    bacula_host TEXT DEFAULT 'localhost',
    bacula_port INTEGER DEFAULT 5432,
    bacula_user TEXT DEFAULT 'calypso',
    bacula_password TEXT DEFAULT ''
)
RETURNS TABLE(
    jobs_synced INTEGER,
    jobs_inserted INTEGER,
    jobs_updated INTEGER,
    errors INTEGER
) AS $$
DECLARE
    conn_str TEXT;
    jobs_count INTEGER := 0;
    inserted_count INTEGER := 0;
    updated_count INTEGER := 0;
    error_count INTEGER := 0;
    job_record RECORD;
BEGIN
    -- Build dblink connection string
    conn_str := format(
        'dbname=%s host=%s port=%s user=%s password=%s',
        bacula_db_name,
        bacula_host,
        bacula_port,
        bacula_user,
        bacula_password
    );

    -- Query jobs from Bacula database using dblink
    FOR job_record IN
        SELECT * FROM dblink(
            conn_str,
            $QUERY$
            SELECT
                j.JobId,
                j.Name AS job_name,
                COALESCE(c.Name, 'unknown') AS client_name,
                CASE
                    WHEN j.Type = 'B' THEN 'Backup'
                    WHEN j.Type = 'R' THEN 'Restore'
                    WHEN j.Type = 'V' THEN 'Verify'
                    WHEN j.Type = 'C' THEN 'Copy'
                    WHEN j.Type = 'M' THEN 'Migrate'
                    ELSE 'Backup'
                END AS job_type,
                CASE
                    WHEN j.Level = 'F' THEN 'Full'
                    WHEN j.Level = 'I' THEN 'Incremental'
                    WHEN j.Level = 'D' THEN 'Differential'
                    WHEN j.Level = 'S' THEN 'Since'
                    ELSE 'Full'
                END AS job_level,
                CASE
                    WHEN j.JobStatus = 'T' THEN 'Running'
                    WHEN j.JobStatus = 'C' THEN 'Completed'
                    WHEN j.JobStatus = 'f' OR j.JobStatus = 'F' THEN 'Failed'
                    WHEN j.JobStatus = 'A' THEN 'Canceled'
                    WHEN j.JobStatus = 'W' THEN 'Waiting'
                    ELSE 'Waiting'
                END AS status,
                COALESCE(j.JobBytes, 0) AS bytes_written,
                COALESCE(j.JobFiles, 0) AS files_written,
                j.StartTime AS started_at,
                j.EndTime AS ended_at,
                CASE
                    WHEN j.EndTime IS NOT NULL AND j.StartTime IS NOT NULL
                    THEN EXTRACT(EPOCH FROM (j.EndTime - j.StartTime))::INTEGER
                    ELSE NULL
                END AS duration_seconds
            FROM Job j
            LEFT JOIN Client c ON j.ClientId = c.ClientId
            ORDER BY j.StartTime DESC
            LIMIT 1000
            $QUERY$
        ) AS t(
            job_id INTEGER,
            job_name TEXT,
            client_name TEXT,
            job_type TEXT,
            job_level TEXT,
            status TEXT,
            bytes_written BIGINT,
            files_written INTEGER,
            started_at TIMESTAMP,
            ended_at TIMESTAMP,
            duration_seconds INTEGER
        )
    LOOP
        BEGIN
            -- Check if job already exists (before insert/update)
            IF EXISTS (SELECT 1 FROM backup_jobs WHERE job_id = job_record.job_id) THEN
                updated_count := updated_count + 1;
            ELSE
                inserted_count := inserted_count + 1;
            END IF;

            -- Upsert job into backup_jobs table
            INSERT INTO backup_jobs (
                job_id, job_name, client_name, job_type, job_level, status,
                bytes_written, files_written, started_at, ended_at, duration_seconds,
                updated_at
            ) VALUES (
                job_record.job_id,
                job_record.job_name,
                job_record.client_name,
                job_record.job_type,
                job_record.job_level,
                job_record.status,
                job_record.bytes_written,
                job_record.files_written,
                job_record.started_at,
                job_record.ended_at,
                job_record.duration_seconds,
                NOW()
            )
            ON CONFLICT (job_id) DO UPDATE SET
                job_name = EXCLUDED.job_name,
                client_name = EXCLUDED.client_name,
                job_type = EXCLUDED.job_type,
                job_level = EXCLUDED.job_level,
                status = EXCLUDED.status,
                bytes_written = EXCLUDED.bytes_written,
                files_written = EXCLUDED.files_written,
                started_at = EXCLUDED.started_at,
                ended_at = EXCLUDED.ended_at,
                duration_seconds = EXCLUDED.duration_seconds,
                updated_at = NOW();

            jobs_count := jobs_count + 1;
        EXCEPTION
            WHEN OTHERS THEN
                error_count := error_count + 1;
                -- Log error but continue with next job
                RAISE WARNING 'Error syncing job %: %', job_record.job_id, SQLERRM;
        END;
    END LOOP;

    -- Return summary
    RETURN QUERY SELECT jobs_count, inserted_count, updated_count, error_count;
END;
$$ LANGUAGE plpgsql;

-- Create a simpler version that uses current database connection settings
-- This version assumes Bacula is on the same host/port with the same user
CREATE OR REPLACE FUNCTION sync_bacula_jobs_simple()
RETURNS TABLE(
    jobs_synced INTEGER,
    jobs_inserted INTEGER,
    jobs_updated INTEGER,
    errors INTEGER
) AS $$
DECLARE
    current_user_name TEXT;
    current_host TEXT;
    current_port INTEGER;
    current_db TEXT;
BEGIN
    -- Get current connection info
    SELECT
        current_user,
        COALESCE(inet_server_addr()::TEXT, 'localhost'),
        COALESCE(inet_server_port(), 5432),
        current_database()
    INTO
        current_user_name,
        current_host,
        current_port,
        current_db;

    -- Call main function with current connection settings
    -- Note: password needs to be passed or configured in .pgpass
    RETURN QUERY
    SELECT * FROM sync_bacula_jobs(
        'bacula', -- Try 'bacula' first
        current_host,
        current_port,
        current_user_name,
        '' -- Empty password - will use .pgpass or peer authentication
    );
END;
$$ LANGUAGE plpgsql;

-- Grant execute permission to calypso user
GRANT EXECUTE ON FUNCTION sync_bacula_jobs(TEXT, TEXT, INTEGER, TEXT, TEXT) TO calypso;
GRANT EXECUTE ON FUNCTION sync_bacula_jobs_simple() TO calypso;

-- Create indexes if they do not exist (should already exist from migration 009)
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_id ON backup_jobs(job_id);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_updated_at ON backup_jobs(updated_at);

COMMENT ON FUNCTION sync_bacula_jobs IS 'Syncs jobs from Bacula database to Calypso backup_jobs table using dblink';
COMMENT ON FUNCTION sync_bacula_jobs_simple IS 'Simplified version that uses current connection settings (requires .pgpass for password)';
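The CASE expressions in `sync_bacula_jobs` translate Bacula's single-character job codes into the human-readable strings stored in `backup_jobs`. The same mapping, sketched in Python for reference (unknown codes fall back to the same defaults the SQL uses):

```python
# Mirrors the CASE expressions in sync_bacula_jobs; unknown codes use the SQL defaults.
JOB_TYPES = {"B": "Backup", "R": "Restore", "V": "Verify", "C": "Copy", "M": "Migrate"}
JOB_LEVELS = {"F": "Full", "I": "Incremental", "D": "Differential", "S": "Since"}
JOB_STATUSES = {"T": "Running", "C": "Completed", "f": "Failed", "F": "Failed",
                "A": "Canceled", "W": "Waiting"}

def map_job(type_code: str, level_code: str, status_code: str) -> tuple[str, str, str]:
    """Map Bacula (Type, Level, JobStatus) codes to Calypso's stored values."""
    return (
        JOB_TYPES.get(type_code, "Backup"),
        JOB_LEVELS.get(level_code, "Full"),
        JOB_STATUSES.get(status_code, "Waiting"),
    )

print(map_job("B", "I", "T"))  # ('Backup', 'Incremental', 'Running')
```

Note that the status map, like the SQL, distinguishes lowercase `'f'` (fatal error) from uppercase `'F'`, both of which land in `'Failed'`.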
|
||||
status = EXCLUDED.status,
|
||||
bytes_written = EXCLUDED.bytes_written,
|
||||
files_written = EXCLUDED.files_written,
|
||||
started_at = EXCLUDED.started_at,
|
||||
ended_at = EXCLUDED.ended_at,
|
||||
duration_seconds = EXCLUDED.duration_seconds,
|
||||
updated_at = NOW();
|
||||
|
||||
jobs_count := jobs_count + 1;
|
||||
EXCEPTION
|
||||
WHEN OTHERS THEN
|
||||
error_count := error_count + 1;
|
||||
-- Log error but continue with next job
|
||||
RAISE WARNING 'Error syncing job %: %', job_record.job_id, SQLERRM;
|
||||
END;
|
||||
END LOOP;
|
||||
|
||||
-- Return summary
|
||||
RETURN QUERY SELECT jobs_count, inserted_count, updated_count, error_count;
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- Create a simpler version that uses current database connection settings
|
||||
-- This version assumes Bacula is on same host/port with same user
|
||||
CREATE OR REPLACE FUNCTION sync_bacula_jobs_simple()
|
||||
RETURNS TABLE(
|
||||
jobs_synced INTEGER,
|
||||
jobs_inserted INTEGER,
|
||||
jobs_updated INTEGER,
|
||||
errors INTEGER
|
||||
) AS $$
|
||||
DECLARE
|
||||
current_user_name TEXT;
|
||||
current_host TEXT;
|
||||
current_port INTEGER;
|
||||
current_db TEXT;
|
||||
BEGIN
|
||||
-- Get current connection info
|
||||
SELECT
|
||||
current_user,
|
||||
COALESCE(inet_server_addr()::TEXT, 'localhost'),
|
||||
COALESCE(inet_server_port(), 5432),
|
||||
current_database()
|
||||
INTO
|
||||
current_user_name,
|
||||
current_host,
|
||||
current_port,
|
||||
current_db;
|
||||
|
||||
-- Call main function with current connection settings
|
||||
-- Note: password needs to be passed or configured in .pgpass
|
||||
RETURN QUERY
|
||||
SELECT * FROM sync_bacula_jobs(
|
||||
'bacula', -- Try 'bacula' first
|
||||
current_host,
|
||||
current_port,
|
||||
current_user_name,
|
||||
'' -- Empty password - will use .pgpass or peer authentication
|
||||
);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
-- Grant execute permission to calypso user
|
||||
GRANT EXECUTE ON FUNCTION sync_bacula_jobs(TEXT, TEXT, INTEGER, TEXT, TEXT) TO calypso;
|
||||
GRANT EXECUTE ON FUNCTION sync_bacula_jobs_simple() TO calypso;
|
||||
|
||||
-- Create index if not exists (should already exist from migration 009)
|
||||
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_id ON backup_jobs(job_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_backup_jobs_updated_at ON backup_jobs(updated_at);
|
||||
|
||||
COMMENT ON FUNCTION sync_bacula_jobs IS 'Syncs jobs from Bacula database to Calypso backup_jobs table using dblink';
|
||||
COMMENT ON FUNCTION sync_bacula_jobs_simple IS 'Simplified version that uses current connection settings (requires .pgpass for password)';
|
||||
|
||||
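The JobStatus translation in the sync query can be mirrored application-side. A minimal Go sketch based on Bacula's documented JobStatus characters; `mapBaculaStatus` is an illustrative name, not part of the codebase:

```go
package main

import "fmt"

// mapBaculaStatus converts a Bacula JobStatus character to a Calypso status string,
// following the same CASE logic as the sync_bacula_jobs migration.
func mapBaculaStatus(s rune) string {
	switch s {
	case 'R':
		return "Running"
	case 'T': // terminated normally
		return "Completed"
	case 'E', 'f': // terminated in error / fatal error
		return "Failed"
	case 'A': // canceled by user
		return "Canceled"
	default: // created, blocked, waiting on resources, etc.
		return "Waiting"
	}
}

func main() {
	for _, s := range []rune{'R', 'T', 'f', 'A', 'C'} {
		fmt.Printf("%c => %s\n", s, mapBaculaStatus(s))
	}
}
```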
127 backend/internal/common/database/query_optimization.go Normal file
@@ -0,0 +1,127 @@
package database

import (
	"context"
	"database/sql"
	"fmt"
	"strings"
	"time"
)

// QueryStats holds query performance statistics
type QueryStats struct {
	Query     string
	Duration  time.Duration
	Rows      int64
	Error     error
	Timestamp time.Time
}

// QueryOptimizer provides query optimization utilities
type QueryOptimizer struct {
	db *DB
}

// NewQueryOptimizer creates a new query optimizer
func NewQueryOptimizer(db *DB) *QueryOptimizer {
	return &QueryOptimizer{db: db}
}

// ExecuteWithTimeout executes a query with a timeout
func (qo *QueryOptimizer) ExecuteWithTimeout(ctx context.Context, timeout time.Duration, query string, args ...interface{}) (sql.Result, error) {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	return qo.db.ExecContext(ctx, query, args...)
}

// QueryWithTimeout executes a query with a timeout and returns rows.
// cancel must not be deferred here: the returned *sql.Rows is consumed by the
// caller, and canceling the context on return would invalidate it. The context
// is released when the timeout fires instead.
func (qo *QueryOptimizer) QueryWithTimeout(ctx context.Context, timeout time.Duration, query string, args ...interface{}) (*sql.Rows, error) {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	time.AfterFunc(timeout, cancel)
	return qo.db.QueryContext(ctx, query, args...)
}

// QueryRowWithTimeout executes a query with a timeout and returns a single row.
// As with QueryWithTimeout, cancel is tied to the timeout rather than deferred,
// because *sql.Row defers execution until the caller invokes Scan.
func (qo *QueryOptimizer) QueryRowWithTimeout(ctx context.Context, timeout time.Duration, query string, args ...interface{}) *sql.Row {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	time.AfterFunc(timeout, cancel)
	return qo.db.QueryRowContext(ctx, query, args...)
}

// BatchInsert performs a batch insert operation.
// This is more efficient than multiple individual INSERT statements.
func (qo *QueryOptimizer) BatchInsert(ctx context.Context, table string, columns []string, values [][]interface{}) error {
	if len(values) == 0 {
		return nil
	}

	// Build the query
	query := fmt.Sprintf("INSERT INTO %s (%s) VALUES ", table, strings.Join(columns, ", "))

	// Build value placeholders
	placeholders := make([]string, len(values))
	args := make([]interface{}, 0, len(values)*len(columns))
	argIndex := 1

	for i, row := range values {
		if len(row) != len(columns) {
			return fmt.Errorf("row %d has %d values, expected %d", i, len(row), len(columns))
		}
		rowPlaceholders := make([]string, len(columns))
		for j := range columns {
			rowPlaceholders[j] = fmt.Sprintf("$%d", argIndex)
			args = append(args, row[j])
			argIndex++
		}
		placeholders[i] = fmt.Sprintf("(%s)", strings.Join(rowPlaceholders, ", "))
	}

	query += strings.Join(placeholders, ", ")

	_, err := qo.db.ExecContext(ctx, query, args...)
	return err
}

// OptimizeConnectionPool optimizes database connection pool settings.
// This should be called after analyzing query patterns.
func OptimizeConnectionPool(db *sql.DB, maxConns, maxIdleConns int, maxLifetime time.Duration) {
	db.SetMaxOpenConns(maxConns)
	db.SetMaxIdleConns(maxIdleConns)
	db.SetConnMaxLifetime(maxLifetime)

	// Set connection idle timeout (how long an idle connection can stay in the pool).
	// Default is 0 (no timeout), but setting a timeout helps prevent stale connections.
	db.SetConnMaxIdleTime(10 * time.Minute)
}

// GetConnectionStats returns current connection pool statistics
func GetConnectionStats(db *sql.DB) map[string]interface{} {
	stats := db.Stats()
	return map[string]interface{}{
		"max_open_connections": stats.MaxOpenConnections,
		"open_connections":     stats.OpenConnections,
		"in_use":               stats.InUse,
		"idle":                 stats.Idle,
		"wait_count":           stats.WaitCount,
		"wait_duration":        stats.WaitDuration.String(),
		"max_idle_closed":      stats.MaxIdleClosed,
		"max_idle_time_closed": stats.MaxIdleTimeClosed,
		"max_lifetime_closed":  stats.MaxLifetimeClosed,
	}
}
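The placeholder numbering that `BatchInsert` builds is easiest to see in isolation. A dependency-free sketch (the `buildBatchInsertQuery` helper is illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// buildBatchInsertQuery assembles one multi-row INSERT with PostgreSQL-style
// $n placeholders, numbering arguments row by row, column by column.
func buildBatchInsertQuery(table string, columns []string, rows int) string {
	placeholders := make([]string, rows)
	arg := 1
	for i := 0; i < rows; i++ {
		ph := make([]string, len(columns))
		for j := range columns {
			ph[j] = fmt.Sprintf("$%d", arg)
			arg++
		}
		placeholders[i] = "(" + strings.Join(ph, ", ") + ")"
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(columns, ", "), strings.Join(placeholders, ", "))
}

func main() {
	// Two rows of two columns consume placeholders $1..$4.
	fmt.Println(buildBatchInsertQuery("backup_jobs", []string{"job_id", "status"}, 2))
}
```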
98 backend/internal/common/logger/logger.go Normal file
@@ -0,0 +1,98 @@
package logger

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// Logger wraps zap.Logger for structured logging
type Logger struct {
	*zap.Logger
}

// NewLogger creates a new logger instance
func NewLogger(service string) *Logger {
	config := zap.NewProductionConfig()
	config.EncoderConfig.TimeKey = "timestamp"
	config.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
	config.EncoderConfig.MessageKey = "message"
	config.EncoderConfig.LevelKey = "level"

	// Use JSON format by default; can be overridden via env
	logFormat := os.Getenv("CALYPSO_LOG_FORMAT")
	if logFormat == "text" {
		config.Encoding = "console"
		config.EncoderConfig.EncodeLevel = zapcore.CapitalColorLevelEncoder
	}

	// Set log level from environment
	logLevel := os.Getenv("CALYPSO_LOG_LEVEL")
	if logLevel != "" {
		var level zapcore.Level
		if err := level.UnmarshalText([]byte(logLevel)); err == nil {
			config.Level = zap.NewAtomicLevelAt(level)
		}
	}

	zapLogger, err := config.Build(
		zap.AddCaller(),
		zap.AddStacktrace(zapcore.ErrorLevel),
		zap.Fields(zap.String("service", service)),
	)
	if err != nil {
		panic(err)
	}

	return &Logger{zapLogger}
}

// WithFields adds structured fields to the logger
func (l *Logger) WithFields(fields ...zap.Field) *Logger {
	return &Logger{l.Logger.With(fields...)}
}

// Info logs an info message with optional key-value fields
func (l *Logger) Info(msg string, fields ...interface{}) {
	l.Logger.Info(msg, toZapFields(fields...)...)
}

// Error logs an error message with optional key-value fields
func (l *Logger) Error(msg string, fields ...interface{}) {
	l.Logger.Error(msg, toZapFields(fields...)...)
}

// Warn logs a warning message with optional key-value fields
func (l *Logger) Warn(msg string, fields ...interface{}) {
	l.Logger.Warn(msg, toZapFields(fields...)...)
}

// Debug logs a debug message with optional key-value fields
func (l *Logger) Debug(msg string, fields ...interface{}) {
	l.Logger.Debug(msg, toZapFields(fields...)...)
}

// Fatal logs a fatal message and exits
func (l *Logger) Fatal(msg string, fields ...interface{}) {
	l.Logger.Fatal(msg, toZapFields(fields...)...)
}

// toZapFields converts key-value pairs to zap fields
func toZapFields(fields ...interface{}) []zap.Field {
	zapFields := make([]zap.Field, 0, len(fields)/2)
	for i := 0; i < len(fields)-1; i += 2 {
		key, ok := fields[i].(string)
		if !ok {
			continue
		}
		zapFields = append(zapFields, zap.Any(key, fields[i+1]))
	}
	return zapFields
}
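The key-value pairing convention of `toZapFields` can be shown without the zap dependency; `pairFields` below is an illustrative stand-in that collects the same pairs into a map, skipping non-string keys and any trailing odd value:

```go
package main

import "fmt"

// pairFields mirrors toZapFields' convention: variadic arguments are treated
// as alternating key-value pairs; pairs with a non-string key are dropped.
func pairFields(kv ...interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(kv)/2)
	for i := 0; i+1 < len(kv); i += 2 {
		key, ok := kv[i].(string)
		if !ok {
			continue
		}
		out[key] = kv[i+1]
	}
	return out
}

func main() {
	// The 42 key is not a string, so that pair is skipped.
	fmt.Println(pairFields("pool", "tank", "size", 1024, 42, "ignored"))
}
```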
106 backend/internal/common/password/password.go Normal file
@@ -0,0 +1,106 @@
package password

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"errors"
	"fmt"
	"strings"

	"github.com/atlasos/calypso/internal/common/config"
	"golang.org/x/crypto/argon2"
)

// HashPassword hashes a password using Argon2id
func HashPassword(password string, params config.Argon2Params) (string, error) {
	// Generate a random salt
	salt := make([]byte, params.SaltLength)
	if _, err := rand.Read(salt); err != nil {
		return "", fmt.Errorf("failed to generate salt: %w", err)
	}

	// Hash the password
	hash := argon2.IDKey(
		[]byte(password),
		salt,
		params.Iterations,
		params.Memory,
		params.Parallelism,
		params.KeyLength,
	)

	// Encode the hash and salt in the standard format:
	// $argon2id$v=<version>$m=<memory>,t=<iterations>,p=<parallelism>$<salt>$<hash>
	b64Salt := base64.RawStdEncoding.EncodeToString(salt)
	b64Hash := base64.RawStdEncoding.EncodeToString(hash)

	encodedHash := fmt.Sprintf(
		"$argon2id$v=%d$m=%d,t=%d,p=%d$%s$%s",
		argon2.Version,
		params.Memory,
		params.Iterations,
		params.Parallelism,
		b64Salt,
		b64Hash,
	)

	return encodedHash, nil
}

// VerifyPassword verifies a password against an Argon2id hash
func VerifyPassword(password, encodedHash string) (bool, error) {
	// Parse the encoded hash
	parts := strings.Split(encodedHash, "$")
	if len(parts) != 6 {
		return false, errors.New("invalid hash format")
	}

	if parts[1] != "argon2id" {
		return false, errors.New("unsupported hash algorithm")
	}

	// Parse version
	var version int
	if _, err := fmt.Sscanf(parts[2], "v=%d", &version); err != nil {
		return false, fmt.Errorf("failed to parse version: %w", err)
	}
	if version != argon2.Version {
		return false, errors.New("incompatible version")
	}

	// Parse parameters
	var memory, iterations uint32
	var parallelism uint8
	if _, err := fmt.Sscanf(parts[3], "m=%d,t=%d,p=%d", &memory, &iterations, &parallelism); err != nil {
		return false, fmt.Errorf("failed to parse parameters: %w", err)
	}

	// Decode salt and hash
	salt, err := base64.RawStdEncoding.DecodeString(parts[4])
	if err != nil {
		return false, fmt.Errorf("failed to decode salt: %w", err)
	}

	hash, err := base64.RawStdEncoding.DecodeString(parts[5])
	if err != nil {
		return false, fmt.Errorf("failed to decode hash: %w", err)
	}

	// Compute the hash of the provided password
	otherHash := argon2.IDKey(
		[]byte(password),
		salt,
		iterations,
		memory,
		parallelism,
		uint32(len(hash)),
	)

	// Compare hashes using constant-time comparison
	if subtle.ConstantTimeCompare(hash, otherHash) == 1 {
		return true, nil
	}

	return false, nil
}
182 backend/internal/common/password/password_test.go Normal file
@@ -0,0 +1,182 @@
package password

import (
	"testing"

	"github.com/atlasos/calypso/internal/common/config"
)

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func contains(s, substr string) bool {
	for i := 0; i <= len(s)-len(substr); i++ {
		if s[i:i+len(substr)] == substr {
			return true
		}
	}
	return false
}

func TestHashPassword(t *testing.T) {
	params := config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	}

	password := "test-password-123"
	hash, err := HashPassword(password, params)
	if err != nil {
		t.Fatalf("HashPassword failed: %v", err)
	}

	// Verify hash format
	if hash == "" {
		t.Error("HashPassword returned empty string")
	}

	// Verify hash starts with Argon2id prefix
	if len(hash) < 12 || hash[:12] != "$argon2id$v=" {
		t.Errorf("Hash does not start with expected prefix, got: %s", hash[:min(30, len(hash))])
	}

	// Verify hash contains required components
	if !contains(hash, "$m=") || !contains(hash, ",t=") || !contains(hash, ",p=") {
		t.Errorf("Hash missing required components, got: %s", hash[:min(50, len(hash))])
	}

	// Verify hash is different each time (due to random salt)
	hash2, err := HashPassword(password, params)
	if err != nil {
		t.Fatalf("HashPassword failed on second call: %v", err)
	}

	if hash == hash2 {
		t.Error("HashPassword returned same hash for same password (salt should be random)")
	}
}

func TestVerifyPassword(t *testing.T) {
	params := config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	}

	password := "test-password-123"
	hash, err := HashPassword(password, params)
	if err != nil {
		t.Fatalf("HashPassword failed: %v", err)
	}

	// Test correct password
	valid, err := VerifyPassword(password, hash)
	if err != nil {
		t.Fatalf("VerifyPassword failed: %v", err)
	}
	if !valid {
		t.Error("VerifyPassword returned false for correct password")
	}

	// Test wrong password
	valid, err = VerifyPassword("wrong-password", hash)
	if err != nil {
		t.Fatalf("VerifyPassword failed: %v", err)
	}
	if valid {
		t.Error("VerifyPassword returned true for wrong password")
	}

	// Test empty password
	valid, err = VerifyPassword("", hash)
	if err != nil {
		t.Fatalf("VerifyPassword failed: %v", err)
	}
	if valid {
		t.Error("VerifyPassword returned true for empty password")
	}
}

func TestVerifyPassword_InvalidHash(t *testing.T) {
	tests := []struct {
		name string
		hash string
	}{
		{"empty hash", ""},
		{"invalid format", "not-a-hash"},
		{"wrong algorithm", "$argon2$v=19$m=65536,t=3,p=4$salt$hash"},
		{"incomplete hash", "$argon2id$v=19"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			valid, err := VerifyPassword("test-password", tt.hash)
			if err == nil {
				t.Error("VerifyPassword should return error for invalid hash")
			}
			if valid {
				t.Error("VerifyPassword should return false for invalid hash")
			}
		})
	}
}

func TestHashPassword_DifferentPasswords(t *testing.T) {
	params := config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	}

	password1 := "password1"
	password2 := "password2"

	hash1, err := HashPassword(password1, params)
	if err != nil {
		t.Fatalf("HashPassword failed: %v", err)
	}

	hash2, err := HashPassword(password2, params)
	if err != nil {
		t.Fatalf("HashPassword failed: %v", err)
	}

	// Hashes should be different
	if hash1 == hash2 {
		t.Error("Different passwords produced same hash")
	}

	// Each password should verify against its own hash
	valid, err := VerifyPassword(password1, hash1)
	if err != nil || !valid {
		t.Error("Password1 should verify against its own hash")
	}

	valid, err = VerifyPassword(password2, hash2)
	if err != nil || !valid {
		t.Error("Password2 should verify against its own hash")
	}

	// Passwords should not verify against each other's hash
	valid, err = VerifyPassword(password1, hash2)
	if err != nil || valid {
		t.Error("Password1 should not verify against password2's hash")
	}

	valid, err = VerifyPassword(password2, hash1)
	if err != nil || valid {
		t.Error("Password2 should not verify against password1's hash")
	}
}
181 backend/internal/common/router/cache.go Normal file
@@ -0,0 +1,181 @@
package router

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/cache"
	"github.com/gin-gonic/gin"
)

// GenerateKey generates a cache key from parts (local helper)
func GenerateKey(prefix string, parts ...string) string {
	key := prefix
	for _, part := range parts {
		key += ":" + part
	}

	// Hash long keys to keep them manageable
	if len(key) > 200 {
		hash := sha256.Sum256([]byte(key))
		return prefix + ":" + hex.EncodeToString(hash[:])
	}

	return key
}

// CacheConfig holds cache configuration
type CacheConfig struct {
	Enabled    bool
	DefaultTTL time.Duration
	MaxAge     int // seconds for Cache-Control header
}

// cacheMiddleware creates a caching middleware.
// The store parameter is named to avoid shadowing the cache package.
func cacheMiddleware(cfg CacheConfig, store *cache.Cache) gin.HandlerFunc {
	if !cfg.Enabled || store == nil {
		return func(c *gin.Context) {
			c.Next()
		}
	}

	return func(c *gin.Context) {
		// Only cache GET requests
		if c.Request.Method != http.MethodGet {
			c.Next()
			return
		}

		// Don't cache VTL endpoints - they change frequently
		path := c.Request.URL.Path
		if strings.HasPrefix(path, "/api/v1/tape/vtl/") {
			c.Next()
			return
		}

		// Generate cache key from request path and query string
		keyParts := []string{c.Request.URL.Path}
		if c.Request.URL.RawQuery != "" {
			keyParts = append(keyParts, c.Request.URL.RawQuery)
		}
		cacheKey := GenerateKey("http", keyParts...)

		// Try to get from cache
		if cached, found := store.Get(cacheKey); found {
			if cachedResponse, ok := cached.([]byte); ok {
				// Set cache headers
				if cfg.MaxAge > 0 {
					c.Header("Cache-Control", fmt.Sprintf("public, max-age=%d", cfg.MaxAge))
					c.Header("X-Cache", "HIT")
				}
				c.Data(http.StatusOK, "application/json", cachedResponse)
				c.Abort()
				return
			}
		}

		// Cache miss - capture response
		writer := &responseWriter{
			ResponseWriter: c.Writer,
			body:           &bytes.Buffer{},
		}
		c.Writer = writer

		c.Next()

		// Only cache successful responses
		if writer.Status() == http.StatusOK {
			// Cache the response body
			responseBody := writer.body.Bytes()
			store.Set(cacheKey, responseBody)

			// Set cache headers
			if cfg.MaxAge > 0 {
				c.Header("Cache-Control", fmt.Sprintf("public, max-age=%d", cfg.MaxAge))
				c.Header("X-Cache", "MISS")
			}
		}
	}
}

// responseWriter wraps gin.ResponseWriter to capture the response body
type responseWriter struct {
	gin.ResponseWriter
	body *bytes.Buffer
}

func (w *responseWriter) Write(b []byte) (int, error) {
	w.body.Write(b)
	return w.ResponseWriter.Write(b)
}

func (w *responseWriter) WriteString(s string) (int, error) {
	w.body.WriteString(s)
	return w.ResponseWriter.WriteString(s)
}

// cacheControlMiddleware adds Cache-Control headers based on endpoint
func cacheControlMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		path := c.Request.URL.Path

		// Set appropriate cache control for different endpoints
		switch {
		case path == "/api/v1/health":
			// Health check can be cached for a short time
			c.Header("Cache-Control", "public, max-age=30")
		case path == "/api/v1/monitoring/metrics":
			// Metrics can be cached for a short time
			c.Header("Cache-Control", "public, max-age=60")
		case path == "/api/v1/monitoring/alerts":
			// Alerts should have minimal caching
			c.Header("Cache-Control", "public, max-age=10")
		case path == "/api/v1/storage/disks":
			// Disk list can be cached for a moderate time
			c.Header("Cache-Control", "public, max-age=300")
		case path == "/api/v1/storage/repositories":
			// Repositories can be cached
			c.Header("Cache-Control", "public, max-age=180")
		case path == "/api/v1/system/services":
			// Service list can be cached briefly
			c.Header("Cache-Control", "public, max-age=60")
		case strings.HasPrefix(path, "/api/v1/storage/zfs/pools"):
			// ZFS pools and datasets should not be cached - they change frequently
			c.Header("Cache-Control", "no-cache, no-store, must-revalidate")
		default:
			// Default: no cache for other endpoints
			c.Header("Cache-Control", "no-cache, no-store, must-revalidate")
		}

		c.Next()
	}
}

// InvalidateCacheKey invalidates a specific cache key
func InvalidateCacheKey(store *cache.Cache, key string) {
	if store != nil {
		store.Delete(key)
	}
}

// InvalidateCachePattern invalidates all cache keys matching a pattern
func InvalidateCachePattern(store *cache.Cache, pattern string) {
	if store == nil {
		return
	}

	// Get all keys and delete matching ones.
	// Note: This is a simple implementation. For production, consider using
	// a cache library that supports pattern matching (like Redis)
	stats := store.Stats()
	if total, ok := stats["total_entries"].(int); ok && total > 0 {
		// For now, we'll clear the entire cache if pattern matching is needed
		// In production, use Redis with pattern matching
		store.Clear()
	}
}
161
backend/internal/common/router/middleware.go
Normal file
161
backend/internal/common/router/middleware.go
Normal file
@@ -0,0 +1,161 @@
|
||||
package router
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
"github.com/atlasos/calypso/internal/auth"
|
||||
"github.com/atlasos/calypso/internal/common/database"
|
||||
"github.com/atlasos/calypso/internal/iam"
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
// authMiddleware validates JWT tokens and sets user context
|
||||
func authMiddleware(authHandler *auth.Handler) gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
var token string
|
||||
|
||||
// Try to extract token from Authorization header first
|
||||
authHeader := c.GetHeader("Authorization")
|
||||
if authHeader != "" {
|
||||
// Parse Bearer token
|
||||
parts := strings.SplitN(authHeader, " ", 2)
|
||||
if len(parts) == 2 && parts[0] == "Bearer" {
|
||||
token = parts[1]
|
||||
}
|
||||
}
|
||||
|
||||
// If no token from header, try query parameter (for WebSocket)
|
||||
if token == "" {
|
||||
token = c.Query("token")
|
||||
}
|
||||
|
||||
// If still no token, return error
|
||||
if token == "" {
|
||||
c.JSON(http.StatusUnauthorized, gin.H{"error": "missing authorization token"})
|
||||
c.Abort()
|
||||
return
|
||||
}
|
||||
|
||||
// Validate token and get user
|
||||
user, err := authHandler.ValidateToken(token)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid or expired token"})
|
||||
c.Abort()
|
||||
return
|
||||
}
|
||||
|
||||
// Load user roles and permissions from database
|
||||
// We need to get the DB from the auth handler's context
|
||||
// For now, we'll load them in the permission middleware instead
|
||||
|
||||
// Set user in context
|
||||
c.Set("user", user)
|
||||
c.Set("user_id", user.ID)
|
||||
c.Set("username", user.Username)
|
||||
|
||||
c.Next()
|
||||
}
|
||||
}
|
||||
|
||||
// requireRole creates middleware that requires a specific role
|
||||
func requireRole(roleName string) gin.HandlerFunc {
|
||||
return func(c *gin.Context) {
|
||||
user, exists := c.Get("user")
|
||||
if !exists {
|
||||
c.JSON(http.StatusUnauthorized, gin.H{"error": "authentication required"})
|
||||
c.Abort()
|
||||
return
|
||||
}
|
||||
|
||||
authUser, ok := user.(*iam.User)
|
||||
if !ok {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "invalid user context"})
|
||||
c.Abort()
|
||||
return
|
||||
}
|
||||
|
||||
// Load roles if not already loaded
|
||||
if len(authUser.Roles) == 0 {
|
||||
// Get DB from context (set by router)
|
||||
db, exists := c.Get("db")
|
||||
if exists {
|
||||
if dbConn, ok := db.(*database.DB); ok {
|
||||
roles, err := iam.GetUserRoles(dbConn, authUser.ID)
|
||||
if err == nil {
|
||||
authUser.Roles = roles
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check if user has the required role
|
||||
hasRole := false
|
||||
for _, role := range authUser.Roles {
|
||||
if role == roleName {
|
||||
hasRole = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !hasRole {
|
||||
c.JSON(http.StatusForbidden, gin.H{"error": "insufficient permissions"})
|
||||
c.Abort()
|
||||
return
|
||||
}
|
||||
|
||||
c.Next()
|
||||
}
|
||||
}
|
||||
|
||||
```go
// requirePermission creates middleware that requires a specific permission
func requirePermission(resource, action string) gin.HandlerFunc {
	return func(c *gin.Context) {
		user, exists := c.Get("user")
		if !exists {
			c.JSON(http.StatusUnauthorized, gin.H{"error": "authentication required"})
			c.Abort()
			return
		}

		authUser, ok := user.(*iam.User)
		if !ok {
			c.JSON(http.StatusInternalServerError, gin.H{"error": "invalid user context"})
			c.Abort()
			return
		}

		// Load permissions if not already loaded
		if len(authUser.Permissions) == 0 {
			// Get DB from context (set by router)
			db, exists := c.Get("db")
			if exists {
				if dbConn, ok := db.(*database.DB); ok {
					permissions, err := iam.GetUserPermissions(dbConn, authUser.ID)
					if err == nil {
						authUser.Permissions = permissions
					}
				}
			}
		}

		// Check if user has the required permission
		permissionName := resource + ":" + action
		hasPermission := false
		for _, perm := range authUser.Permissions {
			if perm == permissionName {
				hasPermission = true
				break
			}
		}

		if !hasPermission {
			c.JSON(http.StatusForbidden, gin.H{"error": "insufficient permissions"})
			c.Abort()
			return
		}

		c.Next()
	}
}
```
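The middleware above boils down to matching the string `resource + ":" + action` against the user's permission list. A minimal stdlib sketch of that check (the helper name and sample permissions are illustrative, not from the codebase):

```go
package main

import "fmt"

// hasPermission reports whether perms contains the "resource:action" pair,
// mirroring the linear scan used by the requirePermission middleware.
func hasPermission(perms []string, resource, action string) bool {
	want := resource + ":" + action
	for _, p := range perms {
		if p == want {
			return true
		}
	}
	return false
}

func main() {
	perms := []string{"storage:read", "backup:write"}
	fmt.Println(hasPermission(perms, "storage", "read"))  // true
	fmt.Println(hasPermission(perms, "storage", "write")) // false
}
```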
backend/internal/common/router/ratelimit.go (Normal file, 83 lines)
@@ -0,0 +1,83 @@
```go
package router

import (
	"net/http"
	"sync"

	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/gin-gonic/gin"
	"golang.org/x/time/rate"
)

// rateLimiter manages rate limiting per IP address
type rateLimiter struct {
	limiters map[string]*rate.Limiter
	mu       sync.RWMutex
	config   config.RateLimitConfig
	logger   *logger.Logger
}

// newRateLimiter creates a new rate limiter
func newRateLimiter(cfg config.RateLimitConfig, log *logger.Logger) *rateLimiter {
	return &rateLimiter{
		limiters: make(map[string]*rate.Limiter),
		config:   cfg,
		logger:   log,
	}
}

// getLimiter returns the rate limiter for the given IP address, creating it on first use
func (rl *rateLimiter) getLimiter(ip string) *rate.Limiter {
	rl.mu.RLock()
	limiter, exists := rl.limiters[ip]
	rl.mu.RUnlock()

	if exists {
		return limiter
	}

	// Create a new limiter for this IP
	rl.mu.Lock()
	defer rl.mu.Unlock()

	// Double-check after acquiring the write lock
	if limiter, exists := rl.limiters[ip]; exists {
		return limiter
	}

	// Create a limiter with the configured rate
	limiter = rate.NewLimiter(rate.Limit(rl.config.RequestsPerSecond), rl.config.BurstSize)
	rl.limiters[ip] = limiter

	return limiter
}

// rateLimitMiddleware creates rate limiting middleware
func rateLimitMiddleware(cfg *config.Config, log *logger.Logger) gin.HandlerFunc {
	if !cfg.Security.RateLimit.Enabled {
		// Rate limiting disabled, return no-op middleware
		return func(c *gin.Context) {
			c.Next()
		}
	}

	limiter := newRateLimiter(cfg.Security.RateLimit, log)

	return func(c *gin.Context) {
		ip := c.ClientIP()
		ipLimiter := limiter.getLimiter(ip) // named to avoid shadowing the shared rateLimiter

		if !ipLimiter.Allow() {
			log.Warn("Rate limit exceeded", "ip", ip, "path", c.Request.URL.Path)
			c.JSON(http.StatusTooManyRequests, gin.H{
				"error": "rate limit exceeded",
			})
			c.Abort()
			return
		}

		c.Next()
	}
}
```
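The read-lock / write-lock re-check in `getLimiter` is the classic get-or-create pattern. A stdlib-only sketch, with a trivial `counter` type standing in for `*rate.Limiter` (all names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

type counter struct{ n int } // stand-in for *rate.Limiter

type registry struct {
	mu    sync.RWMutex
	items map[string]*counter
}

// get returns the counter for key, creating it on first use. The second
// lookup under the write lock prevents two goroutines that both missed
// under the read lock from each inserting a different value.
func (r *registry) get(key string) *counter {
	r.mu.RLock()
	c, ok := r.items[key]
	r.mu.RUnlock()
	if ok {
		return c
	}

	r.mu.Lock()
	defer r.mu.Unlock()
	if c, ok := r.items[key]; ok { // double-check after upgrading the lock
		return c
	}
	c = &counter{}
	r.items[key] = c
	return c
}

func main() {
	r := &registry{items: make(map[string]*counter)}
	a := r.get("10.0.0.1")
	b := r.get("10.0.0.1")
	fmt.Println(a == b) // true: the same instance is returned for the same key
}
```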
backend/internal/common/router/router.go (Normal file, 428 lines)
@@ -0,0 +1,428 @@
```go
package router

import (
	"context"
	"time"

	"github.com/atlasos/calypso/internal/audit"
	"github.com/atlasos/calypso/internal/auth"
	"github.com/atlasos/calypso/internal/backup"
	"github.com/atlasos/calypso/internal/common/cache"
	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/iam"
	"github.com/atlasos/calypso/internal/monitoring"
	"github.com/atlasos/calypso/internal/scst"
	"github.com/atlasos/calypso/internal/shares"
	"github.com/atlasos/calypso/internal/storage"
	"github.com/atlasos/calypso/internal/system"
	"github.com/atlasos/calypso/internal/tape_physical"
	"github.com/atlasos/calypso/internal/tape_vtl"
	"github.com/atlasos/calypso/internal/tasks"
	"github.com/gin-gonic/gin"
)

// NewRouter creates and configures the HTTP router
func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Engine {
	if cfg.Logging.Level == "debug" {
		gin.SetMode(gin.DebugMode)
	} else {
		gin.SetMode(gin.ReleaseMode)
	}

	r := gin.New()

	// Initialize cache if enabled
	var responseCache *cache.Cache
	if cfg.Server.Cache.Enabled {
		responseCache = cache.NewCache(cfg.Server.Cache.DefaultTTL)
		log.Info("Response caching enabled", "default_ttl", cfg.Server.Cache.DefaultTTL)
	}

	// Middleware
	r.Use(ginLogger(log))
	r.Use(gin.Recovery())
	r.Use(securityHeadersMiddleware(cfg))
	r.Use(rateLimitMiddleware(cfg, log))
	r.Use(corsMiddleware(cfg))

	// Cache control headers (always applied)
	r.Use(cacheControlMiddleware())

	// Response caching middleware (if enabled)
	if cfg.Server.Cache.Enabled {
		cacheConfig := CacheConfig{
			Enabled:    cfg.Server.Cache.Enabled,
			DefaultTTL: cfg.Server.Cache.DefaultTTL,
			MaxAge:     cfg.Server.Cache.MaxAge,
		}
		r.Use(cacheMiddleware(cacheConfig, responseCache))
	}

	// Initialize monitoring services
	eventHub := monitoring.NewEventHub(log)
	alertService := monitoring.NewAlertService(db, log)
	alertService.SetEventHub(eventHub) // Connect alert service to event hub
	metricsService := monitoring.NewMetricsService(db, log)
	healthService := monitoring.NewHealthService(db, log, metricsService)

	// Start event hub in background
	go eventHub.Run()

	// Start metrics broadcaster in background (every 30 seconds)
	go func() {
		ticker := time.NewTicker(30 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			if metrics, err := metricsService.CollectMetrics(context.Background()); err == nil {
				eventHub.BroadcastMetrics(metrics)
			}
		}
	}()

	// Initialize and start alert rule engine
	alertRuleEngine := monitoring.NewAlertRuleEngine(db, log, alertService)

	// Register default alert rules
	alertRuleEngine.RegisterRule(monitoring.NewAlertRule(
		"storage-capacity-warning",
		"Storage Capacity Warning",
		monitoring.AlertSourceStorage,
		&monitoring.StorageCapacityCondition{ThresholdPercent: 80.0},
		monitoring.AlertSeverityWarning,
		true,
		"Alert when storage repositories exceed 80% capacity",
	))
	alertRuleEngine.RegisterRule(monitoring.NewAlertRule(
		"storage-capacity-critical",
		"Storage Capacity Critical",
		monitoring.AlertSourceStorage,
		&monitoring.StorageCapacityCondition{ThresholdPercent: 95.0},
		monitoring.AlertSeverityCritical,
		true,
		"Alert when storage repositories exceed 95% capacity",
	))
	alertRuleEngine.RegisterRule(monitoring.NewAlertRule(
		"task-failure",
		"Task Failure",
		monitoring.AlertSourceTask,
		&monitoring.TaskFailureCondition{LookbackMinutes: 60},
		monitoring.AlertSeverityWarning,
		true,
		"Alert when tasks fail within the last hour",
	))

	// Start alert rule engine in background
	ctx := context.Background()
	go alertRuleEngine.Start(ctx)

	// Health check (no auth required) - enhanced
	r.GET("/api/v1/health", func(c *gin.Context) {
		health := healthService.CheckHealth(c.Request.Context())
		statusCode := 200
		if health.Status == "unhealthy" {
			statusCode = 503
		} // "degraded" still returns 200, with the degraded status in the body
		c.JSON(statusCode, health)
	})

	// API v1 routes
	v1 := r.Group("/api/v1")
	{
		// Auth routes (public)
		authHandler := auth.NewHandler(db, cfg, log)
		v1.POST("/auth/login", authHandler.Login)
		v1.POST("/auth/logout", authHandler.Logout)

		// Audit middleware for mutating operations (applied to all v1 routes)
		auditMiddleware := audit.NewMiddleware(db, log)
		v1.Use(auditMiddleware.LogRequest())

		// Protected routes
		protected := v1.Group("")
		protected.Use(authMiddleware(authHandler))
		protected.Use(func(c *gin.Context) {
			// Store DB in context for permission middleware
			c.Set("db", db)
			c.Next()
		})
		{
			// Auth
			protected.GET("/auth/me", authHandler.Me)

			// Tasks
			taskHandler := tasks.NewHandler(db, log)
			protected.GET("/tasks/:id", taskHandler.GetTask)

			// Storage
			storageHandler := storage.NewHandler(db, log)
			// Pass cache to storage handler for cache invalidation
			if responseCache != nil {
				storageHandler.SetCache(responseCache)
			}

			// Start disk monitor service in background (syncs disks every 5 minutes)
			diskMonitor := storage.NewDiskMonitor(db, log, 5*time.Minute)
			go diskMonitor.Start(context.Background())

			// Start ZFS pool monitor service in background (syncs pools every 2 minutes)
			zfsPoolMonitor := storage.NewZFSPoolMonitor(db, log, 2*time.Minute)
			go zfsPoolMonitor.Start(context.Background())

			storageGroup := protected.Group("/storage")
			storageGroup.Use(requirePermission("storage", "read"))
			{
				storageGroup.GET("/disks", storageHandler.ListDisks)
				storageGroup.POST("/disks/sync", storageHandler.SyncDisks)
				storageGroup.GET("/volume-groups", storageHandler.ListVolumeGroups)
				storageGroup.GET("/repositories", storageHandler.ListRepositories)
				storageGroup.GET("/repositories/:id", storageHandler.GetRepository)
				storageGroup.POST("/repositories", requirePermission("storage", "write"), storageHandler.CreateRepository)
				storageGroup.DELETE("/repositories/:id", requirePermission("storage", "write"), storageHandler.DeleteRepository)
				// ZFS Pools
				storageGroup.GET("/zfs/pools", storageHandler.ListZFSPools)
				storageGroup.GET("/zfs/pools/:id", storageHandler.GetZFSPool)
				storageGroup.POST("/zfs/pools", requirePermission("storage", "write"), storageHandler.CreateZPool)
				storageGroup.DELETE("/zfs/pools/:id", requirePermission("storage", "write"), storageHandler.DeleteZFSPool)
				storageGroup.POST("/zfs/pools/:id/spare", requirePermission("storage", "write"), storageHandler.AddSpareDisk)
				// ZFS Datasets
				storageGroup.GET("/zfs/pools/:id/datasets", storageHandler.ListZFSDatasets)
				storageGroup.POST("/zfs/pools/:id/datasets", requirePermission("storage", "write"), storageHandler.CreateZFSDataset)
				storageGroup.DELETE("/zfs/pools/:id/datasets/:dataset", requirePermission("storage", "write"), storageHandler.DeleteZFSDataset)
				// ZFS ARC Stats
				storageGroup.GET("/zfs/arc/stats", storageHandler.GetARCStats)
			}

			// Shares (CIFS/NFS)
			sharesHandler := shares.NewHandler(db, log)
			sharesGroup := protected.Group("/shares")
			sharesGroup.Use(requirePermission("storage", "read"))
			{
				sharesGroup.GET("", sharesHandler.ListShares)
				sharesGroup.GET("/:id", sharesHandler.GetShare)
				sharesGroup.POST("", requirePermission("storage", "write"), sharesHandler.CreateShare)
				sharesGroup.PUT("/:id", requirePermission("storage", "write"), sharesHandler.UpdateShare)
				sharesGroup.DELETE("/:id", requirePermission("storage", "write"), sharesHandler.DeleteShare)
			}

			// SCST
			scstHandler := scst.NewHandler(db, log)
			scstGroup := protected.Group("/scst")
			scstGroup.Use(requirePermission("iscsi", "read"))
			{
				scstGroup.GET("/targets", scstHandler.ListTargets)
				scstGroup.GET("/targets/:id", scstHandler.GetTarget)
				scstGroup.POST("/targets", scstHandler.CreateTarget)
				scstGroup.POST("/targets/:id/luns", requirePermission("iscsi", "write"), scstHandler.AddLUN)
				scstGroup.DELETE("/targets/:id/luns/:lunId", requirePermission("iscsi", "write"), scstHandler.RemoveLUN)
				scstGroup.POST("/targets/:id/initiators", scstHandler.AddInitiator)
				scstGroup.POST("/targets/:id/enable", scstHandler.EnableTarget)
				scstGroup.POST("/targets/:id/disable", scstHandler.DisableTarget)
				scstGroup.DELETE("/targets/:id", requirePermission("iscsi", "write"), scstHandler.DeleteTarget)
				scstGroup.GET("/initiators", scstHandler.ListAllInitiators)
				scstGroup.GET("/initiators/:id", scstHandler.GetInitiator)
				scstGroup.DELETE("/initiators/:id", scstHandler.RemoveInitiator)
				scstGroup.GET("/extents", scstHandler.ListExtents)
				scstGroup.POST("/extents", scstHandler.CreateExtent)
				scstGroup.DELETE("/extents/:device", scstHandler.DeleteExtent)
				scstGroup.POST("/config/apply", scstHandler.ApplyConfig)
				scstGroup.GET("/handlers", scstHandler.ListHandlers)
				scstGroup.GET("/portals", scstHandler.ListPortals)
				scstGroup.GET("/portals/:id", scstHandler.GetPortal)
				scstGroup.POST("/portals", scstHandler.CreatePortal)
				scstGroup.PUT("/portals/:id", scstHandler.UpdatePortal)
				scstGroup.DELETE("/portals/:id", scstHandler.DeletePortal)
				// Initiator Groups routes
				scstGroup.GET("/initiator-groups", scstHandler.ListAllInitiatorGroups)
				scstGroup.GET("/initiator-groups/:id", scstHandler.GetInitiatorGroup)
				scstGroup.POST("/initiator-groups", requirePermission("iscsi", "write"), scstHandler.CreateInitiatorGroup)
				scstGroup.PUT("/initiator-groups/:id", requirePermission("iscsi", "write"), scstHandler.UpdateInitiatorGroup)
				scstGroup.DELETE("/initiator-groups/:id", requirePermission("iscsi", "write"), scstHandler.DeleteInitiatorGroup)
				scstGroup.POST("/initiator-groups/:id/initiators", requirePermission("iscsi", "write"), scstHandler.AddInitiatorToGroup)
				// Config file management
				scstGroup.GET("/config/file", requirePermission("iscsi", "read"), scstHandler.GetConfigFile)
				scstGroup.PUT("/config/file", requirePermission("iscsi", "write"), scstHandler.UpdateConfigFile)
			}

			// Physical Tape Libraries
			tapeHandler := tape_physical.NewHandler(db, log)
			tapeGroup := protected.Group("/tape/physical")
			tapeGroup.Use(requirePermission("tape", "read"))
			{
				tapeGroup.GET("/libraries", tapeHandler.ListLibraries)
				tapeGroup.POST("/libraries/discover", tapeHandler.DiscoverLibraries)
				tapeGroup.GET("/libraries/:id", tapeHandler.GetLibrary)
				tapeGroup.POST("/libraries/:id/inventory", tapeHandler.PerformInventory)
				tapeGroup.POST("/libraries/:id/load", tapeHandler.LoadTape)
				tapeGroup.POST("/libraries/:id/unload", tapeHandler.UnloadTape)
			}

			// Virtual Tape Libraries
			vtlHandler := tape_vtl.NewHandler(db, log)

			// Start MHVTL monitor service in background (syncs every 5 minutes)
			mhvtlMonitor := tape_vtl.NewMHVTLMonitor(db, log, "/etc/mhvtl", 5*time.Minute)
			go mhvtlMonitor.Start(context.Background())

			vtlGroup := protected.Group("/tape/vtl")
			vtlGroup.Use(requirePermission("tape", "read"))
			{
				vtlGroup.GET("/libraries", vtlHandler.ListLibraries)
				vtlGroup.POST("/libraries", vtlHandler.CreateLibrary)
				vtlGroup.GET("/libraries/:id", vtlHandler.GetLibrary)
				vtlGroup.DELETE("/libraries/:id", vtlHandler.DeleteLibrary)
				vtlGroup.GET("/libraries/:id/drives", vtlHandler.GetLibraryDrives)
				vtlGroup.GET("/libraries/:id/tapes", vtlHandler.GetLibraryTapes)
				vtlGroup.POST("/libraries/:id/tapes", vtlHandler.CreateTape)
				vtlGroup.POST("/libraries/:id/load", vtlHandler.LoadTape)
				vtlGroup.POST("/libraries/:id/unload", vtlHandler.UnloadTape)
			}

			// System Management
			systemService := system.NewService(log)
			systemHandler := system.NewHandler(log, tasks.NewEngine(db, log))
			// Note: the handler constructs its own service internally;
			// this instance is used only to start network monitoring below.

			// Start network monitoring with RRD
			if err := systemService.StartNetworkMonitoring(context.Background()); err != nil {
				log.Warn("Failed to start network monitoring", "error", err)
			} else {
				log.Info("Network monitoring started with RRD")
			}

			systemGroup := protected.Group("/system")
			systemGroup.Use(requirePermission("system", "read"))
			{
				systemGroup.GET("/services", systemHandler.ListServices)
				systemGroup.GET("/services/:name", systemHandler.GetServiceStatus)
				systemGroup.POST("/services/:name/restart", systemHandler.RestartService)
				systemGroup.GET("/services/:name/logs", systemHandler.GetServiceLogs)
				systemGroup.GET("/logs", systemHandler.GetSystemLogs)
				systemGroup.GET("/network/throughput", systemHandler.GetNetworkThroughput)
				systemGroup.POST("/support-bundle", systemHandler.GenerateSupportBundle)
				systemGroup.GET("/interfaces", systemHandler.ListNetworkInterfaces)
				systemGroup.PUT("/interfaces/:name", systemHandler.UpdateNetworkInterface)
				systemGroup.GET("/ntp", systemHandler.GetNTPSettings)
				systemGroup.POST("/ntp", systemHandler.SaveNTPSettings)
				systemGroup.POST("/execute", requirePermission("system", "write"), systemHandler.ExecuteCommand)
			}

			// IAM routes - GetUser can be accessed by user viewing own profile or admin
			iamHandler := iam.NewHandler(db, cfg, log)
			protected.GET("/iam/users/:id", iamHandler.GetUser)

			// IAM admin routes
			iamGroup := protected.Group("/iam")
			iamGroup.Use(requireRole("admin"))
			{
				iamGroup.GET("/users", iamHandler.ListUsers)
				iamGroup.POST("/users", iamHandler.CreateUser)
				iamGroup.PUT("/users/:id", iamHandler.UpdateUser)
				iamGroup.DELETE("/users/:id", iamHandler.DeleteUser)
				// Roles routes
				iamGroup.GET("/roles", iamHandler.ListRoles)
				iamGroup.GET("/roles/:id", iamHandler.GetRole)
				iamGroup.POST("/roles", iamHandler.CreateRole)
				iamGroup.PUT("/roles/:id", iamHandler.UpdateRole)
				iamGroup.DELETE("/roles/:id", iamHandler.DeleteRole)
				iamGroup.GET("/roles/:id/permissions", iamHandler.GetRolePermissions)
				iamGroup.POST("/roles/:id/permissions", iamHandler.AssignPermissionToRole)
				iamGroup.DELETE("/roles/:id/permissions", iamHandler.RemovePermissionFromRole)

				// Permissions routes
				iamGroup.GET("/permissions", iamHandler.ListPermissions)

				// User role/group assignment
				iamGroup.POST("/users/:id/roles", iamHandler.AssignRoleToUser)
				iamGroup.DELETE("/users/:id/roles", iamHandler.RemoveRoleFromUser)
				iamGroup.POST("/users/:id/groups", iamHandler.AssignGroupToUser)
				iamGroup.DELETE("/users/:id/groups", iamHandler.RemoveGroupFromUser)

				// Groups routes
				iamGroup.GET("/groups", iamHandler.ListGroups)
				iamGroup.GET("/groups/:id", iamHandler.GetGroup)
				iamGroup.POST("/groups", iamHandler.CreateGroup)
				iamGroup.PUT("/groups/:id", iamHandler.UpdateGroup)
				iamGroup.DELETE("/groups/:id", iamHandler.DeleteGroup)
				iamGroup.POST("/groups/:id/users", iamHandler.AddUserToGroup)
				iamGroup.DELETE("/groups/:id/users/:user_id", iamHandler.RemoveUserFromGroup)
			}

			// Backup Jobs
			backupService := backup.NewService(db, log)
			// Set up a direct connection to the Bacula database, trying common database names
			baculaDBName := "bacula" // Default
			if err := backupService.SetBaculaDatabase(cfg.Database, baculaDBName); err != nil {
				log.Warn("Failed to connect to Bacula database, trying 'bareos'", "error", err)
				// Try 'bareos' as an alternative
				if err := backupService.SetBaculaDatabase(cfg.Database, "bareos"); err != nil {
					log.Error("Failed to connect to Bacula database", "error", err, "tried", []string{"bacula", "bareos"})
					// Continue anyway - will fall back to bconsole
				}
			}
			backupHandler := backup.NewHandler(backupService, log)
			backupGroup := protected.Group("/backup")
			backupGroup.Use(requirePermission("backup", "read"))
			{
				backupGroup.GET("/dashboard/stats", backupHandler.GetDashboardStats)
				backupGroup.GET("/jobs", backupHandler.ListJobs)
				backupGroup.GET("/jobs/:id", backupHandler.GetJob)
				backupGroup.POST("/jobs", requirePermission("backup", "write"), backupHandler.CreateJob)
				backupGroup.GET("/clients", backupHandler.ListClients)
				backupGroup.GET("/storage/pools", backupHandler.ListStoragePools)
				backupGroup.POST("/storage/pools", requirePermission("backup", "write"), backupHandler.CreateStoragePool)
				backupGroup.DELETE("/storage/pools/:id", requirePermission("backup", "write"), backupHandler.DeleteStoragePool)
				backupGroup.GET("/storage/volumes", backupHandler.ListStorageVolumes)
				backupGroup.POST("/storage/volumes", requirePermission("backup", "write"), backupHandler.CreateStorageVolume)
				backupGroup.PUT("/storage/volumes/:id", requirePermission("backup", "write"), backupHandler.UpdateStorageVolume)
				backupGroup.DELETE("/storage/volumes/:id", requirePermission("backup", "write"), backupHandler.DeleteStorageVolume)
				backupGroup.GET("/media", backupHandler.ListMedia)
				backupGroup.GET("/storage/daemons", backupHandler.ListStorageDaemons)
				backupGroup.POST("/console/execute", requirePermission("backup", "write"), backupHandler.ExecuteBconsoleCommand)
			}

			// Monitoring
			monitoringHandler := monitoring.NewHandler(db, log, alertService, metricsService, eventHub)
			monitoringGroup := protected.Group("/monitoring")
			monitoringGroup.Use(requirePermission("monitoring", "read"))
			{
				// Alerts
				monitoringGroup.GET("/alerts", monitoringHandler.ListAlerts)
				monitoringGroup.GET("/alerts/:id", monitoringHandler.GetAlert)
				monitoringGroup.POST("/alerts/:id/acknowledge", monitoringHandler.AcknowledgeAlert)
				monitoringGroup.POST("/alerts/:id/resolve", monitoringHandler.ResolveAlert)

				// Metrics
				monitoringGroup.GET("/metrics", monitoringHandler.GetMetrics)

				// WebSocket (no permission check needed, handled by auth middleware)
				monitoringGroup.GET("/events", monitoringHandler.WebSocketHandler)
			}
		}
	}

	return r
}

// ginLogger creates a Gin middleware for logging
func ginLogger(log *logger.Logger) gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()
		c.Next()

		log.Info("HTTP request",
			"method", c.Request.Method,
			"path", c.Request.URL.Path,
			"status", c.Writer.Status(),
			"client_ip", c.ClientIP(),
			// Bug fix: previously logged c.Writer.Size() (response bytes) as latency
			"latency_ms", time.Since(start).Milliseconds(),
		)
	}
}
```
backend/internal/common/router/security.go (Normal file, 102 lines)
@@ -0,0 +1,102 @@
```go
package router

import (
	"github.com/atlasos/calypso/internal/common/config"
	"github.com/gin-gonic/gin"
)

// securityHeadersMiddleware adds security headers to responses
func securityHeadersMiddleware(cfg *config.Config) gin.HandlerFunc {
	if !cfg.Security.SecurityHeaders.Enabled {
		return func(c *gin.Context) {
			c.Next()
		}
	}

	return func(c *gin.Context) {
		// Prevent clickjacking
		c.Header("X-Frame-Options", "DENY")

		// Prevent MIME type sniffing
		c.Header("X-Content-Type-Options", "nosniff")

		// Legacy XSS filter header (modern browsers ignore it)
		c.Header("X-XSS-Protection", "1; mode=block")

		// Strict Transport Security (HSTS) - only if using HTTPS
		// c.Header("Strict-Transport-Security", "max-age=31536000; includeSubDomains")

		// Content Security Policy (basic)
		c.Header("Content-Security-Policy", "default-src 'self'")

		// Referrer Policy
		c.Header("Referrer-Policy", "strict-origin-when-cross-origin")

		// Permissions Policy
		c.Header("Permissions-Policy", "geolocation=(), microphone=(), camera=()")

		c.Next()
	}
}

// corsMiddleware creates configurable CORS middleware
func corsMiddleware(cfg *config.Config) gin.HandlerFunc {
	return func(c *gin.Context) {
		origin := c.Request.Header.Get("Origin")

		// Check if origin is allowed
		allowed := false
		for _, allowedOrigin := range cfg.Security.CORS.AllowedOrigins {
			if allowedOrigin == "*" || allowedOrigin == origin {
				allowed = true
				break
			}
		}

		if allowed {
			c.Writer.Header().Set("Access-Control-Allow-Origin", origin)
		}

		if cfg.Security.CORS.AllowCredentials {
			c.Writer.Header().Set("Access-Control-Allow-Credentials", "true")
		}

		// Set allowed methods
		methods := cfg.Security.CORS.AllowedMethods
		if len(methods) == 0 {
			methods = []string{"GET", "POST", "PUT", "DELETE", "PATCH", "OPTIONS"}
		}
		c.Writer.Header().Set("Access-Control-Allow-Methods", joinStrings(methods, ", "))

		// Set allowed headers
		headers := cfg.Security.CORS.AllowedHeaders
		if len(headers) == 0 {
			headers = []string{"Content-Type", "Authorization", "Accept", "Origin"}
		}
		c.Writer.Header().Set("Access-Control-Allow-Headers", joinStrings(headers, ", "))

		// Handle preflight requests
		if c.Request.Method == "OPTIONS" {
			c.AbortWithStatus(204)
			return
		}

		c.Next()
	}
}

// joinStrings joins a slice of strings with a separator
// (equivalent to strings.Join; kept here to avoid an extra import)
func joinStrings(strs []string, sep string) string {
	if len(strs) == 0 {
		return ""
	}
	if len(strs) == 1 {
		return strs[0]
	}
	result := strs[0]
	for _, s := range strs[1:] {
		result += sep + s
	}
	return result
}
```
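The origin check in `corsMiddleware` is a membership test with a `*` wildcard. Isolated as a pure function for illustration (the helper name and sample origins are not from the file):

```go
package main

import "fmt"

// originAllowed mirrors the corsMiddleware loop: an origin is allowed if
// the configured list contains "*" or the origin itself.
func originAllowed(allowedOrigins []string, origin string) bool {
	for _, a := range allowedOrigins {
		if a == "*" || a == origin {
			return true
		}
	}
	return false
}

func main() {
	cfg := []string{"https://calypso.local"}
	fmt.Println(originAllowed(cfg, "https://calypso.local"))              // true
	fmt.Println(originAllowed(cfg, "https://evil.example"))               // false
	fmt.Println(originAllowed([]string{"*"}, "https://anything.example")) // true
}
```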
backend/internal/iam/group.go (Normal file, 221 lines)
@@ -0,0 +1,221 @@
```go
package iam

import (
	"time"

	"github.com/atlasos/calypso/internal/common/database"
)

// Group represents a user group
type Group struct {
	ID          string
	Name        string
	Description string
	IsSystem    bool
	CreatedAt   time.Time
	UpdatedAt   time.Time
	UserCount   int
	RoleCount   int
}

// GetGroupByID retrieves a group by ID
func GetGroupByID(db *database.DB, groupID string) (*Group, error) {
	query := `
		SELECT id, name, description, is_system, created_at, updated_at
		FROM groups
		WHERE id = $1
	`

	var group Group
	err := db.QueryRow(query, groupID).Scan(
		&group.ID, &group.Name, &group.Description, &group.IsSystem,
		&group.CreatedAt, &group.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	// Get user count
	var userCount int
	db.QueryRow("SELECT COUNT(*) FROM user_groups WHERE group_id = $1", groupID).Scan(&userCount)
	group.UserCount = userCount

	// Get role count
	var roleCount int
	db.QueryRow("SELECT COUNT(*) FROM group_roles WHERE group_id = $1", groupID).Scan(&roleCount)
	group.RoleCount = roleCount

	return &group, nil
}

// GetGroupByName retrieves a group by name
func GetGroupByName(db *database.DB, name string) (*Group, error) {
	query := `
		SELECT id, name, description, is_system, created_at, updated_at
		FROM groups
		WHERE name = $1
	`

	var group Group
	err := db.QueryRow(query, name).Scan(
		&group.ID, &group.Name, &group.Description, &group.IsSystem,
		&group.CreatedAt, &group.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	return &group, nil
}

// GetUserGroups retrieves all groups for a user
func GetUserGroups(db *database.DB, userID string) ([]string, error) {
	query := `
		SELECT g.name
		FROM groups g
		INNER JOIN user_groups ug ON g.id = ug.group_id
		WHERE ug.user_id = $1
		ORDER BY g.name
	`

	rows, err := db.Query(query, userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var groups []string
	for rows.Next() {
		var groupName string
		if err := rows.Scan(&groupName); err != nil {
			return []string{}, err
		}
		groups = append(groups, groupName)
	}

	if groups == nil {
		groups = []string{}
	}
	return groups, rows.Err()
}

// GetGroupUsers retrieves all users in a group
func GetGroupUsers(db *database.DB, groupID string) ([]string, error) {
	query := `
		SELECT u.id
		FROM users u
		INNER JOIN user_groups ug ON u.id = ug.user_id
		WHERE ug.group_id = $1
		ORDER BY u.username
	`

	rows, err := db.Query(query, groupID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var userIDs []string
	for rows.Next() {
		var userID string
		if err := rows.Scan(&userID); err != nil {
			return nil, err
		}
		userIDs = append(userIDs, userID)
	}

	return userIDs, rows.Err()
}

// GetGroupRoles retrieves all roles for a group
func GetGroupRoles(db *database.DB, groupID string) ([]string, error) {
	query := `
		SELECT r.name
		FROM roles r
		INNER JOIN group_roles gr ON r.id = gr.role_id
		WHERE gr.group_id = $1
		ORDER BY r.name
	`

	rows, err := db.Query(query, groupID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var roles []string
	for rows.Next() {
		var role string
		if err := rows.Scan(&role); err != nil {
			return nil, err
		}
		roles = append(roles, role)
	}

	return roles, rows.Err()
}

// AddUserToGroup adds a user to a group
func AddUserToGroup(db *database.DB, userID, groupID, assignedBy string) error {
	query := `
		INSERT INTO user_groups (user_id, group_id, assigned_by)
		VALUES ($1, $2, $3)
		ON CONFLICT (user_id, group_id) DO NOTHING
	`
	_, err := db.Exec(query, userID, groupID, assignedBy)
	return err
}

// RemoveUserFromGroup removes a user from a group
func RemoveUserFromGroup(db *database.DB, userID, groupID string) error {
	query := `DELETE FROM user_groups WHERE user_id = $1 AND group_id = $2`
	_, err := db.Exec(query, userID, groupID)
	return err
}

// AddRoleToGroup adds a role to a group
func AddRoleToGroup(db *database.DB, groupID, roleID string) error {
	query := `
		INSERT INTO group_roles (group_id, role_id)
		VALUES ($1, $2)
		ON CONFLICT (group_id, role_id) DO NOTHING
	`
	_, err := db.Exec(query, groupID, roleID)
	return err
}

// RemoveRoleFromGroup removes a role from a group
func RemoveRoleFromGroup(db *database.DB, groupID, roleID string) error {
	query := `DELETE FROM group_roles WHERE group_id = $1 AND role_id = $2`
	_, err := db.Exec(query, groupID, roleID)
	return err
}

// GetUserRolesFromGroups retrieves all roles for a user via groups
func GetUserRolesFromGroups(db *database.DB, userID string) ([]string, error) {
	query := `
		SELECT DISTINCT r.name
		FROM roles r
		INNER JOIN group_roles gr ON r.id = gr.role_id
		INNER JOIN user_groups ug ON gr.group_id = ug.group_id
		WHERE ug.user_id = $1
		ORDER BY r.name
	`

	rows, err := db.Query(query, userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var roles []string
	for rows.Next() {
		var role string
		if err := rows.Scan(&role); err != nil {
			return nil, err
		}
		roles = append(roles, role)
	}

	return roles, rows.Err()
}
```
backend/internal/iam/handler.go (new file, 1245 lines): file diff suppressed because it is too large.
backend/internal/iam/role.go (new file, 237 lines):

```go
package iam

import (
	"time"

	"github.com/atlasos/calypso/internal/common/database"
)

// Role represents a system role
type Role struct {
	ID          string
	Name        string
	Description string
	IsSystem    bool
	CreatedAt   time.Time
	UpdatedAt   time.Time
}

// GetRoleByID retrieves a role by ID
func GetRoleByID(db *database.DB, roleID string) (*Role, error) {
	query := `
		SELECT id, name, description, is_system, created_at, updated_at
		FROM roles
		WHERE id = $1
	`

	var role Role
	err := db.QueryRow(query, roleID).Scan(
		&role.ID, &role.Name, &role.Description, &role.IsSystem,
		&role.CreatedAt, &role.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	return &role, nil
}

// GetRoleByName retrieves a role by name
func GetRoleByName(db *database.DB, name string) (*Role, error) {
	query := `
		SELECT id, name, description, is_system, created_at, updated_at
		FROM roles
		WHERE name = $1
	`

	var role Role
	err := db.QueryRow(query, name).Scan(
		&role.ID, &role.Name, &role.Description, &role.IsSystem,
		&role.CreatedAt, &role.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	return &role, nil
}

// ListRoles retrieves all roles
func ListRoles(db *database.DB) ([]*Role, error) {
	query := `
		SELECT id, name, description, is_system, created_at, updated_at
		FROM roles
		ORDER BY name
	`

	rows, err := db.Query(query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var roles []*Role
	for rows.Next() {
		var role Role
		if err := rows.Scan(
			&role.ID, &role.Name, &role.Description, &role.IsSystem,
			&role.CreatedAt, &role.UpdatedAt,
		); err != nil {
			return nil, err
		}
		roles = append(roles, &role)
	}

	return roles, rows.Err()
}

// CreateRole creates a new role
func CreateRole(db *database.DB, name, description string) (*Role, error) {
	query := `
		INSERT INTO roles (name, description)
		VALUES ($1, $2)
		RETURNING id, name, description, is_system, created_at, updated_at
	`

	var role Role
	err := db.QueryRow(query, name, description).Scan(
		&role.ID, &role.Name, &role.Description, &role.IsSystem,
		&role.CreatedAt, &role.UpdatedAt,
	)
	if err != nil {
		return nil, err
	}

	return &role, nil
}

// UpdateRole updates an existing role
func UpdateRole(db *database.DB, roleID, name, description string) error {
	query := `
		UPDATE roles
		SET name = $1, description = $2, updated_at = NOW()
		WHERE id = $3
	`
	_, err := db.Exec(query, name, description, roleID)
	return err
}

// DeleteRole deletes a role
func DeleteRole(db *database.DB, roleID string) error {
	query := `DELETE FROM roles WHERE id = $1`
	_, err := db.Exec(query, roleID)
	return err
}

// GetRoleUsers retrieves all users with a specific role
func GetRoleUsers(db *database.DB, roleID string) ([]string, error) {
	query := `
		SELECT u.id
		FROM users u
		INNER JOIN user_roles ur ON u.id = ur.user_id
		WHERE ur.role_id = $1
		ORDER BY u.username
	`

	rows, err := db.Query(query, roleID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var userIDs []string
	for rows.Next() {
		var userID string
		if err := rows.Scan(&userID); err != nil {
			return nil, err
		}
		userIDs = append(userIDs, userID)
	}

	return userIDs, rows.Err()
}

// GetRolePermissions retrieves all permissions for a role
func GetRolePermissions(db *database.DB, roleID string) ([]string, error) {
	query := `
		SELECT p.name
		FROM permissions p
		INNER JOIN role_permissions rp ON p.id = rp.permission_id
		WHERE rp.role_id = $1
		ORDER BY p.name
	`

	rows, err := db.Query(query, roleID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var permissions []string
	for rows.Next() {
		var perm string
		if err := rows.Scan(&perm); err != nil {
			return nil, err
		}
		permissions = append(permissions, perm)
	}

	return permissions, rows.Err()
}

// AddPermissionToRole assigns a permission to a role
func AddPermissionToRole(db *database.DB, roleID, permissionID string) error {
	query := `
		INSERT INTO role_permissions (role_id, permission_id)
		VALUES ($1, $2)
		ON CONFLICT (role_id, permission_id) DO NOTHING
	`
	_, err := db.Exec(query, roleID, permissionID)
	return err
}

// RemovePermissionFromRole removes a permission from a role
func RemovePermissionFromRole(db *database.DB, roleID, permissionID string) error {
	query := `DELETE FROM role_permissions WHERE role_id = $1 AND permission_id = $2`
	_, err := db.Exec(query, roleID, permissionID)
	return err
}

// GetPermissionIDByName retrieves a permission ID by name
func GetPermissionIDByName(db *database.DB, permissionName string) (string, error) {
	var permissionID string
	err := db.QueryRow("SELECT id FROM permissions WHERE name = $1", permissionName).Scan(&permissionID)
	return permissionID, err
}

// ListPermissions retrieves all permissions
func ListPermissions(db *database.DB) ([]map[string]interface{}, error) {
	query := `
		SELECT id, name, resource, action, description
		FROM permissions
		ORDER BY resource, action
	`

	rows, err := db.Query(query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var permissions []map[string]interface{}
	for rows.Next() {
		var id, name, resource, action, description string
		if err := rows.Scan(&id, &name, &resource, &action, &description); err != nil {
			return nil, err
		}
		permissions = append(permissions, map[string]interface{}{
			"id":          id,
			"name":        name,
			"resource":    resource,
			"action":      action,
			"description": description,
		})
	}

	return permissions, rows.Err()
}
```
backend/internal/iam/user.go (new file, 174 lines):

```go
package iam

import (
	"database/sql"
	"fmt"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
)

// User represents a system user
type User struct {
	ID           string
	Username     string
	Email        string
	PasswordHash string
	FullName     string
	IsActive     bool
	IsSystem     bool
	CreatedAt    time.Time
	UpdatedAt    time.Time
	LastLoginAt  sql.NullTime
	Roles        []string
	Permissions  []string
}

// GetUserByID retrieves a user by ID
func GetUserByID(db *database.DB, userID string) (*User, error) {
	query := `
		SELECT id, username, email, password_hash, full_name, is_active, is_system,
		       created_at, updated_at, last_login_at
		FROM users
		WHERE id = $1
	`

	var user User
	var lastLogin sql.NullTime
	err := db.QueryRow(query, userID).Scan(
		&user.ID, &user.Username, &user.Email, &user.PasswordHash,
		&user.FullName, &user.IsActive, &user.IsSystem,
		&user.CreatedAt, &user.UpdatedAt, &lastLogin,
	)
	if err != nil {
		return nil, err
	}

	user.LastLoginAt = lastLogin
	return &user, nil
}

// GetUserByUsername retrieves a user by username
func GetUserByUsername(db *database.DB, username string) (*User, error) {
	query := `
		SELECT id, username, email, password_hash, full_name, is_active, is_system,
		       created_at, updated_at, last_login_at
		FROM users
		WHERE username = $1
	`

	var user User
	var lastLogin sql.NullTime
	err := db.QueryRow(query, username).Scan(
		&user.ID, &user.Username, &user.Email, &user.PasswordHash,
		&user.FullName, &user.IsActive, &user.IsSystem,
		&user.CreatedAt, &user.UpdatedAt, &lastLogin,
	)
	if err != nil {
		return nil, err
	}

	user.LastLoginAt = lastLogin
	return &user, nil
}

// GetUserRoles retrieves all roles for a user
func GetUserRoles(db *database.DB, userID string) ([]string, error) {
	query := `
		SELECT r.name
		FROM roles r
		INNER JOIN user_roles ur ON r.id = ur.role_id
		WHERE ur.user_id = $1
	`

	rows, err := db.Query(query, userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var roles []string
	for rows.Next() {
		var role string
		if err := rows.Scan(&role); err != nil {
			return []string{}, err
		}
		roles = append(roles, role)
	}

	if roles == nil {
		roles = []string{}
	}
	return roles, rows.Err()
}

// GetUserPermissions retrieves all permissions for a user (via roles)
func GetUserPermissions(db *database.DB, userID string) ([]string, error) {
	query := `
		SELECT DISTINCT p.name
		FROM permissions p
		INNER JOIN role_permissions rp ON p.id = rp.permission_id
		INNER JOIN user_roles ur ON rp.role_id = ur.role_id
		WHERE ur.user_id = $1
	`

	rows, err := db.Query(query, userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var permissions []string
	for rows.Next() {
		var perm string
		if err := rows.Scan(&perm); err != nil {
			return []string{}, err
		}
		permissions = append(permissions, perm)
	}

	if permissions == nil {
		permissions = []string{}
	}
	return permissions, rows.Err()
}

// AddUserRole assigns a role to a user
func AddUserRole(db *database.DB, userID, roleID, assignedBy string) error {
	query := `
		INSERT INTO user_roles (user_id, role_id, assigned_by)
		VALUES ($1, $2, $3)
		ON CONFLICT (user_id, role_id) DO NOTHING
	`
	result, err := db.Exec(query, userID, roleID, assignedBy)
	if err != nil {
		return fmt.Errorf("failed to insert user role: %w", err)
	}

	// Check if a row was actually inserted (not just skipped due to conflict)
	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rowsAffected == 0 {
		// Row already exists; ON CONFLICT DO NOTHING means this is expected
		return nil
	}

	return nil
}

// RemoveUserRole removes a role from a user
func RemoveUserRole(db *database.DB, userID, roleID string) error {
	query := `DELETE FROM user_roles WHERE user_id = $1 AND role_id = $2`
	_, err := db.Exec(query, userID, roleID)
	return err
}

// GetRoleIDByName retrieves a role ID by name
func GetRoleIDByName(db *database.DB, roleName string) (string, error) {
	var roleID string
	err := db.QueryRow("SELECT id FROM roles WHERE name = $1", roleName).Scan(&roleID)
	return roleID, err
}
```
backend/internal/monitoring/alert.go (new file, 383 lines):

```go
package monitoring

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/google/uuid"
)

// AlertSeverity represents the severity level of an alert
type AlertSeverity string

const (
	AlertSeverityInfo     AlertSeverity = "info"
	AlertSeverityWarning  AlertSeverity = "warning"
	AlertSeverityCritical AlertSeverity = "critical"
)

// AlertSource represents where the alert originated
type AlertSource string

const (
	AlertSourceSystem  AlertSource = "system"
	AlertSourceStorage AlertSource = "storage"
	AlertSourceSCST    AlertSource = "scst"
	AlertSourceTape    AlertSource = "tape"
	AlertSourceVTL     AlertSource = "vtl"
	AlertSourceTask    AlertSource = "task"
	AlertSourceAPI     AlertSource = "api"
)

// Alert represents a system alert
type Alert struct {
	ID             string                 `json:"id"`
	Severity       AlertSeverity          `json:"severity"`
	Source         AlertSource            `json:"source"`
	Title          string                 `json:"title"`
	Message        string                 `json:"message"`
	ResourceType   string                 `json:"resource_type,omitempty"`
	ResourceID     string                 `json:"resource_id,omitempty"`
	IsAcknowledged bool                   `json:"is_acknowledged"`
	AcknowledgedBy string                 `json:"acknowledged_by,omitempty"`
	AcknowledgedAt *time.Time             `json:"acknowledged_at,omitempty"`
	ResolvedAt     *time.Time             `json:"resolved_at,omitempty"`
	CreatedAt      time.Time              `json:"created_at"`
	Metadata       map[string]interface{} `json:"metadata,omitempty"`
}

// AlertService manages alerts
type AlertService struct {
	db       *database.DB
	logger   *logger.Logger
	eventHub *EventHub
}

// NewAlertService creates a new alert service
func NewAlertService(db *database.DB, log *logger.Logger) *AlertService {
	return &AlertService{
		db:     db,
		logger: log,
	}
}

// SetEventHub sets the event hub for broadcasting alerts
func (s *AlertService) SetEventHub(eventHub *EventHub) {
	s.eventHub = eventHub
}

// CreateAlert creates a new alert
func (s *AlertService) CreateAlert(ctx context.Context, alert *Alert) error {
	alert.ID = uuid.New().String()
	alert.CreatedAt = time.Now()

	var metadataJSON *string
	if alert.Metadata != nil {
		bytes, err := json.Marshal(alert.Metadata)
		if err != nil {
			return fmt.Errorf("failed to marshal metadata: %w", err)
		}
		jsonStr := string(bytes)
		metadataJSON = &jsonStr
	}

	query := `
		INSERT INTO alerts (id, severity, source, title, message, resource_type, resource_id, metadata)
		VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
	`

	_, err := s.db.ExecContext(ctx, query,
		alert.ID,
		string(alert.Severity),
		string(alert.Source),
		alert.Title,
		alert.Message,
		alert.ResourceType,
		alert.ResourceID,
		metadataJSON,
	)
	if err != nil {
		return fmt.Errorf("failed to create alert: %w", err)
	}

	s.logger.Info("Alert created",
		"alert_id", alert.ID,
		"severity", alert.Severity,
		"source", alert.Source,
		"title", alert.Title,
	)

	// Broadcast alert via WebSocket if event hub is set
	if s.eventHub != nil {
		s.eventHub.BroadcastAlert(alert)
	}

	return nil
}

// ListAlerts retrieves alerts with optional filters
func (s *AlertService) ListAlerts(ctx context.Context, filters *AlertFilters) ([]*Alert, error) {
	query := `
		SELECT id, severity, source, title, message, resource_type, resource_id,
		       is_acknowledged, acknowledged_by, acknowledged_at, resolved_at,
		       created_at, metadata
		FROM alerts
		WHERE 1=1
	`
	args := []interface{}{}
	argIndex := 1

	if filters != nil {
		if filters.Severity != "" {
			query += fmt.Sprintf(" AND severity = $%d", argIndex)
			args = append(args, string(filters.Severity))
			argIndex++
		}
		if filters.Source != "" {
			query += fmt.Sprintf(" AND source = $%d", argIndex)
			args = append(args, string(filters.Source))
			argIndex++
		}
		if filters.IsAcknowledged != nil {
			query += fmt.Sprintf(" AND is_acknowledged = $%d", argIndex)
			args = append(args, *filters.IsAcknowledged)
			argIndex++
		}
		if filters.ResourceType != "" {
			query += fmt.Sprintf(" AND resource_type = $%d", argIndex)
			args = append(args, filters.ResourceType)
			argIndex++
		}
		if filters.ResourceID != "" {
			query += fmt.Sprintf(" AND resource_id = $%d", argIndex)
			args = append(args, filters.ResourceID)
			argIndex++
		}
	}

	query += " ORDER BY created_at DESC"

	if filters != nil && filters.Limit > 0 {
		query += fmt.Sprintf(" LIMIT $%d", argIndex)
		args = append(args, filters.Limit)
	}

	rows, err := s.db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("failed to query alerts: %w", err)
	}
	defer rows.Close()

	var alerts []*Alert
	for rows.Next() {
		alert, err := s.scanAlert(rows)
		if err != nil {
			return nil, fmt.Errorf("failed to scan alert: %w", err)
		}
		alerts = append(alerts, alert)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating alerts: %w", err)
	}

	return alerts, nil
}

// GetAlert retrieves a single alert by ID
func (s *AlertService) GetAlert(ctx context.Context, alertID string) (*Alert, error) {
	query := `
		SELECT id, severity, source, title, message, resource_type, resource_id,
		       is_acknowledged, acknowledged_by, acknowledged_at, resolved_at,
		       created_at, metadata
		FROM alerts
		WHERE id = $1
	`

	row := s.db.QueryRowContext(ctx, query, alertID)
	alert, err := s.scanAlertRow(row)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("alert not found")
		}
		return nil, fmt.Errorf("failed to get alert: %w", err)
	}

	return alert, nil
}

// AcknowledgeAlert marks an alert as acknowledged
func (s *AlertService) AcknowledgeAlert(ctx context.Context, alertID string, userID string) error {
	query := `
		UPDATE alerts
		SET is_acknowledged = true, acknowledged_by = $1, acknowledged_at = NOW()
		WHERE id = $2 AND is_acknowledged = false
	`

	result, err := s.db.ExecContext(ctx, query, userID, alertID)
	if err != nil {
		return fmt.Errorf("failed to acknowledge alert: %w", err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rows == 0 {
		return fmt.Errorf("alert not found or already acknowledged")
	}

	s.logger.Info("Alert acknowledged", "alert_id", alertID, "user_id", userID)
	return nil
}

// ResolveAlert marks an alert as resolved
func (s *AlertService) ResolveAlert(ctx context.Context, alertID string) error {
	query := `
		UPDATE alerts
		SET resolved_at = NOW()
		WHERE id = $1 AND resolved_at IS NULL
	`

	result, err := s.db.ExecContext(ctx, query, alertID)
	if err != nil {
		return fmt.Errorf("failed to resolve alert: %w", err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rows == 0 {
		return fmt.Errorf("alert not found or already resolved")
	}

	s.logger.Info("Alert resolved", "alert_id", alertID)
	return nil
}

// DeleteAlert deletes an alert (soft delete by resolving it)
func (s *AlertService) DeleteAlert(ctx context.Context, alertID string) error {
	// For safety, we'll just resolve it instead of hard delete
	return s.ResolveAlert(ctx, alertID)
}

// AlertFilters represents filters for listing alerts
type AlertFilters struct {
	Severity       AlertSeverity
	Source         AlertSource
	IsAcknowledged *bool
	ResourceType   string
	ResourceID     string
	Limit          int
}

// scanAlert scans a row into an Alert struct
func (s *AlertService) scanAlert(rows *sql.Rows) (*Alert, error) {
	var alert Alert
	var severity, source string
	var resourceType, resourceID, acknowledgedBy sql.NullString
	var acknowledgedAt, resolvedAt sql.NullTime
	var metadata sql.NullString

	err := rows.Scan(
		&alert.ID,
		&severity,
		&source,
		&alert.Title,
		&alert.Message,
		&resourceType,
		&resourceID,
		&alert.IsAcknowledged,
		&acknowledgedBy,
		&acknowledgedAt,
		&resolvedAt,
		&alert.CreatedAt,
		&metadata,
	)
	if err != nil {
		return nil, err
	}

	alert.Severity = AlertSeverity(severity)
	alert.Source = AlertSource(source)
	if resourceType.Valid {
		alert.ResourceType = resourceType.String
	}
	if resourceID.Valid {
		alert.ResourceID = resourceID.String
	}
	if acknowledgedBy.Valid {
		alert.AcknowledgedBy = acknowledgedBy.String
	}
	if acknowledgedAt.Valid {
		alert.AcknowledgedAt = &acknowledgedAt.Time
	}
	if resolvedAt.Valid {
		alert.ResolvedAt = &resolvedAt.Time
	}
	if metadata.Valid && metadata.String != "" {
		json.Unmarshal([]byte(metadata.String), &alert.Metadata)
	}

	return &alert, nil
}

// scanAlertRow scans a single row into an Alert struct
func (s *AlertService) scanAlertRow(row *sql.Row) (*Alert, error) {
	var alert Alert
	var severity, source string
	var resourceType, resourceID, acknowledgedBy sql.NullString
	var acknowledgedAt, resolvedAt sql.NullTime
	var metadata sql.NullString

	err := row.Scan(
		&alert.ID,
		&severity,
		&source,
		&alert.Title,
		&alert.Message,
		&resourceType,
		&resourceID,
		&alert.IsAcknowledged,
		&acknowledgedBy,
		&acknowledgedAt,
		&resolvedAt,
		&alert.CreatedAt,
		&metadata,
	)
	if err != nil {
		return nil, err
	}

	alert.Severity = AlertSeverity(severity)
	alert.Source = AlertSource(source)
	if resourceType.Valid {
		alert.ResourceType = resourceType.String
	}
	if resourceID.Valid {
		alert.ResourceID = resourceID.String
	}
	if acknowledgedBy.Valid {
		alert.AcknowledgedBy = acknowledgedBy.String
	}
	if acknowledgedAt.Valid {
		alert.AcknowledgedAt = &acknowledgedAt.Time
	}
	if resolvedAt.Valid {
		alert.ResolvedAt = &resolvedAt.Time
	}
	if metadata.Valid && metadata.String != "" {
		json.Unmarshal([]byte(metadata.String), &alert.Metadata)
	}

	return &alert, nil
}
```
backend/internal/monitoring/events.go (new file, 159 lines):

```go
package monitoring

import (
	"encoding/json"
	"sync"
	"time"

	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/gorilla/websocket"
)

// EventType represents the type of event
type EventType string

const (
	EventTypeAlert   EventType = "alert"
	EventTypeTask    EventType = "task"
	EventTypeSystem  EventType = "system"
	EventTypeStorage EventType = "storage"
	EventTypeSCST    EventType = "scst"
	EventTypeTape    EventType = "tape"
	EventTypeVTL     EventType = "vtl"
	EventTypeMetrics EventType = "metrics"
)

// Event represents a system event
type Event struct {
	Type      EventType              `json:"type"`
	Timestamp time.Time              `json:"timestamp"`
	Data      map[string]interface{} `json:"data"`
}

// EventHub manages WebSocket connections and broadcasts events
type EventHub struct {
	clients    map[*websocket.Conn]bool
	broadcast  chan *Event
	register   chan *websocket.Conn
	unregister chan *websocket.Conn
	mu         sync.RWMutex
	logger     *logger.Logger
}

// NewEventHub creates a new event hub
func NewEventHub(log *logger.Logger) *EventHub {
	return &EventHub{
		clients:    make(map[*websocket.Conn]bool),
		broadcast:  make(chan *Event, 256),
		register:   make(chan *websocket.Conn),
		unregister: make(chan *websocket.Conn),
		logger:     log,
	}
}

// Run starts the event hub
func (h *EventHub) Run() {
	for {
		select {
		case conn := <-h.register:
			h.mu.Lock()
			h.clients[conn] = true
			total := len(h.clients)
			h.mu.Unlock()
			h.logger.Info("WebSocket client connected", "total_clients", total)

		case conn := <-h.unregister:
			h.mu.Lock()
			if _, ok := h.clients[conn]; ok {
				delete(h.clients, conn)
				conn.Close()
			}
			total := len(h.clients)
			h.mu.Unlock()
			h.logger.Info("WebSocket client disconnected", "total_clients", total)

		case event := <-h.broadcast:
			// Write to every client under the read lock. Failed connections
			// are collected and removed afterwards so the map is never
			// mutated while it is being iterated.
			var failed []*websocket.Conn
			h.mu.RLock()
			for conn := range h.clients {
				conn.SetWriteDeadline(time.Now().Add(10 * time.Second))
				if err := conn.WriteJSON(event); err != nil {
					h.logger.Error("Failed to send event to client", "error", err)
					failed = append(failed, conn)
				}
			}
			h.mu.RUnlock()

			if len(failed) > 0 {
				h.mu.Lock()
				for _, conn := range failed {
					delete(h.clients, conn)
					conn.Close()
				}
				h.mu.Unlock()
			}
		}
	}
}

// Broadcast broadcasts an event to all connected clients
func (h *EventHub) Broadcast(eventType EventType, data map[string]interface{}) {
	event := &Event{
		Type:      eventType,
		Timestamp: time.Now(),
		Data:      data,
	}

	select {
	case h.broadcast <- event:
	default:
		h.logger.Warn("Event broadcast channel full, dropping event", "type", eventType)
	}
}

// BroadcastAlert broadcasts an alert event
func (h *EventHub) BroadcastAlert(alert *Alert) {
	data := map[string]interface{}{
		"id":              alert.ID,
		"severity":        alert.Severity,
		"source":          alert.Source,
		"title":           alert.Title,
		"message":         alert.Message,
		"resource_type":   alert.ResourceType,
		"resource_id":     alert.ResourceID,
		"is_acknowledged": alert.IsAcknowledged,
		"created_at":      alert.CreatedAt,
	}
	h.Broadcast(EventTypeAlert, data)
}

// BroadcastTaskUpdate broadcasts a task update event
func (h *EventHub) BroadcastTaskUpdate(taskID string, status string, progress int, message string) {
	data := map[string]interface{}{
		"task_id":  taskID,
		"status":   status,
		"progress": progress,
		"message":  message,
	}
	h.Broadcast(EventTypeTask, data)
}

// BroadcastMetrics broadcasts a metrics update
func (h *EventHub) BroadcastMetrics(metrics *Metrics) {
	data := make(map[string]interface{})
	bytes, _ := json.Marshal(metrics)
	json.Unmarshal(bytes, &data)
	h.Broadcast(EventTypeMetrics, data)
}

// GetClientCount returns the number of connected clients
func (h *EventHub) GetClientCount() int {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return len(h.clients)
}
```
backend/internal/monitoring/handler.go (new file, 184 lines, excerpt truncated):

```go
package monitoring

import (
	"net/http"
	"strconv"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/iam"
	"github.com/gin-gonic/gin"
	"github.com/gorilla/websocket"
)

// Handler handles monitoring API requests
type Handler struct {
	alertService   *AlertService
	metricsService *MetricsService
	eventHub       *EventHub
	db             *database.DB
	logger         *logger.Logger
}

// NewHandler creates a new monitoring handler
func NewHandler(db *database.DB, log *logger.Logger, alertService *AlertService, metricsService *MetricsService, eventHub *EventHub) *Handler {
	return &Handler{
		alertService:   alertService,
		metricsService: metricsService,
		eventHub:       eventHub,
		db:             db,
		logger:         log,
	}
}

// ListAlerts lists alerts with optional filters
func (h *Handler) ListAlerts(c *gin.Context) {
	filters := &AlertFilters{}

	// Parse query parameters
	if severity := c.Query("severity"); severity != "" {
		filters.Severity = AlertSeverity(severity)
	}
	if source := c.Query("source"); source != "" {
		filters.Source = AlertSource(source)
	}
	if acknowledged := c.Query("acknowledged"); acknowledged != "" {
		ack, err := strconv.ParseBool(acknowledged)
		if err == nil {
			filters.IsAcknowledged = &ack
		}
	}
	if resourceType := c.Query("resource_type"); resourceType != "" {
		filters.ResourceType = resourceType
	}
	if resourceID := c.Query("resource_id"); resourceID != "" {
		filters.ResourceID = resourceID
	}
	if limitStr := c.Query("limit"); limitStr != "" {
		if limit, err := strconv.Atoi(limitStr); err == nil && limit > 0 {
			filters.Limit = limit
		}
	}

	alerts, err := h.alertService.ListAlerts(c.Request.Context(), filters)
	if err != nil {
		h.logger.Error("Failed to list alerts", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list alerts"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"alerts": alerts})
}

// GetAlert retrieves a single alert
func (h *Handler) GetAlert(c *gin.Context) {
	alertID := c.Param("id")

	alert, err := h.alertService.GetAlert(c.Request.Context(), alertID)
	if err != nil {
		h.logger.Error("Failed to get alert", "alert_id", alertID, "error", err)
		c.JSON(http.StatusNotFound, gin.H{"error": "alert not found"})
		return
	}

	c.JSON(http.StatusOK, alert)
}

// AcknowledgeAlert acknowledges an alert
func (h *Handler) AcknowledgeAlert(c *gin.Context) {
	alertID := c.Param("id")

	// Get current user
	user, exists := c.Get("user")
	if !exists {
		c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
		return
	}

	authUser, ok := user.(*iam.User)
	if !ok {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "invalid user context"})
		return
	}

	if err := h.alertService.AcknowledgeAlert(c.Request.Context(), alertID, authUser.ID); err != nil {
		h.logger.Error("Failed to acknowledge alert", "alert_id", alertID, "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "alert acknowledged"})
}

// ResolveAlert resolves an alert
func (h *Handler) ResolveAlert(c *gin.Context) {
	alertID := c.Param("id")

	if err := h.alertService.ResolveAlert(c.Request.Context(), alertID); err != nil {
		h.logger.Error("Failed to resolve alert", "alert_id", alertID, "error", err)
```
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"message": "alert resolved"})
|
||||
}
|
||||
|
||||
// GetMetrics retrieves current system metrics
|
||||
func (h *Handler) GetMetrics(c *gin.Context) {
|
||||
metrics, err := h.metricsService.CollectMetrics(c.Request.Context())
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to collect metrics", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to collect metrics"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, metrics)
|
||||
}
|
||||
|
||||
```go
// WebSocketHandler handles WebSocket connections for event streaming
func (h *Handler) WebSocketHandler(c *gin.Context) {
	// Upgrade connection to WebSocket
	upgrader := websocket.Upgrader{
		CheckOrigin: func(r *http.Request) bool {
			// Allow all origins for now (should be restricted in production)
			return true
		},
	}

	conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
	if err != nil {
		h.logger.Error("Failed to upgrade WebSocket connection", "error", err)
		return
	}

	// Register client
	h.eventHub.register <- conn

	// Keep connection alive and handle ping/pong
	go func() {
		defer func() {
			h.eventHub.unregister <- conn
		}()

		conn.SetReadDeadline(time.Now().Add(60 * time.Second))
		conn.SetPongHandler(func(string) error {
			conn.SetReadDeadline(time.Now().Add(60 * time.Second))
			return nil
		})

		// Send ping every 30 seconds
		ticker := time.NewTicker(30 * time.Second)
		defer ticker.Stop()

		for {
			select {
			case <-ticker.C:
				if err := conn.WriteMessage(websocket.PingMessage, nil); err != nil {
					return
				}
			}
		}
	}()
}
```
**backend/internal/monitoring/health.go** (new file, 201 lines):

```go
package monitoring

import (
	"context"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// HealthStatus represents the health status of a component
type HealthStatus string

const (
	HealthStatusHealthy   HealthStatus = "healthy"
	HealthStatusDegraded  HealthStatus = "degraded"
	HealthStatusUnhealthy HealthStatus = "unhealthy"
	HealthStatusUnknown   HealthStatus = "unknown"
)

// ComponentHealth represents the health of a system component
type ComponentHealth struct {
	Name      string       `json:"name"`
	Status    HealthStatus `json:"status"`
	Message   string       `json:"message,omitempty"`
	Timestamp time.Time    `json:"timestamp"`
}

// EnhancedHealth represents enhanced health check response
type EnhancedHealth struct {
	Status     string            `json:"status"`
	Service    string            `json:"service"`
	Version    string            `json:"version,omitempty"`
	Uptime     int64             `json:"uptime_seconds"`
	Components []ComponentHealth `json:"components"`
	Timestamp  time.Time         `json:"timestamp"`
}

// HealthService provides enhanced health checking
type HealthService struct {
	db             *database.DB
	logger         *logger.Logger
	startTime      time.Time
	metricsService *MetricsService
}

// NewHealthService creates a new health service
func NewHealthService(db *database.DB, log *logger.Logger, metricsService *MetricsService) *HealthService {
	return &HealthService{
		db:             db,
		logger:         log,
		startTime:      time.Now(),
		metricsService: metricsService,
	}
}

// CheckHealth performs a comprehensive health check
func (s *HealthService) CheckHealth(ctx context.Context) *EnhancedHealth {
	health := &EnhancedHealth{
		Status:     string(HealthStatusHealthy),
		Service:    "calypso-api",
		Uptime:     int64(time.Since(s.startTime).Seconds()),
		Timestamp:  time.Now(),
		Components: []ComponentHealth{},
	}

	// Check database
	dbHealth := s.checkDatabase(ctx)
	health.Components = append(health.Components, dbHealth)

	// Check storage
	storageHealth := s.checkStorage(ctx)
	health.Components = append(health.Components, storageHealth)

	// Check SCST
	scstHealth := s.checkSCST(ctx)
	health.Components = append(health.Components, scstHealth)

	// Determine overall status
	hasUnhealthy := false
	hasDegraded := false
	for _, comp := range health.Components {
		if comp.Status == HealthStatusUnhealthy {
			hasUnhealthy = true
		} else if comp.Status == HealthStatusDegraded {
			hasDegraded = true
		}
	}

	if hasUnhealthy {
		health.Status = string(HealthStatusUnhealthy)
	} else if hasDegraded {
		health.Status = string(HealthStatusDegraded)
	}

	return health
}

// checkDatabase checks database health
func (s *HealthService) checkDatabase(ctx context.Context) ComponentHealth {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	if err := s.db.PingContext(ctx); err != nil {
		return ComponentHealth{
			Name:      "database",
			Status:    HealthStatusUnhealthy,
			Message:   "Database connection failed: " + err.Error(),
			Timestamp: time.Now(),
		}
	}

	// Check if we can query
	var count int
	if err := s.db.QueryRowContext(ctx, "SELECT 1").Scan(&count); err != nil {
		return ComponentHealth{
			Name:      "database",
			Status:    HealthStatusDegraded,
			Message:   "Database query failed: " + err.Error(),
			Timestamp: time.Now(),
		}
	}

	return ComponentHealth{
		Name:      "database",
		Status:    HealthStatusHealthy,
		Timestamp: time.Now(),
	}
}

// checkStorage checks storage component health
func (s *HealthService) checkStorage(ctx context.Context) ComponentHealth {
	// Check if we have any active repositories
	var count int
	if err := s.db.QueryRowContext(ctx, "SELECT COUNT(*) FROM disk_repositories WHERE is_active = true").Scan(&count); err != nil {
		return ComponentHealth{
			Name:      "storage",
			Status:    HealthStatusDegraded,
			Message:   "Failed to query storage repositories",
			Timestamp: time.Now(),
		}
	}

	if count == 0 {
		return ComponentHealth{
			Name:      "storage",
			Status:    HealthStatusDegraded,
			Message:   "No active storage repositories configured",
			Timestamp: time.Now(),
		}
	}

	// Check repository capacity
	var usagePercent float64
	query := `
		SELECT COALESCE(
			SUM(used_bytes)::float / NULLIF(SUM(total_bytes), 0) * 100,
			0
		)
		FROM disk_repositories
		WHERE is_active = true
	`
	if err := s.db.QueryRowContext(ctx, query).Scan(&usagePercent); err == nil {
		if usagePercent > 95 {
			return ComponentHealth{
				Name:      "storage",
				Status:    HealthStatusDegraded,
				Message:   "Storage repositories are nearly full",
				Timestamp: time.Now(),
			}
		}
	}

	return ComponentHealth{
		Name:      "storage",
		Status:    HealthStatusHealthy,
		Timestamp: time.Now(),
	}
}

// checkSCST checks SCST component health
func (s *HealthService) checkSCST(ctx context.Context) ComponentHealth {
	// Check if SCST targets exist
	var count int
	if err := s.db.QueryRowContext(ctx, "SELECT COUNT(*) FROM scst_targets").Scan(&count); err != nil {
		return ComponentHealth{
			Name:      "scst",
			Status:    HealthStatusUnknown,
			Message:   "Failed to query SCST targets",
			Timestamp: time.Now(),
		}
	}

	// SCST is healthy if we can query it (even if no targets exist)
	return ComponentHealth{
		Name:      "scst",
		Status:    HealthStatusHealthy,
		Timestamp: time.Now(),
	}
}
```
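`CheckHealth` aggregates component statuses with a "worst status wins" reduction: a single unhealthy component makes the whole service unhealthy, otherwise a single degraded component makes it degraded. A self-contained sketch of that reduction, with local type and constant names rather than the package's:

```go
package main

import "fmt"

type status string

const (
	healthy   status = "healthy"
	degraded  status = "degraded"
	unhealthy status = "unhealthy"
)

// overall reduces component statuses the same way CheckHealth does:
// any unhealthy component wins, then any degraded one, else healthy.
func overall(components []status) status {
	hasUnhealthy, hasDegraded := false, false
	for _, s := range components {
		switch s {
		case unhealthy:
			hasUnhealthy = true
		case degraded:
			hasDegraded = true
		}
	}
	if hasUnhealthy {
		return unhealthy
	}
	if hasDegraded {
		return degraded
	}
	return healthy
}

func main() {
	fmt.Println(overall([]status{healthy, degraded, healthy}))   // degraded
	fmt.Println(overall([]status{healthy, degraded, unhealthy})) // unhealthy
}
```

One consequence of this reduction worth noting: a component reporting `unknown` (as `checkSCST` can) does not affect the overall status at all.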
**backend/internal/monitoring/metrics.go** (new file, 651 lines):

```go
package monitoring

import (
	"bufio"
	"context"
	"database/sql"
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// Metrics represents system metrics
type Metrics struct {
	System      SystemMetrics  `json:"system"`
	Storage     StorageMetrics `json:"storage"`
	SCST        SCSTMetrics    `json:"scst"`
	Tape        TapeMetrics    `json:"tape"`
	VTL         VTLMetrics     `json:"vtl"`
	Tasks       TaskMetrics    `json:"tasks"`
	API         APIMetrics     `json:"api"`
	CollectedAt time.Time      `json:"collected_at"`
}

// SystemMetrics represents system-level metrics
type SystemMetrics struct {
	CPUUsagePercent float64 `json:"cpu_usage_percent"`
	MemoryUsed      int64   `json:"memory_used_bytes"`
	MemoryTotal     int64   `json:"memory_total_bytes"`
	MemoryPercent   float64 `json:"memory_usage_percent"`
	DiskUsed        int64   `json:"disk_used_bytes"`
	DiskTotal       int64   `json:"disk_total_bytes"`
	DiskPercent     float64 `json:"disk_usage_percent"`
	UptimeSeconds   int64   `json:"uptime_seconds"`
}

// StorageMetrics represents storage metrics
type StorageMetrics struct {
	TotalDisks         int     `json:"total_disks"`
	TotalRepositories  int     `json:"total_repositories"`
	TotalCapacityBytes int64   `json:"total_capacity_bytes"`
	UsedCapacityBytes  int64   `json:"used_capacity_bytes"`
	AvailableBytes     int64   `json:"available_bytes"`
	UsagePercent       float64 `json:"usage_percent"`
}

// SCSTMetrics represents SCST metrics
type SCSTMetrics struct {
	TotalTargets    int `json:"total_targets"`
	TotalLUNs       int `json:"total_luns"`
	TotalInitiators int `json:"total_initiators"`
	ActiveTargets   int `json:"active_targets"`
}

// TapeMetrics represents physical tape metrics
type TapeMetrics struct {
	TotalLibraries int `json:"total_libraries"`
	TotalDrives    int `json:"total_drives"`
	TotalSlots     int `json:"total_slots"`
	OccupiedSlots  int `json:"occupied_slots"`
}

// VTLMetrics represents virtual tape library metrics
type VTLMetrics struct {
	TotalLibraries int `json:"total_libraries"`
	TotalDrives    int `json:"total_drives"`
	TotalTapes     int `json:"total_tapes"`
	ActiveDrives   int `json:"active_drives"`
	LoadedTapes    int `json:"loaded_tapes"`
}

// TaskMetrics represents task execution metrics
type TaskMetrics struct {
	TotalTasks     int     `json:"total_tasks"`
	PendingTasks   int     `json:"pending_tasks"`
	RunningTasks   int     `json:"running_tasks"`
	CompletedTasks int     `json:"completed_tasks"`
	FailedTasks    int     `json:"failed_tasks"`
	AvgDurationSec float64 `json:"avg_duration_seconds"`
}

// APIMetrics represents API metrics
type APIMetrics struct {
	TotalRequests     int64   `json:"total_requests"`
	RequestsPerSec    float64 `json:"requests_per_second"`
	ErrorRate         float64 `json:"error_rate"`
	AvgLatencyMs      float64 `json:"avg_latency_ms"`
	ActiveConnections int     `json:"active_connections"`
}

// MetricsService collects and provides system metrics
type MetricsService struct {
	db          *database.DB
	logger      *logger.Logger
	startTime   time.Time
	lastCPU     *cpuStats // For CPU usage calculation
	lastCPUTime time.Time
}

// cpuStats represents CPU statistics from /proc/stat
type cpuStats struct {
	user    uint64
	nice    uint64
	system  uint64
	idle    uint64
	iowait  uint64
	irq     uint64
	softirq uint64
	steal   uint64
	guest   uint64
}

// NewMetricsService creates a new metrics service
func NewMetricsService(db *database.DB, log *logger.Logger) *MetricsService {
	return &MetricsService{
		db:        db,
		logger:    log,
		startTime: time.Now(),
	}
}

// CollectMetrics collects all system metrics
func (s *MetricsService) CollectMetrics(ctx context.Context) (*Metrics, error) {
	metrics := &Metrics{
		CollectedAt: time.Now(),
	}

	// Collect system metrics
	sysMetrics, err := s.collectSystemMetrics(ctx)
	if err != nil {
		s.logger.Error("Failed to collect system metrics", "error", err)
		// Set default/zero values if collection fails
		metrics.System = SystemMetrics{}
	} else {
		metrics.System = *sysMetrics
	}

	// Collect storage metrics
	storageMetrics, err := s.collectStorageMetrics(ctx)
	if err != nil {
		s.logger.Error("Failed to collect storage metrics", "error", err)
	} else {
		metrics.Storage = *storageMetrics
	}

	// Collect SCST metrics
	scstMetrics, err := s.collectSCSTMetrics(ctx)
	if err != nil {
		s.logger.Error("Failed to collect SCST metrics", "error", err)
	} else {
		metrics.SCST = *scstMetrics
	}

	// Collect tape metrics
	tapeMetrics, err := s.collectTapeMetrics(ctx)
	if err != nil {
		s.logger.Error("Failed to collect tape metrics", "error", err)
	} else {
		metrics.Tape = *tapeMetrics
	}

	// Collect VTL metrics
	vtlMetrics, err := s.collectVTLMetrics(ctx)
	if err != nil {
		s.logger.Error("Failed to collect VTL metrics", "error", err)
	} else {
		metrics.VTL = *vtlMetrics
	}

	// Collect task metrics
	taskMetrics, err := s.collectTaskMetrics(ctx)
	if err != nil {
		s.logger.Error("Failed to collect task metrics", "error", err)
	} else {
		metrics.Tasks = *taskMetrics
	}

	// API metrics are collected separately via middleware
	metrics.API = APIMetrics{} // Placeholder

	return metrics, nil
}

// collectSystemMetrics collects system-level metrics
func (s *MetricsService) collectSystemMetrics(ctx context.Context) (*SystemMetrics, error) {
	// Get system memory from /proc/meminfo
	memoryTotal, memoryUsed, memoryPercent := s.getSystemMemory()

	// Get CPU usage from /proc/stat
	cpuUsage := s.getCPUUsage()

	// Get system uptime from /proc/uptime
	uptime := s.getSystemUptime()

	metrics := &SystemMetrics{
		CPUUsagePercent: cpuUsage,
		MemoryUsed:      memoryUsed,
		MemoryTotal:     memoryTotal,
		MemoryPercent:   memoryPercent,
		DiskUsed:        0, // Would need to read from df
		DiskTotal:       0,
		DiskPercent:     0,
		UptimeSeconds:   int64(uptime),
	}

	return metrics, nil
}

// collectStorageMetrics collects storage metrics
func (s *MetricsService) collectStorageMetrics(ctx context.Context) (*StorageMetrics, error) {
	// Count disks
	diskQuery := `SELECT COUNT(*) FROM physical_disks WHERE is_active = true`
	var totalDisks int
	if err := s.db.QueryRowContext(ctx, diskQuery).Scan(&totalDisks); err != nil {
		return nil, fmt.Errorf("failed to count disks: %w", err)
	}

	// Count repositories and calculate capacity
	repoQuery := `
		SELECT COUNT(*), COALESCE(SUM(total_bytes), 0), COALESCE(SUM(used_bytes), 0)
		FROM disk_repositories
		WHERE is_active = true
	`
	var totalRepos int
	var totalCapacity, usedCapacity int64
	if err := s.db.QueryRowContext(ctx, repoQuery).Scan(&totalRepos, &totalCapacity, &usedCapacity); err != nil {
		return nil, fmt.Errorf("failed to query repositories: %w", err)
	}

	availableBytes := totalCapacity - usedCapacity
	usagePercent := 0.0
	if totalCapacity > 0 {
		usagePercent = float64(usedCapacity) / float64(totalCapacity) * 100
	}

	return &StorageMetrics{
		TotalDisks:         totalDisks,
		TotalRepositories:  totalRepos,
		TotalCapacityBytes: totalCapacity,
		UsedCapacityBytes:  usedCapacity,
		AvailableBytes:     availableBytes,
		UsagePercent:       usagePercent,
	}, nil
}

// collectSCSTMetrics collects SCST metrics
func (s *MetricsService) collectSCSTMetrics(ctx context.Context) (*SCSTMetrics, error) {
	// Count targets
	targetQuery := `SELECT COUNT(*) FROM scst_targets`
	var totalTargets int
	if err := s.db.QueryRowContext(ctx, targetQuery).Scan(&totalTargets); err != nil {
		return nil, fmt.Errorf("failed to count targets: %w", err)
	}

	// Count LUNs
	lunQuery := `SELECT COUNT(*) FROM scst_luns`
	var totalLUNs int
	if err := s.db.QueryRowContext(ctx, lunQuery).Scan(&totalLUNs); err != nil {
		return nil, fmt.Errorf("failed to count LUNs: %w", err)
	}

	// Count initiators
	initQuery := `SELECT COUNT(*) FROM scst_initiators`
	var totalInitiators int
	if err := s.db.QueryRowContext(ctx, initQuery).Scan(&totalInitiators); err != nil {
		return nil, fmt.Errorf("failed to count initiators: %w", err)
	}

	// Active targets (targets with at least one LUN)
	activeQuery := `
		SELECT COUNT(DISTINCT target_id)
		FROM scst_luns
	`
	var activeTargets int
	if err := s.db.QueryRowContext(ctx, activeQuery).Scan(&activeTargets); err != nil {
		activeTargets = 0 // Not critical
	}

	return &SCSTMetrics{
		TotalTargets:    totalTargets,
		TotalLUNs:       totalLUNs,
		TotalInitiators: totalInitiators,
		ActiveTargets:   activeTargets,
	}, nil
}
```
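`collectStorageMetrics` derives available bytes and a usage percentage from the summed repository counters, guarding the division against an empty total the same way the health-check SQL guards it with `NULLIF`. The arithmetic in isolation; `usage` is an illustrative helper, not part of the package:

```go
package main

import "fmt"

// usage mirrors collectStorageMetrics: available = total - used, and the
// percentage defaults to 0 when there is no capacity to divide by.
func usage(totalBytes, usedBytes int64) (availableBytes int64, usagePercent float64) {
	availableBytes = totalBytes - usedBytes
	if totalBytes > 0 {
		usagePercent = float64(usedBytes) / float64(totalBytes) * 100
	}
	return availableBytes, usagePercent
}

func main() {
	// 4 TB total, 3 TB used
	avail, pct := usage(4_000_000_000_000, 3_000_000_000_000)
	fmt.Println(avail, pct) // 1000000000000 75

	// No repositories yet: the guard keeps the percentage at 0
	avail, pct = usage(0, 0)
	fmt.Println(avail, pct) // 0 0
}
```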
```go
// collectTapeMetrics collects physical tape metrics
func (s *MetricsService) collectTapeMetrics(ctx context.Context) (*TapeMetrics, error) {
	// Count libraries
	libQuery := `SELECT COUNT(*) FROM physical_tape_libraries`
	var totalLibraries int
	if err := s.db.QueryRowContext(ctx, libQuery).Scan(&totalLibraries); err != nil {
		return nil, fmt.Errorf("failed to count libraries: %w", err)
	}

	// Count drives
	driveQuery := `SELECT COUNT(*) FROM physical_tape_drives`
	var totalDrives int
	if err := s.db.QueryRowContext(ctx, driveQuery).Scan(&totalDrives); err != nil {
		return nil, fmt.Errorf("failed to count drives: %w", err)
	}

	// Count slots
	slotQuery := `
		SELECT COUNT(*), COUNT(CASE WHEN tape_barcode IS NOT NULL THEN 1 END)
		FROM physical_tape_slots
	`
	var totalSlots, occupiedSlots int
	if err := s.db.QueryRowContext(ctx, slotQuery).Scan(&totalSlots, &occupiedSlots); err != nil {
		return nil, fmt.Errorf("failed to count slots: %w", err)
	}

	return &TapeMetrics{
		TotalLibraries: totalLibraries,
		TotalDrives:    totalDrives,
		TotalSlots:     totalSlots,
		OccupiedSlots:  occupiedSlots,
	}, nil
}

// collectVTLMetrics collects VTL metrics
func (s *MetricsService) collectVTLMetrics(ctx context.Context) (*VTLMetrics, error) {
	// Count libraries
	libQuery := `SELECT COUNT(*) FROM virtual_tape_libraries`
	var totalLibraries int
	if err := s.db.QueryRowContext(ctx, libQuery).Scan(&totalLibraries); err != nil {
		return nil, fmt.Errorf("failed to count VTL libraries: %w", err)
	}

	// Count drives
	driveQuery := `SELECT COUNT(*) FROM virtual_tape_drives`
	var totalDrives int
	if err := s.db.QueryRowContext(ctx, driveQuery).Scan(&totalDrives); err != nil {
		return nil, fmt.Errorf("failed to count VTL drives: %w", err)
	}

	// Count tapes
	tapeQuery := `SELECT COUNT(*) FROM virtual_tapes`
	var totalTapes int
	if err := s.db.QueryRowContext(ctx, tapeQuery).Scan(&totalTapes); err != nil {
		return nil, fmt.Errorf("failed to count VTL tapes: %w", err)
	}

	// Count active drives (drives with loaded tape)
	activeQuery := `
		SELECT COUNT(*)
		FROM virtual_tape_drives
		WHERE loaded_tape_id IS NOT NULL
	`
	var activeDrives int
	if err := s.db.QueryRowContext(ctx, activeQuery).Scan(&activeDrives); err != nil {
		activeDrives = 0
	}

	// Count loaded tapes
	loadedQuery := `
		SELECT COUNT(*)
		FROM virtual_tapes
		WHERE is_loaded = true
	`
	var loadedTapes int
	if err := s.db.QueryRowContext(ctx, loadedQuery).Scan(&loadedTapes); err != nil {
		loadedTapes = 0
	}

	return &VTLMetrics{
		TotalLibraries: totalLibraries,
		TotalDrives:    totalDrives,
		TotalTapes:     totalTapes,
		ActiveDrives:   activeDrives,
		LoadedTapes:    loadedTapes,
	}, nil
}

// collectTaskMetrics collects task execution metrics
func (s *MetricsService) collectTaskMetrics(ctx context.Context) (*TaskMetrics, error) {
	// Count tasks by status
	query := `
		SELECT
			COUNT(*) as total,
			COUNT(*) FILTER (WHERE status = 'pending') as pending,
			COUNT(*) FILTER (WHERE status = 'running') as running,
			COUNT(*) FILTER (WHERE status = 'completed') as completed,
			COUNT(*) FILTER (WHERE status = 'failed') as failed
		FROM tasks
	`
	var total, pending, running, completed, failed int
	if err := s.db.QueryRowContext(ctx, query).Scan(&total, &pending, &running, &completed, &failed); err != nil {
		return nil, fmt.Errorf("failed to count tasks: %w", err)
	}

	// Calculate average duration for completed tasks
	avgDurationQuery := `
		SELECT AVG(EXTRACT(EPOCH FROM (completed_at - started_at)))
		FROM tasks
		WHERE status = 'completed' AND started_at IS NOT NULL AND completed_at IS NOT NULL
	`
	var avgDuration sql.NullFloat64
	if err := s.db.QueryRowContext(ctx, avgDurationQuery).Scan(&avgDuration); err != nil {
		avgDuration = sql.NullFloat64{Valid: false}
	}

	avgDurationSec := 0.0
	if avgDuration.Valid {
		avgDurationSec = avgDuration.Float64
	}

	return &TaskMetrics{
		TotalTasks:     total,
		PendingTasks:   pending,
		RunningTasks:   running,
		CompletedTasks: completed,
		FailedTasks:    failed,
		AvgDurationSec: avgDurationSec,
	}, nil
}
```
// getSystemUptime reads system uptime from /proc/uptime
|
||||
// Returns uptime in seconds, or service uptime as fallback
|
||||
func (s *MetricsService) getSystemUptime() float64 {
|
||||
file, err := os.Open("/proc/uptime")
|
||||
if err != nil {
|
||||
// Fallback to service uptime if /proc/uptime is not available
|
||||
s.logger.Warn("Failed to read /proc/uptime, using service uptime", "error", err)
|
||||
return time.Since(s.startTime).Seconds()
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
scanner := bufio.NewScanner(file)
|
||||
if !scanner.Scan() {
|
||||
// Fallback to service uptime if file is empty
|
||||
s.logger.Warn("Failed to read /proc/uptime content, using service uptime")
|
||||
return time.Since(s.startTime).Seconds()
|
||||
}
|
||||
|
||||
line := strings.TrimSpace(scanner.Text())
|
||||
fields := strings.Fields(line)
|
||||
if len(fields) == 0 {
|
||||
// Fallback to service uptime if no data
|
||||
s.logger.Warn("No data in /proc/uptime, using service uptime")
|
||||
return time.Since(s.startTime).Seconds()
|
||||
}
|
||||
|
||||
// First field is system uptime in seconds
|
||||
uptimeSeconds, err := strconv.ParseFloat(fields[0], 64)
|
||||
if err != nil {
|
||||
// Fallback to service uptime if parsing fails
|
||||
s.logger.Warn("Failed to parse /proc/uptime, using service uptime", "error", err)
|
||||
return time.Since(s.startTime).Seconds()
|
||||
}
|
||||
|
||||
return uptimeSeconds
|
||||
}
|
||||
|
||||
// getSystemMemory reads system memory from /proc/meminfo
|
||||
// Returns total, used (in bytes), and usage percentage
|
||||
func (s *MetricsService) getSystemMemory() (int64, int64, float64) {
|
||||
file, err := os.Open("/proc/meminfo")
|
||||
if err != nil {
|
||||
s.logger.Warn("Failed to read /proc/meminfo, using Go runtime memory", "error", err)
|
||||
var m runtime.MemStats
|
||||
runtime.ReadMemStats(&m)
|
||||
memoryUsed := int64(m.Alloc)
|
||||
memoryTotal := int64(m.Sys)
|
||||
memoryPercent := float64(memoryUsed) / float64(memoryTotal) * 100
|
||||
return memoryTotal, memoryUsed, memoryPercent
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
var memTotal, memAvailable, memFree, buffers, cached int64
|
||||
scanner := bufio.NewScanner(file)
|
||||
|
||||
for scanner.Scan() {
|
||||
line := strings.TrimSpace(scanner.Text())
|
||||
if line == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Parse line like "MemTotal: 16375596 kB"
|
||||
// or "MemTotal: 16375596" (some systems don't have unit)
|
||||
colonIdx := strings.Index(line, ":")
|
||||
if colonIdx == -1 {
|
||||
continue
|
||||
}
|
||||
|
||||
key := strings.TrimSpace(line[:colonIdx])
|
||||
valuePart := strings.TrimSpace(line[colonIdx+1:])
|
||||
|
||||
// Split value part to get number (ignore unit like "kB")
|
||||
fields := strings.Fields(valuePart)
|
||||
if len(fields) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
value, err := strconv.ParseInt(fields[0], 10, 64)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Values in /proc/meminfo are in KB, convert to bytes
|
||||
valueBytes := value * 1024
|
||||
|
||||
switch key {
|
||||
case "MemTotal":
|
||||
memTotal = valueBytes
|
||||
case "MemAvailable":
|
||||
memAvailable = valueBytes
|
||||
case "MemFree":
|
||||
memFree = valueBytes
|
||||
case "Buffers":
|
||||
buffers = valueBytes
|
||||
case "Cached":
|
||||
cached = valueBytes
|
||||
}
|
||||
}
|
||||
|
||||
if err := scanner.Err(); err != nil {
|
||||
s.logger.Warn("Error scanning /proc/meminfo", "error", err)
|
||||
}
|
||||
|
||||
if memTotal == 0 {
|
||||
s.logger.Warn("Failed to get MemTotal from /proc/meminfo, using Go runtime memory", "memTotal", memTotal)
|
||||
var m runtime.MemStats
|
||||
runtime.ReadMemStats(&m)
|
||||
memoryUsed := int64(m.Alloc)
|
||||
memoryTotal := int64(m.Sys)
|
||||
memoryPercent := float64(memoryUsed) / float64(memoryTotal) * 100
|
||||
return memoryTotal, memoryUsed, memoryPercent
|
||||
}
|
||||
|
||||
// Calculate used memory
|
||||
// If MemAvailable exists (kernel 3.14+), use it for more accurate calculation
|
||||
var memoryUsed int64
|
||||
if memAvailable > 0 {
|
||||
memoryUsed = memTotal - memAvailable
|
||||
} else {
|
||||
// Fallback: MemTotal - MemFree - Buffers - Cached
|
||||
memoryUsed = memTotal - memFree - buffers - cached
|
||||
if memoryUsed < 0 {
|
||||
memoryUsed = memTotal - memFree
|
||||
}
|
||||
}
|
||||
|
||||
memoryPercent := float64(memoryUsed) / float64(memTotal) * 100
|
||||
|
||||
s.logger.Debug("System memory stats",
|
||||
"memTotal", memTotal,
|
||||
"memAvailable", memAvailable,
|
||||
"memoryUsed", memoryUsed,
|
||||
"memoryPercent", memoryPercent)
|
||||
|
||||
return memTotal, memoryUsed, memoryPercent
|
||||
}
|
// getCPUUsage reads CPU usage from /proc/stat.
// Requires two readings to calculate a percentage.
func (s *MetricsService) getCPUUsage() float64 {
	currentCPU, err := s.readCPUStats()
	if err != nil {
		s.logger.Warn("Failed to read CPU stats", "error", err)
		return 0.0
	}

	// If this is the first reading, store it and return 0
	if s.lastCPU == nil {
		s.lastCPU = currentCPU
		s.lastCPUTime = time.Now()
		return 0.0
	}

	// Calculate time difference
	timeDiff := time.Since(s.lastCPUTime).Seconds()
	if timeDiff < 0.1 {
		// Too soon since the last sample; return 0 rather than a noisy value
		return 0.0
	}

	// Calculate total CPU time
	prevTotal := s.lastCPU.user + s.lastCPU.nice + s.lastCPU.system + s.lastCPU.idle +
		s.lastCPU.iowait + s.lastCPU.irq + s.lastCPU.softirq + s.lastCPU.steal + s.lastCPU.guest
	currTotal := currentCPU.user + currentCPU.nice + currentCPU.system + currentCPU.idle +
		currentCPU.iowait + currentCPU.irq + currentCPU.softirq + currentCPU.steal + currentCPU.guest

	// Calculate idle time
	prevIdle := s.lastCPU.idle + s.lastCPU.iowait
	currIdle := currentCPU.idle + currentCPU.iowait

	// Calculate deltas since the previous reading
	totalDiff := currTotal - prevTotal
	idleDiff := currIdle - prevIdle

	if totalDiff == 0 {
		return 0.0
	}

	// Calculate CPU usage percentage
	usagePercent := 100.0 * (1.0 - float64(idleDiff)/float64(totalDiff))

	// Update last CPU stats
	s.lastCPU = currentCPU
	s.lastCPUTime = time.Now()

	return usagePercent
}

// readCPUStats reads CPU statistics from /proc/stat
func (s *MetricsService) readCPUStats() (*cpuStats, error) {
	file, err := os.Open("/proc/stat")
	if err != nil {
		return nil, fmt.Errorf("failed to open /proc/stat: %w", err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	if !scanner.Scan() {
		return nil, fmt.Errorf("failed to read /proc/stat")
	}

	line := strings.TrimSpace(scanner.Text())
	if !strings.HasPrefix(line, "cpu ") {
		return nil, fmt.Errorf("invalid /proc/stat format")
	}

	fields := strings.Fields(line)
	if len(fields) < 8 {
		return nil, fmt.Errorf("insufficient CPU stats fields")
	}

	stats := &cpuStats{}
	stats.user, _ = strconv.ParseUint(fields[1], 10, 64)
	stats.nice, _ = strconv.ParseUint(fields[2], 10, 64)
	stats.system, _ = strconv.ParseUint(fields[3], 10, 64)
	stats.idle, _ = strconv.ParseUint(fields[4], 10, 64)
	stats.iowait, _ = strconv.ParseUint(fields[5], 10, 64)
	stats.irq, _ = strconv.ParseUint(fields[6], 10, 64)
	stats.softirq, _ = strconv.ParseUint(fields[7], 10, 64)

	if len(fields) > 8 {
		stats.steal, _ = strconv.ParseUint(fields[8], 10, 64)
	}
	if len(fields) > 9 {
		stats.guest, _ = strconv.ParseUint(fields[9], 10, 64)
	}

	return stats, nil
}
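The delta-based percentage computed by `getCPUUsage` can be illustrated with a self-contained sketch. The `snapshot` type and the sample jiffy counts below are hypothetical stand-ins for two successive `/proc/stat` readings; only the arithmetic mirrors the service code.

```go
package main

import "fmt"

// snapshot mirrors the counters parsed from the "cpu " line of /proc/stat
// (illustrative only; the real service uses its internal cpuStats type).
type snapshot struct {
	user, nice, system, idle, iowait, irq, softirq, steal, guest uint64
}

func total(s snapshot) uint64 {
	return s.user + s.nice + s.system + s.idle + s.iowait +
		s.irq + s.softirq + s.steal + s.guest
}

// usagePercent applies the same formula as getCPUUsage:
// 100 * (1 - idleDelta/totalDelta), where idle includes iowait.
func usagePercent(prev, curr snapshot) float64 {
	totalDiff := total(curr) - total(prev)
	idleDiff := (curr.idle + curr.iowait) - (prev.idle + prev.iowait)
	if totalDiff == 0 {
		return 0.0
	}
	return 100.0 * (1.0 - float64(idleDiff)/float64(totalDiff))
}

func main() {
	prev := snapshot{user: 100, system: 50, idle: 800, iowait: 50}
	// 140 new jiffies elapsed, 60 of them idle+iowait -> ~57.1% busy
	curr := snapshot{user: 160, system: 70, idle: 850, iowait: 60}
	fmt.Printf("%.1f\n", usagePercent(prev, curr))
}
```

This is why the service stores `lastCPU`: a single reading is a cumulative counter since boot, so only the difference between two readings yields a meaningful utilization figure.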
233	backend/internal/monitoring/rules.go	Normal file
@@ -0,0 +1,233 @@
package monitoring

import (
	"context"
	"fmt"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// AlertRule represents a rule that can trigger alerts
type AlertRule struct {
	ID          string
	Name        string
	Source      AlertSource
	Condition   AlertCondition
	Severity    AlertSeverity
	Enabled     bool
	Description string
}

// NewAlertRule creates a new alert rule (helper function)
func NewAlertRule(id, name string, source AlertSource, condition AlertCondition, severity AlertSeverity, enabled bool, description string) *AlertRule {
	return &AlertRule{
		ID:          id,
		Name:        name,
		Source:      source,
		Condition:   condition,
		Severity:    severity,
		Enabled:     enabled,
		Description: description,
	}
}

// AlertCondition represents a condition that triggers an alert
type AlertCondition interface {
	Evaluate(ctx context.Context, db *database.DB, logger *logger.Logger) (bool, *Alert, error)
}

// AlertRuleEngine manages alert rules and evaluation
type AlertRuleEngine struct {
	db       *database.DB
	logger   *logger.Logger
	service  *AlertService
	rules    []*AlertRule
	interval time.Duration
	stopCh   chan struct{}
}

// NewAlertRuleEngine creates a new alert rule engine
func NewAlertRuleEngine(db *database.DB, log *logger.Logger, service *AlertService) *AlertRuleEngine {
	return &AlertRuleEngine{
		db:       db,
		logger:   log,
		service:  service,
		rules:    []*AlertRule{},
		interval: 30 * time.Second, // Check every 30 seconds
		stopCh:   make(chan struct{}),
	}
}

// RegisterRule registers an alert rule
func (e *AlertRuleEngine) RegisterRule(rule *AlertRule) {
	e.rules = append(e.rules, rule)
	e.logger.Info("Alert rule registered", "rule_id", rule.ID, "name", rule.Name)
}

// Start starts the alert rule engine background monitoring
func (e *AlertRuleEngine) Start(ctx context.Context) {
	e.logger.Info("Starting alert rule engine", "interval", e.interval)
	ticker := time.NewTicker(e.interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			e.logger.Info("Alert rule engine stopped")
			return
		case <-e.stopCh:
			e.logger.Info("Alert rule engine stopped")
			return
		case <-ticker.C:
			e.evaluateRules(ctx)
		}
	}
}

// Stop stops the alert rule engine
func (e *AlertRuleEngine) Stop() {
	close(e.stopCh)
}

// evaluateRules evaluates all registered rules
func (e *AlertRuleEngine) evaluateRules(ctx context.Context) {
	for _, rule := range e.rules {
		if !rule.Enabled {
			continue
		}

		triggered, alert, err := rule.Condition.Evaluate(ctx, e.db, e.logger)
		if err != nil {
			e.logger.Error("Error evaluating alert rule",
				"rule_id", rule.ID,
				"rule_name", rule.Name,
				"error", err,
			)
			continue
		}

		if triggered && alert != nil {
			alert.Severity = rule.Severity
			alert.Source = rule.Source
			if err := e.service.CreateAlert(ctx, alert); err != nil {
				e.logger.Error("Failed to create alert from rule",
					"rule_id", rule.ID,
					"error", err,
				)
			}
		}
	}
}

// Built-in alert conditions

// StorageCapacityCondition fires when storage usage reaches or exceeds a threshold
type StorageCapacityCondition struct {
	ThresholdPercent float64
}

func (c *StorageCapacityCondition) Evaluate(ctx context.Context, db *database.DB, logger *logger.Logger) (bool, *Alert, error) {
	query := `
		SELECT id, name, used_bytes, total_bytes
		FROM disk_repositories
		WHERE is_active = true
	`

	rows, err := db.QueryContext(ctx, query)
	if err != nil {
		return false, nil, fmt.Errorf("failed to query repositories: %w", err)
	}
	defer rows.Close()

	for rows.Next() {
		var id, name string
		var usedBytes, totalBytes int64

		if err := rows.Scan(&id, &name, &usedBytes, &totalBytes); err != nil {
			continue
		}

		if totalBytes == 0 {
			continue
		}

		usagePercent := float64(usedBytes) / float64(totalBytes) * 100

		if usagePercent >= c.ThresholdPercent {
			alert := &Alert{
				Title:        fmt.Sprintf("Storage repository %s is %d%% full", name, int(usagePercent)),
				Message:      fmt.Sprintf("Repository %s has used %d%% of its capacity (%d/%d bytes)", name, int(usagePercent), usedBytes, totalBytes),
				ResourceType: "repository",
				ResourceID:   id,
				Metadata: map[string]interface{}{
					"usage_percent": usagePercent,
					"used_bytes":    usedBytes,
					"total_bytes":   totalBytes,
				},
			}
			return true, alert, nil
		}
	}

	return false, nil, nil
}

// TaskFailureCondition checks for failed tasks
type TaskFailureCondition struct {
	LookbackMinutes int
}

func (c *TaskFailureCondition) Evaluate(ctx context.Context, db *database.DB, logger *logger.Logger) (bool, *Alert, error) {
	// LookbackMinutes is an int, so the Sprintf interpolation below cannot
	// inject arbitrary SQL, but it must stay an integer field.
	query := `
		SELECT id, type, error_message, created_at
		FROM tasks
		WHERE status = 'failed'
		AND created_at > NOW() - INTERVAL '%d minutes'
		ORDER BY created_at DESC
		LIMIT 1
	`

	rows, err := db.QueryContext(ctx, fmt.Sprintf(query, c.LookbackMinutes))
	if err != nil {
		return false, nil, fmt.Errorf("failed to query failed tasks: %w", err)
	}
	defer rows.Close()

	if rows.Next() {
		var id, taskType, errorMsg string
		var createdAt time.Time

		if err := rows.Scan(&id, &taskType, &errorMsg, &createdAt); err != nil {
			return false, nil, err
		}

		alert := &Alert{
			Title:        fmt.Sprintf("Task %s failed", taskType),
			Message:      errorMsg,
			ResourceType: "task",
			ResourceID:   id,
			Metadata: map[string]interface{}{
				"task_type":  taskType,
				"created_at": createdAt,
			},
		}
		return true, alert, nil
	}

	return false, nil, nil
}

// SystemServiceDownCondition checks if critical services are down
type SystemServiceDownCondition struct {
	CriticalServices []string
}

func (c *SystemServiceDownCondition) Evaluate(ctx context.Context, db *database.DB, logger *logger.Logger) (bool, *Alert, error) {
	// This would check systemd service status.
	// For now, we return false as this requires systemd integration;
	// it is a placeholder for future implementation.
	return false, nil, nil
}
793	backend/internal/scst/handler.go	Normal file
@@ -0,0 +1,793 @@
package scst

import (
	"context"
	"fmt"
	"net/http"
	"strings"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/tasks"
	"github.com/gin-gonic/gin"
	"github.com/go-playground/validator/v10"
)

// Handler handles SCST-related API requests
type Handler struct {
	service    *Service
	taskEngine *tasks.Engine
	db         *database.DB
	logger     *logger.Logger
}

// NewHandler creates a new SCST handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
	return &Handler{
		service:    NewService(db, log),
		taskEngine: tasks.NewEngine(db, log),
		db:         db,
		logger:     log,
	}
}

// ListTargets lists all SCST targets
func (h *Handler) ListTargets(c *gin.Context) {
	targets, err := h.service.ListTargets(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list targets", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list targets"})
		return
	}

	// Ensure we return an empty array instead of null
	if targets == nil {
		targets = []Target{}
	}

	c.JSON(http.StatusOK, gin.H{"targets": targets})
}

// GetTarget retrieves a target by ID
func (h *Handler) GetTarget(c *gin.Context) {
	targetID := c.Param("id")

	target, err := h.service.GetTarget(c.Request.Context(), targetID)
	if err != nil {
		if err.Error() == "target not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
			return
		}
		h.logger.Error("Failed to get target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get target"})
		return
	}

	// Get LUNs
	luns, err := h.service.GetTargetLUNs(c.Request.Context(), targetID)
	if err != nil {
		h.logger.Warn("Failed to get LUNs", "target_id", targetID, "error", err)
		// Return empty array instead of nil
		luns = []LUN{}
	}

	// Get initiator groups
	groups, err2 := h.service.GetTargetInitiatorGroups(c.Request.Context(), targetID)
	if err2 != nil {
		h.logger.Warn("Failed to get initiator groups", "target_id", targetID, "error", err2)
		groups = []InitiatorGroup{}
	}

	c.JSON(http.StatusOK, gin.H{
		"target":           target,
		"luns":             luns,
		"initiator_groups": groups,
	})
}

// CreateTargetRequest represents a target creation request
type CreateTargetRequest struct {
	IQN                 string `json:"iqn" binding:"required"`
	TargetType          string `json:"target_type" binding:"required"`
	Name                string `json:"name" binding:"required"`
	Description         string `json:"description"`
	SingleInitiatorOnly bool   `json:"single_initiator_only"`
}

// CreateTarget creates a new SCST target
func (h *Handler) CreateTarget(c *gin.Context) {
	var req CreateTargetRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	userID, _ := c.Get("user_id")

	target := &Target{
		IQN:                 req.IQN,
		TargetType:          req.TargetType,
		Name:                req.Name,
		Description:         req.Description,
		IsActive:            true,
		SingleInitiatorOnly: req.SingleInitiatorOnly || req.TargetType == "vtl" || req.TargetType == "physical_tape",
		CreatedBy:           userID.(string),
	}

	if err := h.service.CreateTarget(c.Request.Context(), target); err != nil {
		h.logger.Error("Failed to create target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Set alias to name for frontend compatibility (same as ListTargets)
	target.Alias = target.Name
	// LUNCount will be 0 for a newly created target
	target.LUNCount = 0

	c.JSON(http.StatusCreated, target)
}

// AddLUNRequest represents a LUN addition request
type AddLUNRequest struct {
	DeviceName  string `json:"device_name" binding:"required"`
	DevicePath  string `json:"device_path" binding:"required"`
	LUNNumber   int    `json:"lun_number"` // Note: cannot use binding:"required" for int as 0 is valid
	HandlerType string `json:"handler_type" binding:"required"`
}

// AddLUN adds a LUN to a target
func (h *Handler) AddLUN(c *gin.Context) {
	targetID := c.Param("id")

	target, err := h.service.GetTarget(c.Request.Context(), targetID)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
		return
	}

	var req AddLUNRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Failed to bind AddLUN request", "error", err)
		// Provide a more detailed error message
		if validationErr, ok := err.(validator.ValidationErrors); ok {
			var errorMessages []string
			for _, fieldErr := range validationErr {
				errorMessages = append(errorMessages, fmt.Sprintf("%s is required", fieldErr.Field()))
			}
			c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("validation failed: %s", strings.Join(errorMessages, ", "))})
		} else {
			// Extract the error message without the full struct name
			errMsg := err.Error()
			if idx := strings.Index(errMsg, "Key: '"); idx >= 0 {
				// Extract the field name from the error message
				fieldStart := idx + 6 // Length of "Key: '"
				if fieldEnd := strings.Index(errMsg[fieldStart:], "'"); fieldEnd >= 0 {
					fieldName := errMsg[fieldStart : fieldStart+fieldEnd]
					c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid or missing field: %s", fieldName)})
				} else {
					c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request format"})
				}
			} else {
				c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid request: %v", err)})
			}
		}
		return
	}

	// Validate required fields (additional check in case binding doesn't catch it)
	if req.DeviceName == "" || req.DevicePath == "" || req.HandlerType == "" {
		h.logger.Error("Missing required fields in AddLUN request", "device_name", req.DeviceName, "device_path", req.DevicePath, "handler_type", req.HandlerType)
		c.JSON(http.StatusBadRequest, gin.H{"error": "device_name, device_path, and handler_type are required"})
		return
	}

	// Validate LUN number range
	if req.LUNNumber < 0 || req.LUNNumber > 255 {
		c.JSON(http.StatusBadRequest, gin.H{"error": "lun_number must be between 0 and 255"})
		return
	}

	if err := h.service.AddLUN(c.Request.Context(), target.IQN, req.DeviceName, req.DevicePath, req.LUNNumber, req.HandlerType); err != nil {
		h.logger.Error("Failed to add LUN", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "LUN added successfully"})
}

// RemoveLUN removes a LUN from a target
func (h *Handler) RemoveLUN(c *gin.Context) {
	targetID := c.Param("id")
	lunID := c.Param("lunId")

	// Get target
	target, err := h.service.GetTarget(c.Request.Context(), targetID)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
		return
	}

	// Get the LUN to resolve its LUN number
	var lunNumber int
	err = h.db.QueryRowContext(c.Request.Context(),
		"SELECT lun_number FROM scst_luns WHERE id = $1 AND target_id = $2",
		lunID, targetID,
	).Scan(&lunNumber)
	if err != nil {
		if strings.Contains(err.Error(), "no rows") {
			// LUN already deleted from the database; treat the delete as
			// idempotent and report success.
			h.logger.Info("LUN not found in database, may already be deleted", "lun_id", lunID, "target_id", targetID)
			c.JSON(http.StatusOK, gin.H{"message": "LUN already removed or not found"})
			return
		}
		h.logger.Error("Failed to get LUN", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get LUN"})
		return
	}

	// Remove LUN
	if err := h.service.RemoveLUN(c.Request.Context(), target.IQN, lunNumber); err != nil {
		h.logger.Error("Failed to remove LUN", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "LUN removed successfully"})
}

// AddInitiatorRequest represents an initiator addition request
type AddInitiatorRequest struct {
	InitiatorIQN string `json:"initiator_iqn" binding:"required"`
}

// AddInitiator adds an initiator to a target
func (h *Handler) AddInitiator(c *gin.Context) {
	targetID := c.Param("id")

	target, err := h.service.GetTarget(c.Request.Context(), targetID)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
		return
	}

	var req AddInitiatorRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	if err := h.service.AddInitiator(c.Request.Context(), target.IQN, req.InitiatorIQN); err != nil {
		h.logger.Error("Failed to add initiator", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Initiator added successfully"})
}

// AddInitiatorToGroupRequest represents a request to add an initiator to a group
type AddInitiatorToGroupRequest struct {
	InitiatorIQN string `json:"initiator_iqn" binding:"required"`
}

// AddInitiatorToGroup adds an initiator to a specific group
func (h *Handler) AddInitiatorToGroup(c *gin.Context) {
	groupID := c.Param("id")

	var req AddInitiatorToGroupRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		validationErrors := make(map[string]string)
		if ve, ok := err.(validator.ValidationErrors); ok {
			for _, fe := range ve {
				field := strings.ToLower(fe.Field())
				validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
			}
		}
		c.JSON(http.StatusBadRequest, gin.H{
			"error":             "invalid request",
			"validation_errors": validationErrors,
		})
		return
	}

	err := h.service.AddInitiatorToGroup(c.Request.Context(), groupID, req.InitiatorIQN)
	if err != nil {
		if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "already exists") || strings.Contains(err.Error(), "single initiator only") {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		h.logger.Error("Failed to add initiator to group", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to add initiator to group"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Initiator added to group successfully"})
}

// ListAllInitiators lists all initiators across all targets
func (h *Handler) ListAllInitiators(c *gin.Context) {
	initiators, err := h.service.ListAllInitiators(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list initiators", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list initiators"})
		return
	}

	if initiators == nil {
		initiators = []InitiatorWithTarget{}
	}

	c.JSON(http.StatusOK, gin.H{"initiators": initiators})
}

// RemoveInitiator removes an initiator
func (h *Handler) RemoveInitiator(c *gin.Context) {
	initiatorID := c.Param("id")

	if err := h.service.RemoveInitiator(c.Request.Context(), initiatorID); err != nil {
		if err.Error() == "initiator not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "initiator not found"})
			return
		}
		h.logger.Error("Failed to remove initiator", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Initiator removed successfully"})
}

// GetInitiator retrieves an initiator by ID
func (h *Handler) GetInitiator(c *gin.Context) {
	initiatorID := c.Param("id")

	initiator, err := h.service.GetInitiator(c.Request.Context(), initiatorID)
	if err != nil {
		if err.Error() == "initiator not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "initiator not found"})
			return
		}
		h.logger.Error("Failed to get initiator", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get initiator"})
		return
	}

	c.JSON(http.StatusOK, initiator)
}

// ListExtents lists all device extents
func (h *Handler) ListExtents(c *gin.Context) {
	extents, err := h.service.ListExtents(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list extents", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list extents"})
		return
	}

	if extents == nil {
		extents = []Extent{}
	}

	c.JSON(http.StatusOK, gin.H{"extents": extents})
}

// CreateExtentRequest represents a request to create an extent
type CreateExtentRequest struct {
	DeviceName  string `json:"device_name" binding:"required"`
	DevicePath  string `json:"device_path" binding:"required"`
	HandlerType string `json:"handler_type" binding:"required"`
}

// CreateExtent creates a new device extent
func (h *Handler) CreateExtent(c *gin.Context) {
	var req CreateExtentRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	if err := h.service.CreateExtent(c.Request.Context(), req.DeviceName, req.DevicePath, req.HandlerType); err != nil {
		h.logger.Error("Failed to create extent", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, gin.H{"message": "Extent created successfully"})
}

// DeleteExtent deletes a device extent
func (h *Handler) DeleteExtent(c *gin.Context) {
	deviceName := c.Param("device")

	if err := h.service.DeleteExtent(c.Request.Context(), deviceName); err != nil {
		h.logger.Error("Failed to delete extent", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Extent deleted successfully"})
}

// ApplyConfig applies SCST configuration
func (h *Handler) ApplyConfig(c *gin.Context) {
	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeApplySCST, userID.(string), map[string]interface{}{
			"operation": "apply_scst_config",
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run apply in background. Use a background context: the request
	// context is cancelled as soon as the 202 response is sent, which
	// would abort the in-flight apply.
	go func() {
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 50, "Writing SCST configuration...")

		configPath := "/etc/calypso/scst/generated.conf"
		if err := h.service.WriteConfig(ctx, configPath); err != nil {
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		h.taskEngine.UpdateProgress(ctx, taskID, 100, "SCST configuration applied")
		h.taskEngine.CompleteTask(ctx, taskID, "SCST configuration applied successfully")
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

// ListHandlers lists available SCST handlers
func (h *Handler) ListHandlers(c *gin.Context) {
	handlers, err := h.service.DetectHandlers(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list handlers", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list handlers"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"handlers": handlers})
}

// ListPortals lists all iSCSI portals
func (h *Handler) ListPortals(c *gin.Context) {
	portals, err := h.service.ListPortals(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list portals", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list portals"})
		return
	}

	// Ensure we return an empty array instead of null
	if portals == nil {
		portals = []Portal{}
	}

	c.JSON(http.StatusOK, gin.H{"portals": portals})
}

// CreatePortal creates a new portal
func (h *Handler) CreatePortal(c *gin.Context) {
	var portal Portal
	if err := c.ShouldBindJSON(&portal); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	if err := h.service.CreatePortal(c.Request.Context(), &portal); err != nil {
		h.logger.Error("Failed to create portal", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, portal)
}

// UpdatePortal updates a portal
func (h *Handler) UpdatePortal(c *gin.Context) {
	id := c.Param("id")

	var portal Portal
	if err := c.ShouldBindJSON(&portal); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	if err := h.service.UpdatePortal(c.Request.Context(), id, &portal); err != nil {
		if err.Error() == "portal not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "portal not found"})
			return
		}
		h.logger.Error("Failed to update portal", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, portal)
}

// EnableTarget enables a target
func (h *Handler) EnableTarget(c *gin.Context) {
	targetID := c.Param("id")

	target, err := h.service.GetTarget(c.Request.Context(), targetID)
	if err != nil {
		if err.Error() == "target not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
			return
		}
		h.logger.Error("Failed to get target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get target"})
		return
	}

	if err := h.service.EnableTarget(c.Request.Context(), target.IQN); err != nil {
		h.logger.Error("Failed to enable target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Target enabled successfully"})
}

// DisableTarget disables a target
func (h *Handler) DisableTarget(c *gin.Context) {
	targetID := c.Param("id")

	target, err := h.service.GetTarget(c.Request.Context(), targetID)
	if err != nil {
		if err.Error() == "target not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
			return
		}
		h.logger.Error("Failed to get target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get target"})
		return
	}

	if err := h.service.DisableTarget(c.Request.Context(), target.IQN); err != nil {
		h.logger.Error("Failed to disable target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Target disabled successfully"})
}

// DeleteTarget deletes a target
func (h *Handler) DeleteTarget(c *gin.Context) {
	targetID := c.Param("id")

	if err := h.service.DeleteTarget(c.Request.Context(), targetID); err != nil {
		if err.Error() == "target not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
			return
		}
		h.logger.Error("Failed to delete target", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Target deleted successfully"})
}

// DeletePortal deletes a portal
func (h *Handler) DeletePortal(c *gin.Context) {
	id := c.Param("id")

	if err := h.service.DeletePortal(c.Request.Context(), id); err != nil {
		if err.Error() == "portal not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "portal not found"})
			return
		}
		h.logger.Error("Failed to delete portal", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Portal deleted successfully"})
}

// GetPortal retrieves a portal by ID
func (h *Handler) GetPortal(c *gin.Context) {
	id := c.Param("id")

	portal, err := h.service.GetPortal(c.Request.Context(), id)
	if err != nil {
		if err.Error() == "portal not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "portal not found"})
			return
		}
		h.logger.Error("Failed to get portal", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get portal"})
		return
	}

	c.JSON(http.StatusOK, portal)
}

// CreateInitiatorGroupRequest represents a request to create an initiator group
type CreateInitiatorGroupRequest struct {
	TargetID  string `json:"target_id" binding:"required"`
	GroupName string `json:"group_name" binding:"required"`
}

// CreateInitiatorGroup creates a new initiator group
func (h *Handler) CreateInitiatorGroup(c *gin.Context) {
	var req CreateInitiatorGroupRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		validationErrors := make(map[string]string)
		if ve, ok := err.(validator.ValidationErrors); ok {
			for _, fe := range ve {
				field := strings.ToLower(fe.Field())
				validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
			}
		}
		c.JSON(http.StatusBadRequest, gin.H{
			"error":             "invalid request",
			"validation_errors": validationErrors,
		})
		return
	}

	group, err := h.service.CreateInitiatorGroup(c.Request.Context(), req.TargetID, req.GroupName)
	if err != nil {
		if strings.Contains(err.Error(), "already exists") || strings.Contains(err.Error(), "not found") {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		h.logger.Error("Failed to create initiator group", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create initiator group"})
		return
	}

	c.JSON(http.StatusOK, group)
}

// UpdateInitiatorGroupRequest represents a request to update an initiator group
type UpdateInitiatorGroupRequest struct {
	GroupName string `json:"group_name" binding:"required"`
}

// UpdateInitiatorGroup updates an initiator group
func (h *Handler) UpdateInitiatorGroup(c *gin.Context) {
	groupID := c.Param("id")

	var req UpdateInitiatorGroupRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		validationErrors := make(map[string]string)
		if ve, ok := err.(validator.ValidationErrors); ok {
			for _, fe := range ve {
				field := strings.ToLower(fe.Field())
				validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
			}
		}
		c.JSON(http.StatusBadRequest, gin.H{
			"error":             "invalid request",
			"validation_errors": validationErrors,
		})
||||
return
|
||||
}
|
||||
|
||||
group, err := h.service.UpdateInitiatorGroup(c.Request.Context(), groupID, req.GroupName)
|
||||
if err != nil {
|
||||
if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "already exists") {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
h.logger.Error("Failed to update initiator group", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to update initiator group"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, group)
|
||||
}
|
||||
|
||||
// DeleteInitiatorGroup deletes an initiator group
|
||||
func (h *Handler) DeleteInitiatorGroup(c *gin.Context) {
|
||||
groupID := c.Param("id")
|
||||
|
||||
err := h.service.DeleteInitiatorGroup(c.Request.Context(), groupID)
|
||||
if err != nil {
|
||||
if strings.Contains(err.Error(), "not found") {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
if strings.Contains(err.Error(), "cannot delete") || strings.Contains(err.Error(), "contains") {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
h.logger.Error("Failed to delete initiator group", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete initiator group"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"message": "initiator group deleted successfully"})
|
||||
}
|
||||
|
||||
// GetInitiatorGroup retrieves an initiator group by ID
|
||||
func (h *Handler) GetInitiatorGroup(c *gin.Context) {
|
||||
groupID := c.Param("id")
|
||||
|
||||
group, err := h.service.GetInitiatorGroup(c.Request.Context(), groupID)
|
||||
if err != nil {
|
||||
if strings.Contains(err.Error(), "not found") {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "initiator group not found"})
|
||||
return
|
||||
}
|
||||
h.logger.Error("Failed to get initiator group", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get initiator group"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, group)
|
||||
}
|
||||
|
||||
// ListAllInitiatorGroups lists all initiator groups
|
||||
func (h *Handler) ListAllInitiatorGroups(c *gin.Context) {
|
||||
groups, err := h.service.ListAllInitiatorGroups(c.Request.Context())
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to list initiator groups", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list initiator groups"})
|
||||
return
|
||||
}
|
||||
|
||||
if groups == nil {
|
||||
groups = []InitiatorGroup{}
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"groups": groups})
|
||||
}
|
||||
|
||||
// GetConfigFile reads the SCST configuration file content
|
||||
func (h *Handler) GetConfigFile(c *gin.Context) {
|
||||
configPath := c.DefaultQuery("path", "/etc/scst.conf")
|
||||
|
||||
content, err := h.service.ReadConfigFile(c.Request.Context(), configPath)
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to read config file", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"content": content,
|
||||
"path": configPath,
|
||||
})
|
||||
}
|
||||
|
||||
// UpdateConfigFile writes content to SCST configuration file
|
||||
func (h *Handler) UpdateConfigFile(c *gin.Context) {
|
||||
var req struct {
|
||||
Content string `json:"content" binding:"required"`
|
||||
Path string `json:"path"`
|
||||
}
|
||||
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
|
||||
return
|
||||
}
|
||||
|
||||
configPath := req.Path
|
||||
if configPath == "" {
|
||||
configPath = "/etc/scst.conf"
|
||||
}
|
||||
|
||||
if err := h.service.WriteConfigFile(c.Request.Context(), configPath, req.Content); err != nil {
|
||||
h.logger.Error("Failed to write config file", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"message": "Configuration file updated successfully",
|
||||
"path": configPath,
|
||||
})
|
||||
}
|
||||
2367
backend/internal/scst/service.go
Normal file
File diff suppressed because it is too large
147
backend/internal/shares/handler.go
Normal file
@@ -0,0 +1,147 @@
package shares

import (
	"net/http"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/gin-gonic/gin"
	"github.com/go-playground/validator/v10"
)

// Handler handles Shares-related API requests
type Handler struct {
	service *Service
	logger  *logger.Logger
}

// NewHandler creates a new Shares handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
	return &Handler{
		service: NewService(db, log),
		logger:  log,
	}
}

// ListShares lists all shares
func (h *Handler) ListShares(c *gin.Context) {
	shares, err := h.service.ListShares(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list shares", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list shares"})
		return
	}

	// Ensure we return an empty array instead of null
	if shares == nil {
		shares = []*Share{}
	}

	c.JSON(http.StatusOK, gin.H{"shares": shares})
}

// GetShare retrieves a share by ID
func (h *Handler) GetShare(c *gin.Context) {
	shareID := c.Param("id")

	share, err := h.service.GetShare(c.Request.Context(), shareID)
	if err != nil {
		if err.Error() == "share not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
			return
		}
		h.logger.Error("Failed to get share", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get share"})
		return
	}

	c.JSON(http.StatusOK, share)
}

// CreateShare creates a new share
func (h *Handler) CreateShare(c *gin.Context) {
	var req CreateShareRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid create share request", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
		return
	}

	// Validate request
	validate := validator.New()
	if err := validate.Struct(req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "validation failed: " + err.Error()})
		return
	}

	// Get user ID from context (set by auth middleware)
	userID, exists := c.Get("user_id")
	if !exists {
		c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
		return
	}

	share, err := h.service.CreateShare(c.Request.Context(), &req, userID.(string))
	if err != nil {
		if err.Error() == "dataset not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "dataset not found"})
			return
		}
		if err.Error() == "only filesystem datasets can be shared (not volumes)" {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		if err.Error() == "at least one protocol (NFS or SMB) must be enabled" {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		h.logger.Error("Failed to create share", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, share)
}

// UpdateShare updates an existing share
func (h *Handler) UpdateShare(c *gin.Context) {
	shareID := c.Param("id")

	var req UpdateShareRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid update share request", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
		return
	}

	share, err := h.service.UpdateShare(c.Request.Context(), shareID, &req)
	if err != nil {
		if err.Error() == "share not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
			return
		}
		h.logger.Error("Failed to update share", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, share)
}

// DeleteShare deletes a share
func (h *Handler) DeleteShare(c *gin.Context) {
	shareID := c.Param("id")

	err := h.service.DeleteShare(c.Request.Context(), shareID)
	if err != nil {
		if err.Error() == "share not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
			return
		}
		h.logger.Error("Failed to delete share", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "share deleted successfully"})
}
806
backend/internal/shares/service.go
Normal file
@@ -0,0 +1,806 @@
package shares

import (
	"context"
	"database/sql"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/lib/pq"
)

// Service handles Shares (CIFS/NFS) operations
type Service struct {
	db     *database.DB
	logger *logger.Logger
}

// NewService creates a new Shares service
func NewService(db *database.DB, log *logger.Logger) *Service {
	return &Service{
		db:     db,
		logger: log,
	}
}

// Share represents a filesystem share (NFS/SMB)
type Share struct {
	ID            string    `json:"id"`
	DatasetID     string    `json:"dataset_id"`
	DatasetName   string    `json:"dataset_name"`
	MountPoint    string    `json:"mount_point"`
	ShareType     string    `json:"share_type"` // 'nfs', 'smb', 'both'
	NFSEnabled    bool      `json:"nfs_enabled"`
	NFSOptions    string    `json:"nfs_options,omitempty"`
	NFSClients    []string  `json:"nfs_clients,omitempty"`
	SMBEnabled    bool      `json:"smb_enabled"`
	SMBShareName  string    `json:"smb_share_name,omitempty"`
	SMBPath       string    `json:"smb_path,omitempty"`
	SMBComment    string    `json:"smb_comment,omitempty"`
	SMBGuestOK    bool      `json:"smb_guest_ok"`
	SMBReadOnly   bool      `json:"smb_read_only"`
	SMBBrowseable bool      `json:"smb_browseable"`
	IsActive      bool      `json:"is_active"`
	CreatedAt     time.Time `json:"created_at"`
	UpdatedAt     time.Time `json:"updated_at"`
	CreatedBy     string    `json:"created_by"`
}

// ListShares lists all shares
func (s *Service) ListShares(ctx context.Context) ([]*Share, error) {
	query := `
		SELECT
			zs.id, zs.dataset_id, zd.name as dataset_name, zd.mount_point,
			zs.share_type, zs.nfs_enabled, zs.nfs_options, zs.nfs_clients,
			zs.smb_enabled, zs.smb_share_name, zs.smb_path, zs.smb_comment,
			zs.smb_guest_ok, zs.smb_read_only, zs.smb_browseable,
			zs.is_active, zs.created_at, zs.updated_at, zs.created_by
		FROM zfs_shares zs
		JOIN zfs_datasets zd ON zs.dataset_id = zd.id
		ORDER BY zd.name
	`

	rows, err := s.db.QueryContext(ctx, query)
	if err != nil {
		if strings.Contains(err.Error(), "does not exist") {
			s.logger.Warn("zfs_shares table does not exist, returning empty list")
			return []*Share{}, nil
		}
		return nil, fmt.Errorf("failed to list shares: %w", err)
	}
	defer rows.Close()

	var shares []*Share
	for rows.Next() {
		var share Share
		var mountPoint sql.NullString
		var nfsOptions sql.NullString
		var smbShareName sql.NullString
		var smbPath sql.NullString
		var smbComment sql.NullString
		var nfsClients []string

		err := rows.Scan(
			&share.ID, &share.DatasetID, &share.DatasetName, &mountPoint,
			&share.ShareType, &share.NFSEnabled, &nfsOptions, pq.Array(&nfsClients),
			&share.SMBEnabled, &smbShareName, &smbPath, &smbComment,
			&share.SMBGuestOK, &share.SMBReadOnly, &share.SMBBrowseable,
			&share.IsActive, &share.CreatedAt, &share.UpdatedAt, &share.CreatedBy,
		)
		if err != nil {
			s.logger.Error("Failed to scan share row", "error", err)
			continue
		}

		share.NFSClients = nfsClients

		if mountPoint.Valid {
			share.MountPoint = mountPoint.String
		}
		if nfsOptions.Valid {
			share.NFSOptions = nfsOptions.String
		}
		if smbShareName.Valid {
			share.SMBShareName = smbShareName.String
		}
		if smbPath.Valid {
			share.SMBPath = smbPath.String
		}
		if smbComment.Valid {
			share.SMBComment = smbComment.String
		}

		shares = append(shares, &share)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating share rows: %w", err)
	}

	return shares, nil
}

// GetShare retrieves a share by ID
func (s *Service) GetShare(ctx context.Context, shareID string) (*Share, error) {
	query := `
		SELECT
			zs.id, zs.dataset_id, zd.name as dataset_name, zd.mount_point,
			zs.share_type, zs.nfs_enabled, zs.nfs_options, zs.nfs_clients,
			zs.smb_enabled, zs.smb_share_name, zs.smb_path, zs.smb_comment,
			zs.smb_guest_ok, zs.smb_read_only, zs.smb_browseable,
			zs.is_active, zs.created_at, zs.updated_at, zs.created_by
		FROM zfs_shares zs
		JOIN zfs_datasets zd ON zs.dataset_id = zd.id
		WHERE zs.id = $1
	`

	var share Share
	var mountPoint sql.NullString
	var nfsOptions sql.NullString
	var smbShareName sql.NullString
	var smbPath sql.NullString
	var smbComment sql.NullString
	var nfsClients []string

	err := s.db.QueryRowContext(ctx, query, shareID).Scan(
		&share.ID, &share.DatasetID, &share.DatasetName, &mountPoint,
		&share.ShareType, &share.NFSEnabled, &nfsOptions, pq.Array(&nfsClients),
		&share.SMBEnabled, &smbShareName, &smbPath, &smbComment,
		&share.SMBGuestOK, &share.SMBReadOnly, &share.SMBBrowseable,
		&share.IsActive, &share.CreatedAt, &share.UpdatedAt, &share.CreatedBy,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("share not found")
		}
		return nil, fmt.Errorf("failed to get share: %w", err)
	}

	share.NFSClients = nfsClients

	if mountPoint.Valid {
		share.MountPoint = mountPoint.String
	}
	if nfsOptions.Valid {
		share.NFSOptions = nfsOptions.String
	}
	if smbShareName.Valid {
		share.SMBShareName = smbShareName.String
	}
	if smbPath.Valid {
		share.SMBPath = smbPath.String
	}
	if smbComment.Valid {
		share.SMBComment = smbComment.String
	}

	return &share, nil
}

// CreateShareRequest represents a share creation request
type CreateShareRequest struct {
	DatasetID     string   `json:"dataset_id" binding:"required"`
	NFSEnabled    bool     `json:"nfs_enabled"`
	NFSOptions    string   `json:"nfs_options"`
	NFSClients    []string `json:"nfs_clients"`
	SMBEnabled    bool     `json:"smb_enabled"`
	SMBShareName  string   `json:"smb_share_name"`
	SMBPath       string   `json:"smb_path"`
	SMBComment    string   `json:"smb_comment"`
	SMBGuestOK    bool     `json:"smb_guest_ok"`
	SMBReadOnly   bool     `json:"smb_read_only"`
	SMBBrowseable bool     `json:"smb_browseable"`
}

// CreateShare creates a new share
func (s *Service) CreateShare(ctx context.Context, req *CreateShareRequest, userID string) (*Share, error) {
	// Validate dataset exists and is a filesystem (not volume)
	// req.DatasetID can be either UUID or dataset name
	var datasetID, datasetType, datasetName, mountPoint string
	var mountPointNull sql.NullString

	// Try to find by ID first (UUID)
	err := s.db.QueryRowContext(ctx,
		"SELECT id, type, name, mount_point FROM zfs_datasets WHERE id = $1",
		req.DatasetID,
	).Scan(&datasetID, &datasetType, &datasetName, &mountPointNull)

	// If not found by ID, try by name
	if err == sql.ErrNoRows {
		err = s.db.QueryRowContext(ctx,
			"SELECT id, type, name, mount_point FROM zfs_datasets WHERE name = $1",
			req.DatasetID,
		).Scan(&datasetID, &datasetType, &datasetName, &mountPointNull)
	}

	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("dataset not found")
		}
		return nil, fmt.Errorf("failed to validate dataset: %w", err)
	}

	if mountPointNull.Valid {
		mountPoint = mountPointNull.String
	} else {
		mountPoint = "none"
	}

	if datasetType != "filesystem" {
		return nil, fmt.Errorf("only filesystem datasets can be shared (not volumes)")
	}

	// Determine share type
	shareType := "none"
	if req.NFSEnabled && req.SMBEnabled {
		shareType = "both"
	} else if req.NFSEnabled {
		shareType = "nfs"
	} else if req.SMBEnabled {
		shareType = "smb"
	} else {
		return nil, fmt.Errorf("at least one protocol (NFS or SMB) must be enabled")
	}

	// Set default NFS options if not provided
	nfsOptions := req.NFSOptions
	if nfsOptions == "" {
		nfsOptions = "rw,sync,no_subtree_check"
	}

	// Set default SMB share name if not provided
	smbShareName := req.SMBShareName
	if smbShareName == "" {
		// Extract dataset name from full path (e.g., "pool/dataset" -> "dataset")
		parts := strings.Split(datasetName, "/")
		smbShareName = parts[len(parts)-1]
	}

	// Set SMB path (use mount_point if available, otherwise use dataset name)
	smbPath := req.SMBPath
	if smbPath == "" {
		if mountPoint != "" && mountPoint != "none" {
			smbPath = mountPoint
		} else {
			smbPath = fmt.Sprintf("/mnt/%s", strings.ReplaceAll(datasetName, "/", "_"))
		}
	}

	// Insert into database
	query := `
		INSERT INTO zfs_shares (
			dataset_id, share_type, nfs_enabled, nfs_options, nfs_clients,
			smb_enabled, smb_share_name, smb_path, smb_comment,
			smb_guest_ok, smb_read_only, smb_browseable, is_active, created_by
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
		RETURNING id, created_at, updated_at
	`

	var shareID string
	var createdAt, updatedAt time.Time

	// Handle nfs_clients array - use empty array if nil
	nfsClients := req.NFSClients
	if nfsClients == nil {
		nfsClients = []string{}
	}

	err = s.db.QueryRowContext(ctx, query,
		datasetID, shareType, req.NFSEnabled, nfsOptions, pq.Array(nfsClients),
		req.SMBEnabled, smbShareName, smbPath, req.SMBComment,
		req.SMBGuestOK, req.SMBReadOnly, req.SMBBrowseable, true, userID,
	).Scan(&shareID, &createdAt, &updatedAt)
	if err != nil {
		return nil, fmt.Errorf("failed to create share: %w", err)
	}

	// Apply NFS export if enabled
	if req.NFSEnabled {
		if err := s.applyNFSExport(ctx, mountPoint, nfsOptions, req.NFSClients); err != nil {
			s.logger.Error("Failed to apply NFS export", "error", err, "share_id", shareID)
			// Don't fail the creation, but log the error
		}
	}

	// Apply SMB share if enabled
	if req.SMBEnabled {
		if err := s.applySMBShare(ctx, smbShareName, smbPath, req.SMBComment, req.SMBGuestOK, req.SMBReadOnly, req.SMBBrowseable); err != nil {
			s.logger.Error("Failed to apply SMB share", "error", err, "share_id", shareID)
			// Don't fail the creation, but log the error
		}
	}

	// Return the created share
	return s.GetShare(ctx, shareID)
}

// UpdateShareRequest represents a share update request
type UpdateShareRequest struct {
	NFSEnabled    *bool     `json:"nfs_enabled"`
	NFSOptions    *string   `json:"nfs_options"`
	NFSClients    *[]string `json:"nfs_clients"`
	SMBEnabled    *bool     `json:"smb_enabled"`
	SMBShareName  *string   `json:"smb_share_name"`
	SMBComment    *string   `json:"smb_comment"`
	SMBGuestOK    *bool     `json:"smb_guest_ok"`
	SMBReadOnly   *bool     `json:"smb_read_only"`
	SMBBrowseable *bool     `json:"smb_browseable"`
	IsActive      *bool     `json:"is_active"`
}

// UpdateShare updates an existing share
func (s *Service) UpdateShare(ctx context.Context, shareID string, req *UpdateShareRequest) (*Share, error) {
	// Get current share
	share, err := s.GetShare(ctx, shareID)
	if err != nil {
		return nil, err
	}

	// Build update query dynamically
	updates := []string{}
	args := []interface{}{}
	argIndex := 1

	if req.NFSEnabled != nil {
		updates = append(updates, fmt.Sprintf("nfs_enabled = $%d", argIndex))
		args = append(args, *req.NFSEnabled)
		argIndex++
	}
	if req.NFSOptions != nil {
		updates = append(updates, fmt.Sprintf("nfs_options = $%d", argIndex))
		args = append(args, *req.NFSOptions)
		argIndex++
	}
	if req.NFSClients != nil {
		updates = append(updates, fmt.Sprintf("nfs_clients = $%d", argIndex))
		args = append(args, pq.Array(*req.NFSClients))
		argIndex++
	}
	if req.SMBEnabled != nil {
		updates = append(updates, fmt.Sprintf("smb_enabled = $%d", argIndex))
		args = append(args, *req.SMBEnabled)
		argIndex++
	}
	if req.SMBShareName != nil {
		updates = append(updates, fmt.Sprintf("smb_share_name = $%d", argIndex))
		args = append(args, *req.SMBShareName)
		argIndex++
	}
	if req.SMBComment != nil {
		updates = append(updates, fmt.Sprintf("smb_comment = $%d", argIndex))
		args = append(args, *req.SMBComment)
		argIndex++
	}
	if req.SMBGuestOK != nil {
		updates = append(updates, fmt.Sprintf("smb_guest_ok = $%d", argIndex))
		args = append(args, *req.SMBGuestOK)
		argIndex++
	}
	if req.SMBReadOnly != nil {
		updates = append(updates, fmt.Sprintf("smb_read_only = $%d", argIndex))
		args = append(args, *req.SMBReadOnly)
		argIndex++
	}
	if req.SMBBrowseable != nil {
		updates = append(updates, fmt.Sprintf("smb_browseable = $%d", argIndex))
		args = append(args, *req.SMBBrowseable)
		argIndex++
	}
	if req.IsActive != nil {
		updates = append(updates, fmt.Sprintf("is_active = $%d", argIndex))
		args = append(args, *req.IsActive)
		argIndex++
	}

	if len(updates) == 0 {
		return share, nil // No changes
	}

	// Update share_type based on enabled protocols
	nfsEnabled := share.NFSEnabled
	smbEnabled := share.SMBEnabled
	if req.NFSEnabled != nil {
		nfsEnabled = *req.NFSEnabled
	}
	if req.SMBEnabled != nil {
		smbEnabled = *req.SMBEnabled
	}

	shareType := "none"
	if nfsEnabled && smbEnabled {
		shareType = "both"
	} else if nfsEnabled {
		shareType = "nfs"
	} else if smbEnabled {
		shareType = "smb"
	}

	updates = append(updates, fmt.Sprintf("share_type = $%d", argIndex))
	args = append(args, shareType)
	argIndex++

	updates = append(updates, "updated_at = NOW()")
	args = append(args, shareID)

	query := fmt.Sprintf(`
		UPDATE zfs_shares
		SET %s
		WHERE id = $%d
	`, strings.Join(updates, ", "), argIndex)

	_, err = s.db.ExecContext(ctx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("failed to update share: %w", err)
	}

	// Re-apply NFS export if NFS is enabled
	if nfsEnabled {
		nfsOptions := share.NFSOptions
		if req.NFSOptions != nil {
			nfsOptions = *req.NFSOptions
		}
		nfsClients := share.NFSClients
		if req.NFSClients != nil {
			nfsClients = *req.NFSClients
		}
		if err := s.applyNFSExport(ctx, share.MountPoint, nfsOptions, nfsClients); err != nil {
			s.logger.Error("Failed to apply NFS export", "error", err, "share_id", shareID)
		}
	} else {
		// Remove NFS export if disabled
		if err := s.removeNFSExport(ctx, share.MountPoint); err != nil {
			s.logger.Error("Failed to remove NFS export", "error", err, "share_id", shareID)
		}
	}

	// Re-apply SMB share if SMB is enabled
	if smbEnabled {
		smbShareName := share.SMBShareName
		if req.SMBShareName != nil {
			smbShareName = *req.SMBShareName
		}
		smbPath := share.SMBPath
		smbComment := share.SMBComment
		if req.SMBComment != nil {
			smbComment = *req.SMBComment
		}
		smbGuestOK := share.SMBGuestOK
		if req.SMBGuestOK != nil {
			smbGuestOK = *req.SMBGuestOK
		}
		smbReadOnly := share.SMBReadOnly
		if req.SMBReadOnly != nil {
			smbReadOnly = *req.SMBReadOnly
		}
		smbBrowseable := share.SMBBrowseable
		if req.SMBBrowseable != nil {
			smbBrowseable = *req.SMBBrowseable
		}
		if err := s.applySMBShare(ctx, smbShareName, smbPath, smbComment, smbGuestOK, smbReadOnly, smbBrowseable); err != nil {
			s.logger.Error("Failed to apply SMB share", "error", err, "share_id", shareID)
		}
	} else {
		// Remove SMB share if disabled
		if err := s.removeSMBShare(ctx, share.SMBShareName); err != nil {
			s.logger.Error("Failed to remove SMB share", "error", err, "share_id", shareID)
		}
	}

	return s.GetShare(ctx, shareID)
}

// DeleteShare deletes a share
func (s *Service) DeleteShare(ctx context.Context, shareID string) error {
	// Get share to get mount point and share name
	share, err := s.GetShare(ctx, shareID)
	if err != nil {
		return err
	}

	// Remove NFS export
	if share.NFSEnabled {
		if err := s.removeNFSExport(ctx, share.MountPoint); err != nil {
			s.logger.Error("Failed to remove NFS export", "error", err, "share_id", shareID)
		}
	}

	// Remove SMB share
	if share.SMBEnabled {
		if err := s.removeSMBShare(ctx, share.SMBShareName); err != nil {
			s.logger.Error("Failed to remove SMB share", "error", err, "share_id", shareID)
		}
	}

	// Delete from database
	_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_shares WHERE id = $1", shareID)
	if err != nil {
		return fmt.Errorf("failed to delete share: %w", err)
	}

	return nil
}
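`UpdateShare` above builds its SQL dynamically, numbering Postgres placeholders as clauses are appended and reserving the final index for the `WHERE id`. The same bookkeeping in isolation; `buildUpdate` is a hypothetical helper, not part of this file:

```go
package main

import (
	"fmt"
	"strings"
)

// buildUpdate appends one "col = $n" clause per provided value, keeping
// the placeholder index in sync with the args slice. After the loop,
// argIndex points one past the last SET placeholder, which is exactly
// the position of the id appended for the WHERE clause.
func buildUpdate(cols []string, vals []interface{}, id string) (string, []interface{}) {
	updates := []string{}
	args := []interface{}{}
	argIndex := 1
	for i, col := range cols {
		updates = append(updates, fmt.Sprintf("%s = $%d", col, argIndex))
		args = append(args, vals[i])
		argIndex++
	}
	args = append(args, id)
	query := fmt.Sprintf("UPDATE zfs_shares SET %s WHERE id = $%d",
		strings.Join(updates, ", "), argIndex)
	return query, args
}

func main() {
	q, args := buildUpdate(
		[]string{"nfs_enabled", "smb_comment"},
		[]interface{}{true, "backup"},
		"abc",
	)
	fmt.Println(q)         // UPDATE zfs_shares SET nfs_enabled = $1, smb_comment = $2 WHERE id = $3
	fmt.Println(len(args)) // 3
}
```

The invariant to preserve is that `len(args)` always equals the highest `$n` emitted; clauses with no placeholder (like `updated_at = NOW()` in the real code) must not consume an index.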

// applyNFSExport adds or updates an NFS export in /etc/exports
func (s *Service) applyNFSExport(ctx context.Context, mountPoint, options string, clients []string) error {
	if mountPoint == "" || mountPoint == "none" {
		return fmt.Errorf("mount point is required for NFS export")
	}

	// Build client list (default to * if empty)
	clientList := "*"
	if len(clients) > 0 {
		clientList = strings.Join(clients, " ")
	}

	// Build export line
	exportLine := fmt.Sprintf("%s %s(%s)", mountPoint, clientList, options)

	// Read current /etc/exports
	exportsPath := "/etc/exports"
	exportsContent, err := os.ReadFile(exportsPath)
	if err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to read exports file: %w", err)
	}

	lines := strings.Split(string(exportsContent), "\n")
	var newLines []string
	found := false

	// Check if this mount point already exists
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			newLines = append(newLines, line)
			continue
		}

		// Check if this line is for our mount point
		if strings.HasPrefix(line, mountPoint+" ") {
			newLines = append(newLines, exportLine)
			found = true
		} else {
			newLines = append(newLines, line)
		}
	}

	// Add if not found
	if !found {
		newLines = append(newLines, exportLine)
	}

	// Write back to file
	newContent := strings.Join(newLines, "\n") + "\n"
	if err := os.WriteFile(exportsPath, []byte(newContent), 0644); err != nil {
		return fmt.Errorf("failed to write exports file: %w", err)
	}

	// Apply exports
	cmd := exec.CommandContext(ctx, "sudo", "exportfs", "-ra")
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("failed to apply exports: %s: %w", string(output), err)
	}

	s.logger.Info("NFS export applied", "mount_point", mountPoint, "clients", clientList)
	return nil
}

// removeNFSExport removes an NFS export from /etc/exports
func (s *Service) removeNFSExport(ctx context.Context, mountPoint string) error {
	if mountPoint == "" || mountPoint == "none" {
		return nil // Nothing to remove
	}

	exportsPath := "/etc/exports"
	exportsContent, err := os.ReadFile(exportsPath)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // File doesn't exist, nothing to remove
		}
		return fmt.Errorf("failed to read exports file: %w", err)
	}

	lines := strings.Split(string(exportsContent), "\n")
	var newLines []string

	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			newLines = append(newLines, line)
			continue
		}

		// Skip lines for this mount point
		if strings.HasPrefix(line, mountPoint+" ") {
			continue
		}

		newLines = append(newLines, line)
	}

	// Write back to file
	newContent := strings.Join(newLines, "\n")
	if newContent != "" && !strings.HasSuffix(newContent, "\n") {
		newContent += "\n"
	}
	if err := os.WriteFile(exportsPath, []byte(newContent), 0644); err != nil {
		return fmt.Errorf("failed to write exports file: %w", err)
	}

	// Apply exports
	cmd := exec.CommandContext(ctx, "sudo", "exportfs", "-ra")
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("failed to apply exports: %s: %w", string(output), err)
	}

	s.logger.Info("NFS export removed", "mount_point", mountPoint)
	return nil
}
|
||||
// applySMBShare adds or updates an SMB share in /etc/samba/smb.conf
|
||||
func (s *Service) applySMBShare(ctx context.Context, shareName, path, comment string, guestOK, readOnly, browseable bool) error {
|
||||
if shareName == "" {
|
||||
return fmt.Errorf("SMB share name is required")
|
||||
}
|
||||
if path == "" {
|
||||
return fmt.Errorf("SMB path is required")
|
||||
}
|
||||
|
||||
smbConfPath := "/etc/samba/smb.conf"
|
||||
smbContent, err := os.ReadFile(smbConfPath)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to read smb.conf: %w", err)
|
||||
}
|
||||
|
||||
// Parse and update smb.conf
|
||||
lines := strings.Split(string(smbContent), "\n")
|
||||
var newLines []string
|
||||
inShare := false
|
||||
shareStart := -1
|
||||
|
||||
for i, line := range lines {
|
||||
trimmed := strings.TrimSpace(line)
|
||||
|
||||
// Check if we're entering our share section
|
||||
if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
|
||||
sectionName := trimmed[1 : len(trimmed)-1]
|
||||
if sectionName == shareName {
|
||||
inShare = true
|
||||
shareStart = i
|
||||
continue
|
||||
} else if inShare {
|
||||
// We've left our share section, insert the share config here
|
||||
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
|
||||
inShare = false
|
||||
}
|
||||
}
|
||||
|
||||
if inShare {
|
||||
// Skip lines until we find the next section or end of file
|
||||
continue
|
||||
}
|
||||
|
||||
newLines = append(newLines, line)
|
||||
}
|
||||
|
||||
// If we were still in the share at the end, add it
|
||||
if inShare {
|
||||
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
|
||||
} else if shareStart == -1 {
|
||||
// Share doesn't exist, add it at the end
|
||||
newLines = append(newLines, "")
|
||||
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
|
||||
}
|
||||
|
||||
// Write back to file
|
||||
newContent := strings.Join(newLines, "\n")
|
||||
if err := os.WriteFile(smbConfPath, []byte(newContent), 0644); err != nil {
|
||||
return fmt.Errorf("failed to write smb.conf: %w", err)
|
||||
}
|
||||
|
||||
// Reload Samba
|
||||
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "reload", "smbd")
|
||||
if output, err := cmd.CombinedOutput(); err != nil {
|
||||
// Try restart if reload fails
|
||||
cmd = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "smbd")
|
||||
if output2, err2 := cmd.CombinedOutput(); err2 != nil {
|
||||
return fmt.Errorf("failed to reload/restart smbd: %s / %s: %w", string(output), string(output2), err2)
|
||||
}
|
||||
}
|
||||
|
||||
s.logger.Info("SMB share applied", "share_name", shareName, "path", path)
|
||||
return nil
|
||||
}
|
||||
|
||||
// buildSMBShareConfig builds the SMB share configuration block
|
||||
func (s *Service) buildSMBShareConfig(shareName, path, comment string, guestOK, readOnly, browseable bool) string {
|
||||
var config []string
|
||||
config = append(config, fmt.Sprintf("[%s]", shareName))
|
||||
if comment != "" {
|
||||
config = append(config, fmt.Sprintf(" comment = %s", comment))
|
||||
}
|
||||
config = append(config, fmt.Sprintf(" path = %s", path))
|
||||
if guestOK {
|
||||
config = append(config, " guest ok = yes")
|
||||
} else {
|
||||
config = append(config, " guest ok = no")
|
||||
}
|
||||
if readOnly {
|
||||
config = append(config, " read only = yes")
|
||||
} else {
|
||||
config = append(config, " read only = no")
|
||||
}
|
||||
if browseable {
|
||||
config = append(config, " browseable = yes")
|
||||
} else {
|
||||
config = append(config, " browseable = no")
|
||||
}
|
||||
return strings.Join(config, "\n")
|
||||
}
|
||||
|
||||
// removeSMBShare removes an SMB share from /etc/samba/smb.conf
|
||||
func (s *Service) removeSMBShare(ctx context.Context, shareName string) error {
|
||||
if shareName == "" {
|
||||
return nil // Nothing to remove
|
||||
}
|
||||
|
||||
smbConfPath := "/etc/samba/smb.conf"
|
||||
smbContent, err := os.ReadFile(smbConfPath)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return nil // File doesn't exist, nothing to remove
|
||||
}
|
||||
return fmt.Errorf("failed to read smb.conf: %w", err)
|
||||
}
|
||||
|
||||
lines := strings.Split(string(smbContent), "\n")
|
||||
var newLines []string
|
||||
inShare := false
|
||||
|
||||
for _, line := range lines {
|
||||
trimmed := strings.TrimSpace(line)
|
||||
|
||||
// Check if we're entering our share section
|
||||
if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
|
||||
sectionName := trimmed[1 : len(trimmed)-1]
|
||||
if sectionName == shareName {
|
||||
inShare = true
|
||||
continue
|
||||
} else if inShare {
|
||||
// We've left our share section
|
||||
inShare = false
|
||||
}
|
||||
}
|
||||
|
||||
if inShare {
|
||||
// Skip lines in this share section
|
||||
continue
|
||||
}
|
||||
|
||||
newLines = append(newLines, line)
|
||||
}
|
||||
|
||||
// Write back to file
|
||||
newContent := strings.Join(newLines, "\n")
|
||||
if err := os.WriteFile(smbConfPath, []byte(newContent), 0644); err != nil {
|
||||
return fmt.Errorf("failed to write smb.conf: %w", err)
|
||||
}
|
||||
|
||||
// Reload Samba
|
||||
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "reload", "smbd")
|
||||
if output, err := cmd.CombinedOutput(); err != nil {
|
||||
// Try restart if reload fails
|
||||
cmd = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "smbd")
|
||||
if output2, err2 := cmd.CombinedOutput(); err2 != nil {
|
||||
return fmt.Errorf("failed to reload/restart smbd: %s / %s: %w", string(output), string(output2), err2)
|
||||
}
|
||||
}
|
||||
|
||||
s.logger.Info("SMB share removed", "share_name", shareName)
|
||||
return nil
|
||||
}
|
||||
111
backend/internal/storage/arc.go
Normal file
@@ -0,0 +1,111 @@
package storage

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/logger"
)

// ARCStats represents ZFS ARC (Adaptive Replacement Cache) statistics
type ARCStats struct {
	HitRatio     float64 `json:"hit_ratio"`     // Percentage of cache hits
	CacheUsage   float64 `json:"cache_usage"`   // Percentage of cache used
	CacheSize    int64   `json:"cache_size"`    // Current ARC size in bytes
	CacheMax     int64   `json:"cache_max"`     // Maximum ARC size in bytes
	Hits         int64   `json:"hits"`          // Total cache hits
	Misses       int64   `json:"misses"`        // Total cache misses
	DemandHits   int64   `json:"demand_hits"`   // Demand data/metadata hits
	PrefetchHits int64   `json:"prefetch_hits"` // Prefetch hits
	MRUHits      int64   `json:"mru_hits"`      // Most Recently Used hits
	MFUHits      int64   `json:"mfu_hits"`      // Most Frequently Used hits
	CollectedAt  string  `json:"collected_at"`  // Timestamp when stats were collected
}

// ARCService handles ZFS ARC statistics collection
type ARCService struct {
	logger *logger.Logger
}

// NewARCService creates a new ARC service
func NewARCService(log *logger.Logger) *ARCService {
	return &ARCService{
		logger: log,
	}
}

// GetARCStats reads and parses ARC statistics from /proc/spl/kstat/zfs/arcstats
func (s *ARCService) GetARCStats(ctx context.Context) (*ARCStats, error) {
	stats := &ARCStats{}

	// Read ARC stats file
	file, err := os.Open("/proc/spl/kstat/zfs/arcstats")
	if err != nil {
		return nil, fmt.Errorf("failed to open arcstats file: %w", err)
	}
	defer file.Close()

	// Parse the file
	scanner := bufio.NewScanner(file)
	arcData := make(map[string]int64)

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())

		// Skip empty lines and the kstat header lines
		if line == "" || strings.HasPrefix(line, "name") || strings.HasPrefix(line, "9") {
			continue
		}

		// Parse data lines like: "hits 4 311154" (name, kstat type, value)
		fields := strings.Fields(line)
		if len(fields) >= 3 {
			key := fields[0]
			// The value is in the last field
			if value, err := strconv.ParseInt(fields[len(fields)-1], 10, 64); err == nil {
				arcData[key] = value
			}
		}
	}

	if err := scanner.Err(); err != nil {
		return nil, fmt.Errorf("failed to read arcstats file: %w", err)
	}

	// Extract key metrics
	stats.Hits = arcData["hits"]
	stats.Misses = arcData["misses"]
	stats.DemandHits = arcData["demand_data_hits"] + arcData["demand_metadata_hits"]
	stats.PrefetchHits = arcData["prefetch_data_hits"] + arcData["prefetch_metadata_hits"]
	stats.MRUHits = arcData["mru_hits"]
	stats.MFUHits = arcData["mfu_hits"]

	// Current ARC size (c) and max size (c_max)
	stats.CacheSize = arcData["c"]
	stats.CacheMax = arcData["c_max"]

	// Calculate hit ratio
	totalRequests := stats.Hits + stats.Misses
	if totalRequests > 0 {
		stats.HitRatio = float64(stats.Hits) / float64(totalRequests) * 100.0
	} else {
		stats.HitRatio = 0.0
	}

	// Calculate cache usage percentage
	if stats.CacheMax > 0 {
		stats.CacheUsage = float64(stats.CacheSize) / float64(stats.CacheMax) * 100.0
	} else {
		stats.CacheUsage = 0.0
	}

	// Set collection timestamp
	stats.CollectedAt = time.Now().Format(time.RFC3339)

	return stats, nil
}
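The parsing loop in `GetARCStats` is straightforward to sketch on its own: each data line of the kstat file is "name type value", and the hit ratio is hits over hits plus misses. The following self-contained sketch (with a hardcoded sample instead of `/proc/spl/kstat/zfs/arcstats`, and simplified header handling) shows the same approach:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseArcstats parses kstat-style "name type value" lines into a map,
// skipping blank lines and the "name type data" header.
func parseArcstats(raw string) map[string]int64 {
	data := make(map[string]int64)
	for _, line := range strings.Split(raw, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "name") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) >= 3 {
			if v, err := strconv.ParseInt(fields[len(fields)-1], 10, 64); err == nil {
				data[fields[0]] = v
			}
		}
	}
	return data
}

func main() {
	sample := "name type data\nhits 4 900\nmisses 4 100\nc 4 512\nc_max 4 1024\n"
	d := parseArcstats(sample)
	hitRatio := float64(d["hits"]) / float64(d["hits"]+d["misses"]) * 100.0
	cacheUsage := float64(d["c"]) / float64(d["c_max"]) * 100.0
	fmt.Printf("hit ratio: %.1f%%, cache usage: %.1f%%\n", hitRatio, cacheUsage) // 90.0%, 50.0%
}
```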
397
backend/internal/storage/disk.go
Normal file
@@ -0,0 +1,397 @@
package storage

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// DiskService handles disk discovery and management
type DiskService struct {
	db     *database.DB
	logger *logger.Logger
}

// NewDiskService creates a new disk service
func NewDiskService(db *database.DB, log *logger.Logger) *DiskService {
	return &DiskService{
		db:     db,
		logger: log,
	}
}

// PhysicalDisk represents a physical disk
type PhysicalDisk struct {
	ID             string                 `json:"id"`
	DevicePath     string                 `json:"device_path"`
	Vendor         string                 `json:"vendor"`
	Model          string                 `json:"model"`
	SerialNumber   string                 `json:"serial_number"`
	SizeBytes      int64                  `json:"size_bytes"`
	SectorSize     int                    `json:"sector_size"`
	IsSSD          bool                   `json:"is_ssd"`
	HealthStatus   string                 `json:"health_status"`
	HealthDetails  map[string]interface{} `json:"health_details"`
	IsUsed         bool                   `json:"is_used"`
	AttachedToPool string                 `json:"attached_to_pool"` // Pool name if disk is used in a ZFS pool
	CreatedAt      time.Time              `json:"created_at"`
	UpdatedAt      time.Time              `json:"updated_at"`
}

// DiscoverDisks discovers physical disks on the system
func (s *DiskService) DiscoverDisks(ctx context.Context) ([]PhysicalDisk, error) {
	// Use lsblk to discover block devices
	cmd := exec.CommandContext(ctx, "lsblk", "-b", "-o", "NAME,SIZE,TYPE", "-J")
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to run lsblk: %w", err)
	}

	var lsblkOutput struct {
		BlockDevices []struct {
			Name string      `json:"name"`
			Size interface{} `json:"size"` // Can be string or number
			Type string      `json:"type"`
		} `json:"blockdevices"`
	}

	if err := json.Unmarshal(output, &lsblkOutput); err != nil {
		return nil, fmt.Errorf("failed to parse lsblk output: %w", err)
	}

	var disks []PhysicalDisk
	for _, device := range lsblkOutput.BlockDevices {
		// Only process disk devices (not partitions)
		if device.Type != "disk" {
			continue
		}

		devicePath := "/dev/" + device.Name

		// Skip ZFS volume block devices (zd* devices are ZFS volumes exported as block devices)
		// These are not physical disks and should not appear in physical disk list
		if strings.HasPrefix(device.Name, "zd") {
			s.logger.Debug("Skipping ZFS volume block device", "device", devicePath)
			continue
		}

		// Skip devices under /dev/zvol (ZFS volume devices in zvol directory)
		// These are virtual block devices created from ZFS volumes, not physical hardware
		if strings.HasPrefix(devicePath, "/dev/zvol/") {
			s.logger.Debug("Skipping ZFS volume device", "device", devicePath)
			continue
		}

		// Skip OS disk (disk that has root or boot partition)
		if s.isOSDisk(ctx, devicePath) {
			s.logger.Debug("Skipping OS disk", "device", devicePath)
			continue
		}

		disk, err := s.getDiskInfo(ctx, devicePath)
		if err != nil {
			s.logger.Warn("Failed to get disk info", "device", devicePath, "error", err)
			continue
		}

		// Parse size (can be string or number)
		var sizeBytes int64
		switch v := device.Size.(type) {
		case string:
			if size, err := strconv.ParseInt(v, 10, 64); err == nil {
				sizeBytes = size
			}
		case float64:
			sizeBytes = int64(v)
		case int64:
			sizeBytes = v
		case int:
			sizeBytes = int64(v)
		}
		disk.SizeBytes = sizeBytes

		disks = append(disks, *disk)
	}

	return disks, nil
}

// getDiskInfo retrieves detailed information about a disk
func (s *DiskService) getDiskInfo(ctx context.Context, devicePath string) (*PhysicalDisk, error) {
	disk := &PhysicalDisk{
		DevicePath:    devicePath,
		HealthStatus:  "unknown",
		HealthDetails: make(map[string]interface{}),
	}

	// Get disk information using udevadm
	cmd := exec.CommandContext(ctx, "udevadm", "info", "--query=property", "--name="+devicePath)
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to get udev info: %w", err)
	}

	props := parseUdevProperties(string(output))
	disk.Vendor = props["ID_VENDOR"]
	disk.Model = props["ID_MODEL"]
	disk.SerialNumber = props["ID_SERIAL_SHORT"]

	if props["ID_ATA_ROTATION_RATE"] == "0" {
		disk.IsSSD = true
	}

	// Get sector size
	if sectorSize, err := strconv.Atoi(props["ID_SECTOR_SIZE"]); err == nil {
		disk.SectorSize = sectorSize
	}

	// Check if disk is in use (part of a volume group or ZFS pool)
	disk.IsUsed = s.isDiskInUse(ctx, devicePath)

	// Check if disk is used in a ZFS pool
	poolName := s.getZFSPoolForDisk(ctx, devicePath)
	if poolName != "" {
		disk.IsUsed = true
		disk.AttachedToPool = poolName
	}

	// Get health status (simplified - would use smartctl in production)
	disk.HealthStatus = "healthy" // Placeholder

	return disk, nil
}

// parseUdevProperties parses udevadm output
func parseUdevProperties(output string) map[string]string {
	props := make(map[string]string)
	lines := strings.Split(output, "\n")
	for _, line := range lines {
		parts := strings.SplitN(line, "=", 2)
		if len(parts) == 2 {
			props[parts[0]] = parts[1]
		}
	}
	return props
}

// isDiskInUse checks if a disk is part of a volume group
func (s *DiskService) isDiskInUse(ctx context.Context, devicePath string) bool {
	cmd := exec.CommandContext(ctx, "pvdisplay", devicePath)
	err := cmd.Run()
	return err == nil
}

// getZFSPoolForDisk checks if a disk is used in a ZFS pool and returns the pool name
func (s *DiskService) getZFSPoolForDisk(ctx context.Context, devicePath string) string {
	// Extract device name (e.g., /dev/sde -> sde)
	deviceName := strings.TrimPrefix(devicePath, "/dev/")

	// Get all ZFS pools
	cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name")
	output, err := cmd.Output()
	if err != nil {
		return ""
	}

	pools := strings.Split(strings.TrimSpace(string(output)), "\n")
	for _, poolName := range pools {
		if poolName == "" {
			continue
		}

		// Check pool status for this device
		statusCmd := exec.CommandContext(ctx, "zpool", "status", poolName)
		statusOutput, err := statusCmd.Output()
		if err != nil {
			continue
		}

		statusStr := string(statusOutput)
		// Check if device is in the pool (as data disk or spare)
		if strings.Contains(statusStr, deviceName) {
			return poolName
		}
	}

	return ""
}

// isOSDisk checks if a disk is used as OS disk (has root or boot partition)
func (s *DiskService) isOSDisk(ctx context.Context, devicePath string) bool {
	// Extract device name (e.g., /dev/sda -> sda)
	deviceName := strings.TrimPrefix(devicePath, "/dev/")

	// Check if any partition of this disk is mounted as root or boot
	// Use lsblk to get mount points for this device and its children
	cmd := exec.CommandContext(ctx, "lsblk", "-n", "-o", "NAME,MOUNTPOINT", devicePath)
	output, err := cmd.Output()
	if err != nil {
		return false
	}

	lines := strings.Split(string(output), "\n")
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 2 {
			mountPoint := fields[1]
			// Check if mounted as root or boot
			if mountPoint == "/" || mountPoint == "/boot" || mountPoint == "/boot/efi" {
				return true
			}
		}
	}

	// Also check all partitions of this disk using lsblk with recursive listing
	partCmd := exec.CommandContext(ctx, "lsblk", "-n", "-o", "NAME,MOUNTPOINT", "-l")
	partOutput, err := partCmd.Output()
	if err == nil {
		partLines := strings.Split(string(partOutput), "\n")
		for _, line := range partLines {
			if strings.HasPrefix(line, deviceName) {
				fields := strings.Fields(line)
				if len(fields) >= 2 {
					mountPoint := fields[1]
					if mountPoint == "/" || mountPoint == "/boot" || mountPoint == "/boot/efi" {
						return true
					}
				}
			}
		}
	}

	return false
}

// SyncDisksToDatabase syncs discovered disks to the database
func (s *DiskService) SyncDisksToDatabase(ctx context.Context) error {
	s.logger.Info("Starting disk discovery and sync")
	disks, err := s.DiscoverDisks(ctx)
	if err != nil {
		s.logger.Error("Failed to discover disks", "error", err)
		return fmt.Errorf("failed to discover disks: %w", err)
	}

	s.logger.Info("Discovered disks", "count", len(disks))

	for _, disk := range disks {
		// Check if disk exists
		var existingID string
		err := s.db.QueryRowContext(ctx,
			"SELECT id FROM physical_disks WHERE device_path = $1",
			disk.DevicePath,
		).Scan(&existingID)

		healthDetailsJSON, _ := json.Marshal(disk.HealthDetails)

		if err == sql.ErrNoRows {
			// Insert new disk
			_, err = s.db.ExecContext(ctx, `
				INSERT INTO physical_disks (
					device_path, vendor, model, serial_number, size_bytes,
					sector_size, is_ssd, health_status, health_details, is_used
				) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
			`, disk.DevicePath, disk.Vendor, disk.Model, disk.SerialNumber,
				disk.SizeBytes, disk.SectorSize, disk.IsSSD,
				disk.HealthStatus, healthDetailsJSON, disk.IsUsed)
			if err != nil {
				s.logger.Error("Failed to insert disk", "device", disk.DevicePath, "error", err)
			}
		} else if err == nil {
			// Update existing disk
			_, err = s.db.ExecContext(ctx, `
				UPDATE physical_disks SET
					vendor = $1, model = $2, serial_number = $3,
					size_bytes = $4, sector_size = $5, is_ssd = $6,
					health_status = $7, health_details = $8, is_used = $9,
					updated_at = NOW()
				WHERE id = $10
			`, disk.Vendor, disk.Model, disk.SerialNumber,
				disk.SizeBytes, disk.SectorSize, disk.IsSSD,
				disk.HealthStatus, healthDetailsJSON, disk.IsUsed, existingID)
			if err != nil {
				s.logger.Error("Failed to update disk", "device", disk.DevicePath, "error", err)
			} else {
				s.logger.Debug("Updated disk", "device", disk.DevicePath)
			}
		}
	}

	s.logger.Info("Disk sync completed", "total_disks", len(disks))
	return nil
}

// ListDisksFromDatabase retrieves all physical disks from the database
func (s *DiskService) ListDisksFromDatabase(ctx context.Context) ([]PhysicalDisk, error) {
	query := `
		SELECT
			id, device_path, vendor, model, serial_number, size_bytes,
			sector_size, is_ssd, health_status, health_details, is_used,
			created_at, updated_at
		FROM physical_disks
		ORDER BY device_path
	`

	rows, err := s.db.QueryContext(ctx, query)
	if err != nil {
		return nil, fmt.Errorf("failed to query disks: %w", err)
	}
	defer rows.Close()

	var disks []PhysicalDisk
	for rows.Next() {
		var disk PhysicalDisk
		var healthDetailsJSON []byte
		var attachedToPool sql.NullString

		err := rows.Scan(
			&disk.ID, &disk.DevicePath, &disk.Vendor, &disk.Model,
			&disk.SerialNumber, &disk.SizeBytes, &disk.SectorSize,
			&disk.IsSSD, &disk.HealthStatus, &healthDetailsJSON,
			&disk.IsUsed, &disk.CreatedAt, &disk.UpdatedAt,
		)
		if err != nil {
			s.logger.Warn("Failed to scan disk row", "error", err)
			continue
		}

		// Parse health details JSON
		if len(healthDetailsJSON) > 0 {
			if err := json.Unmarshal(healthDetailsJSON, &disk.HealthDetails); err != nil {
				s.logger.Warn("Failed to parse health details", "error", err)
				disk.HealthDetails = make(map[string]interface{})
			}
		} else {
			disk.HealthDetails = make(map[string]interface{})
		}

		// Get ZFS pool attachment if disk is used
		if disk.IsUsed {
			err := s.db.QueryRowContext(ctx,
				`SELECT zp.name FROM zfs_pools zp
				 INNER JOIN zfs_pool_disks zpd ON zp.id = zpd.pool_id
				 WHERE zpd.disk_id = $1
				 LIMIT 1`,
				disk.ID,
			).Scan(&attachedToPool)
			if err == nil && attachedToPool.Valid {
				disk.AttachedToPool = attachedToPool.String
			}
		}

		disks = append(disks, disk)
	}

	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("error iterating disk rows: %w", err)
	}

	return disks, nil
}
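The `interface{}` size field in `DiscoverDisks` exists because `lsblk -J` emits `SIZE` as a JSON string in older versions and as a number in newer ones. A self-contained sketch of that tolerant decoding (with a hardcoded sample payload, and a helper name `sizeToBytes` introduced here for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// sizeToBytes accepts lsblk's SIZE column as either a JSON string or a
// JSON number (encoding/json decodes numbers into float64 for interface{}).
func sizeToBytes(v interface{}) int64 {
	switch s := v.(type) {
	case string:
		if n, err := strconv.ParseInt(s, 10, 64); err == nil {
			return n
		}
	case float64:
		return int64(s)
	}
	return 0
}

func main() {
	var out struct {
		BlockDevices []struct {
			Name string      `json:"name"`
			Size interface{} `json:"size"`
		} `json:"blockdevices"`
	}
	// One string-valued and one number-valued size, as different lsblk versions emit.
	raw := `{"blockdevices":[{"name":"sda","size":"500107862016"},{"name":"sdb","size":1000204886016}]}`
	if err := json.Unmarshal([]byte(raw), &out); err != nil {
		panic(err)
	}
	for _, d := range out.BlockDevices {
		fmt.Println(d.Name, sizeToBytes(d.Size))
	}
}
```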
65
backend/internal/storage/disk_monitor.go
Normal file
@@ -0,0 +1,65 @@
package storage

import (
	"context"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// DiskMonitor handles periodic disk discovery and sync to database
type DiskMonitor struct {
	diskService *DiskService
	logger      *logger.Logger
	interval    time.Duration
	stopCh      chan struct{}
}

// NewDiskMonitor creates a new disk monitor service
func NewDiskMonitor(db *database.DB, log *logger.Logger, interval time.Duration) *DiskMonitor {
	return &DiskMonitor{
		diskService: NewDiskService(db, log),
		logger:      log,
		interval:    interval,
		stopCh:      make(chan struct{}),
	}
}

// Start starts the disk monitor background service
func (m *DiskMonitor) Start(ctx context.Context) {
	m.logger.Info("Starting disk monitor service", "interval", m.interval)
	ticker := time.NewTicker(m.interval)
	defer ticker.Stop()

	// Run initial sync immediately
	m.syncDisks(ctx)

	for {
		select {
		case <-ctx.Done():
			m.logger.Info("Disk monitor service stopped")
			return
		case <-m.stopCh:
			m.logger.Info("Disk monitor service stopped")
			return
		case <-ticker.C:
			m.syncDisks(ctx)
		}
	}
}

// Stop stops the disk monitor service
func (m *DiskMonitor) Stop() {
	close(m.stopCh)
}

// syncDisks performs disk discovery and sync to database
func (m *DiskMonitor) syncDisks(ctx context.Context) {
	m.logger.Debug("Running periodic disk sync")
	if err := m.diskService.SyncDisksToDatabase(ctx); err != nil {
		m.logger.Error("Periodic disk sync failed", "error", err)
	} else {
		m.logger.Debug("Periodic disk sync completed")
	}
}
511
backend/internal/storage/handler.go
Normal file
@@ -0,0 +1,511 @@
package storage

import (
	"context"
	"fmt"
	"net/http"
	"strings"

	"github.com/atlasos/calypso/internal/common/cache"
	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/tasks"
	"github.com/gin-gonic/gin"
)

// Handler handles storage-related API requests
type Handler struct {
	diskService *DiskService
	lvmService  *LVMService
	zfsService  *ZFSService
	arcService  *ARCService
	taskEngine  *tasks.Engine
	db          *database.DB
	logger      *logger.Logger
	cache       *cache.Cache // Cache for invalidation
}

// SetCache sets the cache instance for cache invalidation
func (h *Handler) SetCache(c *cache.Cache) {
	h.cache = c
}

// NewHandler creates a new storage handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
	return &Handler{
		diskService: NewDiskService(db, log),
		lvmService:  NewLVMService(db, log),
		zfsService:  NewZFSService(db, log),
		arcService:  NewARCService(log),
		taskEngine:  tasks.NewEngine(db, log),
		db:          db,
		logger:      log,
	}
}

// ListDisks lists all physical disks from database
func (h *Handler) ListDisks(c *gin.Context) {
	disks, err := h.diskService.ListDisksFromDatabase(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list disks", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list disks"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"disks": disks})
}

// SyncDisks syncs discovered disks to database
func (h *Handler) SyncDisks(c *gin.Context) {
	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeRescan, userID.(string), map[string]interface{}{
			"operation": "sync_disks",
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run sync in background
	go func() {
		// Create new context for background task (don't use request context which may expire)
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 50, "Discovering disks...")
		h.logger.Info("Starting disk sync", "task_id", taskID)

		if err := h.diskService.SyncDisksToDatabase(ctx); err != nil {
			h.logger.Error("Disk sync failed", "task_id", taskID, "error", err)
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		h.logger.Info("Disk sync completed", "task_id", taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 100, "Disk sync completed")
		h.taskEngine.CompleteTask(ctx, taskID, "Disks synchronized successfully")
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

// ListVolumeGroups lists all volume groups
func (h *Handler) ListVolumeGroups(c *gin.Context) {
	vgs, err := h.lvmService.ListVolumeGroups(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list volume groups", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list volume groups"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"volume_groups": vgs})
}

// ListRepositories lists all repositories
func (h *Handler) ListRepositories(c *gin.Context) {
	repos, err := h.lvmService.ListRepositories(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list repositories", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list repositories"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"repositories": repos})
}

// GetRepository retrieves a repository by ID
func (h *Handler) GetRepository(c *gin.Context) {
	repoID := c.Param("id")

	repo, err := h.lvmService.GetRepository(c.Request.Context(), repoID)
	if err != nil {
		if err.Error() == "repository not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "repository not found"})
			return
		}
		h.logger.Error("Failed to get repository", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get repository"})
		return
	}

	c.JSON(http.StatusOK, repo)
}

// CreateRepositoryRequest represents a repository creation request
type CreateRepositoryRequest struct {
	Name        string `json:"name" binding:"required"`
	Description string `json:"description"`
	VolumeGroup string `json:"volume_group" binding:"required"`
	SizeGB      int64  `json:"size_gb" binding:"required"`
}

// CreateRepository creates a new repository
func (h *Handler) CreateRepository(c *gin.Context) {
	var req CreateRepositoryRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	userID, _ := c.Get("user_id")
	sizeBytes := req.SizeGB * 1024 * 1024 * 1024

	repo, err := h.lvmService.CreateRepository(
		c.Request.Context(),
		req.Name,
		req.VolumeGroup,
		sizeBytes,
		userID.(string),
	)
	if err != nil {
		h.logger.Error("Failed to create repository", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, repo)
}

// DeleteRepository deletes a repository
func (h *Handler) DeleteRepository(c *gin.Context) {
	repoID := c.Param("id")

	if err := h.lvmService.DeleteRepository(c.Request.Context(), repoID); err != nil {
		if err.Error() == "repository not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "repository not found"})
			return
		}
		h.logger.Error("Failed to delete repository", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"message": "repository deleted successfully"})
|
||||
}
|
||||
|
||||
// CreateZPoolRequest represents a ZFS pool creation request
|
||||
type CreateZPoolRequest struct {
|
||||
Name string `json:"name" binding:"required"`
|
||||
Description string `json:"description"`
|
||||
RaidLevel string `json:"raid_level" binding:"required"` // stripe, mirror, raidz, raidz2, raidz3
|
||||
Disks []string `json:"disks" binding:"required"` // device paths
|
||||
Compression string `json:"compression"` // off, lz4, zstd, gzip
|
||||
Deduplication bool `json:"deduplication"`
|
||||
AutoExpand bool `json:"auto_expand"`
|
||||
}
// CreateZPool creates a new ZFS pool
func (h *Handler) CreateZPool(c *gin.Context) {
	var req CreateZPoolRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid request body", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
		return
	}

	// Validate required fields
	if req.Name == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "pool name is required"})
		return
	}
	if req.RaidLevel == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "raid_level is required"})
		return
	}
	if len(req.Disks) == 0 {
		c.JSON(http.StatusBadRequest, gin.H{"error": "at least one disk is required"})
		return
	}

	userID, exists := c.Get("user_id")
	if !exists {
		h.logger.Error("User ID not found in context")
		c.JSON(http.StatusUnauthorized, gin.H{"error": "authentication required"})
		return
	}

	userIDStr, ok := userID.(string)
	if !ok {
		h.logger.Error("Invalid user ID type", "type", fmt.Sprintf("%T", userID))
		c.JSON(http.StatusInternalServerError, gin.H{"error": "invalid user context"})
		return
	}

	// Set default compression if not provided
	if req.Compression == "" {
		req.Compression = "lz4"
	}

	h.logger.Info("Creating ZFS pool request", "name", req.Name, "raid_level", req.RaidLevel, "disks", req.Disks, "compression", req.Compression)

	pool, err := h.zfsService.CreatePool(
		c.Request.Context(),
		req.Name,
		req.RaidLevel,
		req.Disks,
		req.Compression,
		req.Deduplication,
		req.AutoExpand,
		userIDStr,
	)
	if err != nil {
		h.logger.Error("Failed to create ZFS pool", "error", err, "name", req.Name, "raid_level", req.RaidLevel, "disks", req.Disks)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	h.logger.Info("ZFS pool created successfully", "pool_id", pool.ID, "name", pool.Name)
	c.JSON(http.StatusCreated, pool)
}

// ListZFSPools lists all ZFS pools
func (h *Handler) ListZFSPools(c *gin.Context) {
	pools, err := h.zfsService.ListPools(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list ZFS pools", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list ZFS pools"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"pools": pools})
}

// GetZFSPool retrieves a ZFS pool by ID
func (h *Handler) GetZFSPool(c *gin.Context) {
	poolID := c.Param("id")

	pool, err := h.zfsService.GetPool(c.Request.Context(), poolID)
	if err != nil {
		if err.Error() == "pool not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "pool not found"})
			return
		}
		h.logger.Error("Failed to get ZFS pool", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get ZFS pool"})
		return
	}

	c.JSON(http.StatusOK, pool)
}

// DeleteZFSPool deletes a ZFS pool
func (h *Handler) DeleteZFSPool(c *gin.Context) {
	poolID := c.Param("id")

	if err := h.zfsService.DeletePool(c.Request.Context(), poolID); err != nil {
		if err.Error() == "pool not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "pool not found"})
			return
		}
		h.logger.Error("Failed to delete ZFS pool", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Invalidate cache for the pools list
	if h.cache != nil {
		cacheKey := "http:/api/v1/storage/zfs/pools:"
		h.cache.Delete(cacheKey)
		h.logger.Debug("Cache invalidated for pools list", "key", cacheKey)
	}

	c.JSON(http.StatusOK, gin.H{"message": "ZFS pool deleted successfully"})
}

// AddSpareDiskRequest represents a request to add spare disks to a pool
type AddSpareDiskRequest struct {
	Disks []string `json:"disks" binding:"required"`
}

// AddSpareDisk adds spare disks to a ZFS pool
func (h *Handler) AddSpareDisk(c *gin.Context) {
	poolID := c.Param("id")

	var req AddSpareDiskRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid add spare disk request", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
		return
	}

	if len(req.Disks) == 0 {
		c.JSON(http.StatusBadRequest, gin.H{"error": "at least one disk must be specified"})
		return
	}

	if err := h.zfsService.AddSpareDisk(c.Request.Context(), poolID, req.Disks); err != nil {
		if err.Error() == "pool not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "pool not found"})
			return
		}
		h.logger.Error("Failed to add spare disks", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "Spare disks added successfully"})
}

// ListZFSDatasets lists all datasets in a ZFS pool
func (h *Handler) ListZFSDatasets(c *gin.Context) {
	poolID := c.Param("id")

	// Resolve the pool ID to its pool name
	pool, err := h.zfsService.GetPool(c.Request.Context(), poolID)
	if err != nil {
		if err.Error() == "pool not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "pool not found"})
			return
		}
		h.logger.Error("Failed to get pool", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get pool"})
		return
	}

	datasets, err := h.zfsService.ListDatasets(c.Request.Context(), pool.Name)
	if err != nil {
		h.logger.Error("Failed to list datasets", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Ensure we return an empty array instead of null
	if datasets == nil {
		datasets = []*ZFSDataset{}
	}

	c.JSON(http.StatusOK, gin.H{"datasets": datasets})
}

// CreateZFSDatasetRequest represents a request to create a ZFS dataset
type CreateZFSDatasetRequest struct {
	Name        string `json:"name" binding:"required"` // Dataset name (without pool prefix)
	Type        string `json:"type" binding:"required"` // "filesystem" or "volume"
	Compression string `json:"compression"`             // off, lz4, zstd, gzip, etc.
	Quota       int64  `json:"quota"`                   // -1 for unlimited, >0 for size
	Reservation int64  `json:"reservation"`             // 0 for none
	MountPoint  string `json:"mount_point"`             // Optional mount point
}

// CreateZFSDataset creates a new ZFS dataset in a pool
func (h *Handler) CreateZFSDataset(c *gin.Context) {
	poolID := c.Param("id")

	// Resolve the pool ID to its pool name
	pool, err := h.zfsService.GetPool(c.Request.Context(), poolID)
	if err != nil {
		if err.Error() == "pool not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "pool not found"})
			return
		}
		h.logger.Error("Failed to get pool", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get pool"})
		return
	}

	var req CreateZFSDatasetRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid create dataset request", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
		return
	}

	// Validate type
	if req.Type != "filesystem" && req.Type != "volume" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "type must be 'filesystem' or 'volume'"})
		return
	}

	// Validate mount point: volumes cannot have mount points
	if req.Type == "volume" && req.MountPoint != "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "mount point cannot be set for volume datasets (volumes are block devices for iSCSI export)"})
		return
	}

	// Validate dataset name (must not contain '/'; the pool name is prepended automatically)
	if strings.Contains(req.Name, "/") {
		c.JSON(http.StatusBadRequest, gin.H{"error": "dataset name should not contain '/' (pool name is automatically prepended)"})
		return
	}

	// Create dataset request - CreateDatasetRequest is in the same package (zfs.go)
	createReq := CreateDatasetRequest{
		Name:        req.Name,
		Type:        req.Type,
		Compression: req.Compression,
		Quota:       req.Quota,
		Reservation: req.Reservation,
		MountPoint:  req.MountPoint,
	}

	dataset, err := h.zfsService.CreateDataset(c.Request.Context(), pool.Name, createReq)
	if err != nil {
		h.logger.Error("Failed to create dataset", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusCreated, dataset)
}

// DeleteZFSDataset deletes a ZFS dataset
func (h *Handler) DeleteZFSDataset(c *gin.Context) {
	poolID := c.Param("id")
	datasetName := c.Param("dataset")

	// Resolve the pool ID to its pool name
	pool, err := h.zfsService.GetPool(c.Request.Context(), poolID)
	if err != nil {
		if err.Error() == "pool not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "pool not found"})
			return
		}
		h.logger.Error("Failed to get pool", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get pool"})
		return
	}

	// Construct full dataset name
	fullDatasetName := pool.Name + "/" + datasetName

	// Verify dataset belongs to this pool
	if !strings.HasPrefix(fullDatasetName, pool.Name+"/") {
		c.JSON(http.StatusBadRequest, gin.H{"error": "dataset does not belong to this pool"})
		return
	}

	if err := h.zfsService.DeleteDataset(c.Request.Context(), fullDatasetName); err != nil {
		if strings.Contains(err.Error(), "does not exist") || strings.Contains(err.Error(), "not found") {
			c.JSON(http.StatusNotFound, gin.H{"error": "dataset not found"})
			return
		}
		h.logger.Error("Failed to delete dataset", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	// Invalidate cache for this pool's datasets list
	if h.cache != nil {
		// Generate cache key using the same format as the cache middleware
		cacheKey := fmt.Sprintf("http:/api/v1/storage/zfs/pools/%s/datasets:", poolID)
		h.cache.Delete(cacheKey)
		h.logger.Debug("Cache invalidated for dataset list", "pool_id", poolID, "key", cacheKey)
	}

	c.JSON(http.StatusOK, gin.H{"message": "Dataset deleted successfully"})
}

// GetARCStats returns ZFS ARC statistics
func (h *Handler) GetARCStats(c *gin.Context) {
	stats, err := h.arcService.GetARCStats(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to get ARC stats", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get ARC stats: " + err.Error()})
		return
	}

	c.JSON(http.StatusOK, stats)
}
backend/internal/storage/lvm.go (new file, 291 lines):
package storage

import (
	"context"
	"database/sql"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// LVMService handles LVM operations
type LVMService struct {
	db     *database.DB
	logger *logger.Logger
}

// NewLVMService creates a new LVM service
func NewLVMService(db *database.DB, log *logger.Logger) *LVMService {
	return &LVMService{
		db:     db,
		logger: log,
	}
}

// VolumeGroup represents an LVM volume group
type VolumeGroup struct {
	ID              string    `json:"id"`
	Name            string    `json:"name"`
	SizeBytes       int64     `json:"size_bytes"`
	FreeBytes       int64     `json:"free_bytes"`
	PhysicalVolumes []string  `json:"physical_volumes"`
	CreatedAt       time.Time `json:"created_at"`
	UpdatedAt       time.Time `json:"updated_at"`
}

// Repository represents a disk repository (logical volume)
type Repository struct {
	ID                       string    `json:"id"`
	Name                     string    `json:"name"`
	Description              string    `json:"description"`
	VolumeGroup              string    `json:"volume_group"`
	LogicalVolume            string    `json:"logical_volume"`
	SizeBytes                int64     `json:"size_bytes"`
	UsedBytes                int64     `json:"used_bytes"`
	FilesystemType           string    `json:"filesystem_type"`
	MountPoint               string    `json:"mount_point"`
	IsActive                 bool      `json:"is_active"`
	WarningThresholdPercent  int       `json:"warning_threshold_percent"`
	CriticalThresholdPercent int       `json:"critical_threshold_percent"`
	CreatedAt                time.Time `json:"created_at"`
	UpdatedAt                time.Time `json:"updated_at"`
	CreatedBy                string    `json:"created_by"`
}

// ListVolumeGroups lists all volume groups
func (s *LVMService) ListVolumeGroups(ctx context.Context) ([]VolumeGroup, error) {
	cmd := exec.CommandContext(ctx, "vgs", "--units=b", "--noheadings", "--nosuffix", "-o", "vg_name,vg_size,vg_free,pv_name")
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to list volume groups: %w", err)
	}

	// vgs emits one row per PV; fold rows for the same VG into one entry
	vgMap := make(map[string]*VolumeGroup)
	lines := strings.Split(strings.TrimSpace(string(output)), "\n")

	for _, line := range lines {
		if line == "" {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) < 3 {
			continue
		}

		vgName := fields[0]
		vgSize, _ := strconv.ParseInt(fields[1], 10, 64)
		vgFree, _ := strconv.ParseInt(fields[2], 10, 64)
		pvName := ""
		if len(fields) > 3 {
			pvName = fields[3]
		}

		if vg, exists := vgMap[vgName]; exists {
			if pvName != "" {
				vg.PhysicalVolumes = append(vg.PhysicalVolumes, pvName)
			}
		} else {
			vgMap[vgName] = &VolumeGroup{
				Name:            vgName,
				SizeBytes:       vgSize,
				FreeBytes:       vgFree,
				PhysicalVolumes: []string{},
			}
			if pvName != "" {
				vgMap[vgName].PhysicalVolumes = append(vgMap[vgName].PhysicalVolumes, pvName)
			}
		}
	}

	var vgs []VolumeGroup
	for _, vg := range vgMap {
		vgs = append(vgs, *vg)
	}

	return vgs, nil
}
// CreateRepository creates a new repository (logical volume)
func (s *LVMService) CreateRepository(ctx context.Context, name, vgName string, sizeBytes int64, createdBy string) (*Repository, error) {
	// Generate logical volume name
	lvName := "calypso-" + name

	// Create logical volume
	cmd := exec.CommandContext(ctx, "lvcreate", "-L", fmt.Sprintf("%dB", sizeBytes), "-n", lvName, vgName)
	output, err := cmd.CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("failed to create logical volume: %s: %w", string(output), err)
	}

	// Get device path
	devicePath := fmt.Sprintf("/dev/%s/%s", vgName, lvName)

	// Create filesystem (XFS)
	cmd = exec.CommandContext(ctx, "mkfs.xfs", "-f", devicePath)
	output, err = cmd.CombinedOutput()
	if err != nil {
		// Cleanup: remove LV if filesystem creation fails
		exec.CommandContext(ctx, "lvremove", "-f", fmt.Sprintf("%s/%s", vgName, lvName)).Run()
		return nil, fmt.Errorf("failed to create filesystem: %s: %w", string(output), err)
	}

	// Insert into database
	query := `
		INSERT INTO disk_repositories (
			name, volume_group, logical_volume, size_bytes, used_bytes,
			filesystem_type, is_active, created_by
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
		RETURNING id, created_at, updated_at
	`

	var repo Repository
	err = s.db.QueryRowContext(ctx, query,
		name, vgName, lvName, sizeBytes, 0, "xfs", true, createdBy,
	).Scan(&repo.ID, &repo.CreatedAt, &repo.UpdatedAt)
	if err != nil {
		// Cleanup: remove LV if database insert fails
		exec.CommandContext(ctx, "lvremove", "-f", fmt.Sprintf("%s/%s", vgName, lvName)).Run()
		return nil, fmt.Errorf("failed to save repository to database: %w", err)
	}

	repo.Name = name
	repo.VolumeGroup = vgName
	repo.LogicalVolume = lvName
	repo.SizeBytes = sizeBytes
	repo.UsedBytes = 0
	repo.FilesystemType = "xfs"
	repo.IsActive = true
	repo.WarningThresholdPercent = 80
	repo.CriticalThresholdPercent = 90
	repo.CreatedBy = createdBy

	s.logger.Info("Repository created", "name", name, "size_bytes", sizeBytes)
	return &repo, nil
}

// GetRepository retrieves a repository by ID
func (s *LVMService) GetRepository(ctx context.Context, id string) (*Repository, error) {
	query := `
		SELECT id, name, description, volume_group, logical_volume,
		       size_bytes, used_bytes, filesystem_type, mount_point,
		       is_active, warning_threshold_percent, critical_threshold_percent,
		       created_at, updated_at, created_by
		FROM disk_repositories
		WHERE id = $1
	`

	var repo Repository
	err := s.db.QueryRowContext(ctx, query, id).Scan(
		&repo.ID, &repo.Name, &repo.Description, &repo.VolumeGroup,
		&repo.LogicalVolume, &repo.SizeBytes, &repo.UsedBytes,
		&repo.FilesystemType, &repo.MountPoint, &repo.IsActive,
		&repo.WarningThresholdPercent, &repo.CriticalThresholdPercent,
		&repo.CreatedAt, &repo.UpdatedAt, &repo.CreatedBy,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("repository not found")
		}
		return nil, fmt.Errorf("failed to get repository: %w", err)
	}

	// Refresh usage figures from the actual LVM state
	s.updateRepositoryUsage(ctx, &repo)

	return &repo, nil
}

// ListRepositories lists all repositories
func (s *LVMService) ListRepositories(ctx context.Context) ([]Repository, error) {
	query := `
		SELECT id, name, description, volume_group, logical_volume,
		       size_bytes, used_bytes, filesystem_type, mount_point,
		       is_active, warning_threshold_percent, critical_threshold_percent,
		       created_at, updated_at, created_by
		FROM disk_repositories
		ORDER BY name
	`

	rows, err := s.db.QueryContext(ctx, query)
	if err != nil {
		return nil, fmt.Errorf("failed to list repositories: %w", err)
	}
	defer rows.Close()

	var repos []Repository
	for rows.Next() {
		var repo Repository
		err := rows.Scan(
			&repo.ID, &repo.Name, &repo.Description, &repo.VolumeGroup,
			&repo.LogicalVolume, &repo.SizeBytes, &repo.UsedBytes,
			&repo.FilesystemType, &repo.MountPoint, &repo.IsActive,
			&repo.WarningThresholdPercent, &repo.CriticalThresholdPercent,
			&repo.CreatedAt, &repo.UpdatedAt, &repo.CreatedBy,
		)
		if err != nil {
			s.logger.Error("Failed to scan repository", "error", err)
			continue
		}

		// Refresh usage figures from the actual LVM state
		s.updateRepositoryUsage(ctx, &repo)
		repos = append(repos, repo)
	}

	return repos, rows.Err()
}

// updateRepositoryUsage refreshes repository size and usage from LVM and persists it
func (s *LVMService) updateRepositoryUsage(ctx context.Context, repo *Repository) {
	// Use lvs rather than df, since df requires the filesystem to be mounted.
	// lvs reports the actual LV size and, for thin volumes, a data usage percentage.
	cmd := exec.CommandContext(ctx, "lvs", "--units=b", "--noheadings", "--nosuffix", "-o", "lv_size,data_percent", fmt.Sprintf("%s/%s", repo.VolumeGroup, repo.LogicalVolume))
	output, err := cmd.Output()
	if err == nil {
		fields := strings.Fields(string(output))
		if len(fields) >= 1 {
			if size, err := strconv.ParseInt(fields[0], 10, 64); err == nil {
				repo.SizeBytes = size
			}
		}
		// data_percent is only reported for thin volumes; when present,
		// derive used bytes from it so the DB update below stores real usage
		if len(fields) >= 2 {
			if pct, err := strconv.ParseFloat(fields[1], 64); err == nil {
				repo.UsedBytes = int64(float64(repo.SizeBytes) * pct / 100)
			}
		}
	}

	// Persist the refreshed usage
	s.db.ExecContext(ctx, `
		UPDATE disk_repositories SET used_bytes = $1, updated_at = NOW() WHERE id = $2
	`, repo.UsedBytes, repo.ID)
}

// DeleteRepository deletes a repository
func (s *LVMService) DeleteRepository(ctx context.Context, id string) error {
	repo, err := s.GetRepository(ctx, id)
	if err != nil {
		return err
	}

	if repo.IsActive {
		return fmt.Errorf("cannot delete active repository")
	}

	// Remove logical volume
	cmd := exec.CommandContext(ctx, "lvremove", "-f", fmt.Sprintf("%s/%s", repo.VolumeGroup, repo.LogicalVolume))
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to remove logical volume: %s: %w", string(output), err)
	}

	// Delete from database
	_, err = s.db.ExecContext(ctx, "DELETE FROM disk_repositories WHERE id = $1", id)
	if err != nil {
		return fmt.Errorf("failed to delete repository from database: %w", err)
	}

	s.logger.Info("Repository deleted", "id", id, "name", repo.Name)
	return nil
}
backend/internal/storage/zfs.go (new file, 1012 lines; diff suppressed because it is too large)

backend/internal/storage/zfs_pool_monitor.go (new file, 250 lines):
package storage

import (
	"context"
	"os/exec"
	"regexp"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// ZFSPoolMonitor handles periodic ZFS pool status monitoring and sync to database
type ZFSPoolMonitor struct {
	zfsService *ZFSService
	logger     *logger.Logger
	interval   time.Duration
	stopCh     chan struct{}
}

// NewZFSPoolMonitor creates a new ZFS pool monitor service
func NewZFSPoolMonitor(db *database.DB, log *logger.Logger, interval time.Duration) *ZFSPoolMonitor {
	return &ZFSPoolMonitor{
		zfsService: NewZFSService(db, log),
		logger:     log,
		interval:   interval,
		stopCh:     make(chan struct{}),
	}
}

// Start starts the ZFS pool monitor background service
func (m *ZFSPoolMonitor) Start(ctx context.Context) {
	m.logger.Info("Starting ZFS pool monitor service", "interval", m.interval)
	ticker := time.NewTicker(m.interval)
	defer ticker.Stop()

	// Run initial sync immediately
	m.syncPools(ctx)

	for {
		select {
		case <-ctx.Done():
			m.logger.Info("ZFS pool monitor service stopped")
			return
		case <-m.stopCh:
			m.logger.Info("ZFS pool monitor service stopped")
			return
		case <-ticker.C:
			m.syncPools(ctx)
		}
	}
}

// Stop stops the ZFS pool monitor service
func (m *ZFSPoolMonitor) Stop() {
	close(m.stopCh)
}

// syncPools syncs ZFS pool status from the system to the database
func (m *ZFSPoolMonitor) syncPools(ctx context.Context) {
	m.logger.Debug("Running periodic ZFS pool sync")

	// Get all pools from the system
	systemPools, err := m.getSystemPools(ctx)
	if err != nil {
		m.logger.Error("Failed to get system pools", "error", err)
		return
	}

	m.logger.Debug("Found pools in system", "count", len(systemPools))

	// Update each pool in the database
	for poolName, poolInfo := range systemPools {
		if err := m.updatePoolStatus(ctx, poolName, poolInfo); err != nil {
			m.logger.Error("Failed to update pool status", "pool", poolName, "error", err)
		}
	}

	// Reconcile pools that no longer exist on the system
	if err := m.markMissingPoolsOffline(ctx, systemPools); err != nil {
		m.logger.Error("Failed to mark missing pools offline", "error", err)
	}

	m.logger.Debug("ZFS pool sync completed")
}

// PoolInfo represents pool information from the system
type PoolInfo struct {
	Name      string
	SizeBytes int64
	UsedBytes int64
	Health    string // online, degraded, faulted, offline, unavailable, removed
}

// getSystemPools gets all pools from the ZFS system
func (m *ZFSPoolMonitor) getSystemPools(ctx context.Context) (map[string]PoolInfo, error) {
	pools := make(map[string]PoolInfo)

	// Get pool list
	cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name,size,alloc,free,health")
	output, err := cmd.Output()
	if err != nil {
		return nil, err
	}

	lines := strings.Split(strings.TrimSpace(string(output)), "\n")
	for _, line := range lines {
		if line == "" {
			continue
		}

		fields := strings.Fields(line)
		if len(fields) < 5 {
			continue
		}

		poolName := fields[0]
		sizeStr := fields[1]
		allocStr := fields[2]
		health := fields[4]

		// Parse size (e.g., "95.5G" -> bytes)
		sizeBytes, err := parseSize(sizeStr)
		if err != nil {
			m.logger.Warn("Failed to parse pool size", "pool", poolName, "size", sizeStr, "error", err)
			continue
		}

		// Parse allocated (used) size
		usedBytes, err := parseSize(allocStr)
		if err != nil {
			m.logger.Warn("Failed to parse pool used size", "pool", poolName, "alloc", allocStr, "error", err)
			continue
		}

		// Normalize health status to lowercase
		healthNormalized := strings.ToLower(health)

		pools[poolName] = PoolInfo{
			Name:      poolName,
			SizeBytes: sizeBytes,
			UsedBytes: usedBytes,
			Health:    healthNormalized,
		}
	}

	return pools, nil
}

// parseSize parses a size string (e.g., "95.5G", "1.2T") to bytes
func parseSize(sizeStr string) (int64, error) {
	// Remove any whitespace
	sizeStr = strings.TrimSpace(sizeStr)

	// Match patterns like "95.5G", "1.2T", "512M"; "P" is included so the
	// regexp covers every unit the multiplier switch below handles
	re := regexp.MustCompile(`^([\d.]+)([KMGTP]?)$`)
	matches := re.FindStringSubmatch(strings.ToUpper(sizeStr))
	if len(matches) != 3 {
		return 0, nil // Treat unparsable sizes as zero rather than erroring
	}

	value, err := strconv.ParseFloat(matches[1], 64)
	if err != nil {
		return 0, err
	}

	unit := matches[2]
	var multiplier int64 = 1

	switch unit {
	case "K":
		multiplier = 1024
	case "M":
		multiplier = 1024 * 1024
	case "G":
		multiplier = 1024 * 1024 * 1024
	case "T":
		multiplier = 1024 * 1024 * 1024 * 1024
	case "P":
		multiplier = 1024 * 1024 * 1024 * 1024 * 1024
	}

	return int64(value * float64(multiplier)), nil
}
// updatePoolStatus updates pool status in the database
func (m *ZFSPoolMonitor) updatePoolStatus(ctx context.Context, poolName string, poolInfo PoolInfo) error {
	// Get pool from the database by name
	var poolID string
	err := m.zfsService.db.QueryRowContext(ctx,
		"SELECT id FROM zfs_pools WHERE name = $1",
		poolName,
	).Scan(&poolID)

	if err != nil {
		// Pool not in database, skip (might have been created outside of Calypso)
		m.logger.Debug("Pool not found in database, skipping", "pool", poolName)
		return nil
	}

	// Update pool status, size, and used bytes
	_, err = m.zfsService.db.ExecContext(ctx, `
		UPDATE zfs_pools SET
			size_bytes = $1,
			used_bytes = $2,
			health_status = $3,
			updated_at = NOW()
		WHERE id = $4
	`, poolInfo.SizeBytes, poolInfo.UsedBytes, poolInfo.Health, poolID)

	if err != nil {
		return err
	}

	m.logger.Debug("Updated pool status", "pool", poolName, "health", poolInfo.Health, "size", poolInfo.SizeBytes, "used", poolInfo.UsedBytes)
	return nil
}

// markMissingPoolsOffline removes pools that exist in the database but are no
// longer present on the system. Despite the name, destroyed pools are deleted
// from the database rather than merely marked offline.
func (m *ZFSPoolMonitor) markMissingPoolsOffline(ctx context.Context, systemPools map[string]PoolInfo) error {
	// Get all active pools from the database
	rows, err := m.zfsService.db.QueryContext(ctx, "SELECT id, name FROM zfs_pools WHERE is_active = true")
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var poolID, poolName string
		if err := rows.Scan(&poolID, &poolName); err != nil {
			continue
		}

		// Check if the pool still exists on the system
		if _, exists := systemPools[poolName]; !exists {
			// Pool was destroyed outside the database's knowledge - remove it
			m.logger.Info("Pool not found in system, removing from database", "pool", poolName)
			_, err = m.zfsService.db.ExecContext(ctx, "DELETE FROM zfs_pools WHERE id = $1", poolID)
			if err != nil {
				m.logger.Warn("Failed to delete missing pool from database", "pool", poolName, "error", err)
			} else {
				m.logger.Info("Removed missing pool from database", "pool", poolName)
			}
		}
	}

	return rows.Err()
}
|
||||
backend/internal/system/handler.go (new file, 282 lines)
@@ -0,0 +1,282 @@
package system

import (
	"context"
	"net/http"
	"strconv"
	"time"

	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/tasks"
	"github.com/gin-gonic/gin"
)

// Handler handles system management API requests
type Handler struct {
	service    *Service
	taskEngine *tasks.Engine
	logger     *logger.Logger
}

// NewHandler creates a new system handler
func NewHandler(log *logger.Logger, taskEngine *tasks.Engine) *Handler {
	return &Handler{
		service:    NewService(log),
		taskEngine: taskEngine,
		logger:     log,
	}
}

// ListServices lists all system services
func (h *Handler) ListServices(c *gin.Context) {
	services, err := h.service.ListServices(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list services", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list services"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"services": services})
}

// GetServiceStatus retrieves the status of a specific service
func (h *Handler) GetServiceStatus(c *gin.Context) {
	serviceName := c.Param("name")

	status, err := h.service.GetServiceStatus(c.Request.Context(), serviceName)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "service not found"})
		return
	}

	c.JSON(http.StatusOK, status)
}

// RestartService restarts a system service
func (h *Handler) RestartService(c *gin.Context) {
	serviceName := c.Param("name")

	if err := h.service.RestartService(c.Request.Context(), serviceName); err != nil {
		h.logger.Error("Failed to restart service", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "service restarted successfully"})
}

// GetServiceLogs retrieves journald logs for a service
func (h *Handler) GetServiceLogs(c *gin.Context) {
	serviceName := c.Param("name")
	linesStr := c.DefaultQuery("lines", "100")
	lines, err := strconv.Atoi(linesStr)
	if err != nil {
		lines = 100
	}

	logs, err := h.service.GetJournalLogs(c.Request.Context(), serviceName, lines)
	if err != nil {
		h.logger.Error("Failed to get logs", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get logs"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"logs": logs})
}

// GenerateSupportBundle generates a diagnostic support bundle
func (h *Handler) GenerateSupportBundle(c *gin.Context) {
	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeSupportBundle, userID.(string), map[string]interface{}{
			"operation": "generate_support_bundle",
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run bundle generation in the background. Use a background context here:
	// c.Request.Context() is cancelled as soon as the HTTP request returns,
	// which would abort the bundle generation mid-flight.
	go func() {
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 50, "Collecting system information...")

		outputPath := "/tmp/calypso-support-bundle-" + taskID
		if err := h.service.GenerateSupportBundle(ctx, outputPath); err != nil {
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		h.taskEngine.UpdateProgress(ctx, taskID, 100, "Support bundle generated")
		h.taskEngine.CompleteTask(ctx, taskID, "Support bundle generated successfully")
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}
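The async-task flow used here (create task, return 202 Accepted immediately, update progress from a background goroutine) can be sketched with a toy engine. All names below are illustrative stand-ins, not the real `tasks.Engine` API:

```go
package main

import (
	"fmt"
	"sync"
)

// toyEngine stands in for the real task engine: it only records progress.
type toyEngine struct {
	mu       sync.Mutex
	progress map[string]int
}

func (e *toyEngine) update(taskID string, pct int) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.progress[taskID] = pct
}

func (e *toyEngine) get(taskID string) int {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.progress[taskID]
}

func main() {
	engine := &toyEngine{progress: map[string]int{}}
	taskID := "task-1"
	done := make(chan struct{})

	// The handler would return 202 here while this goroutine keeps working.
	go func() {
		defer close(done)
		engine.update(taskID, 50)  // "Collecting system information..."
		engine.update(taskID, 100) // "Support bundle generated"
	}()

	<-done
	fmt.Println("task", taskID, "progress:", engine.get(taskID)) // progress: 100
}
```

Polling `GET /tasks/:id` on the client side then observes the progress values until completion.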
// ListNetworkInterfaces lists all network interfaces
func (h *Handler) ListNetworkInterfaces(c *gin.Context) {
	interfaces, err := h.service.ListNetworkInterfaces(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list network interfaces", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list network interfaces"})
		return
	}

	// Ensure we return an empty array instead of null
	if interfaces == nil {
		interfaces = []NetworkInterface{}
	}

	c.JSON(http.StatusOK, gin.H{"interfaces": interfaces})
}

// SaveNTPSettings saves NTP configuration to the OS
func (h *Handler) SaveNTPSettings(c *gin.Context) {
	var settings NTPSettings
	if err := c.ShouldBindJSON(&settings); err != nil {
		h.logger.Error("Invalid request body", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
		return
	}

	// Validate timezone
	if settings.Timezone == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "timezone is required"})
		return
	}

	// Validate NTP servers
	if len(settings.NTPServers) == 0 {
		c.JSON(http.StatusBadRequest, gin.H{"error": "at least one NTP server is required"})
		return
	}

	if err := h.service.SaveNTPSettings(c.Request.Context(), settings); err != nil {
		h.logger.Error("Failed to save NTP settings", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "NTP settings saved successfully"})
}

// GetNTPSettings retrieves the current NTP configuration
func (h *Handler) GetNTPSettings(c *gin.Context) {
	settings, err := h.service.GetNTPSettings(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to get NTP settings", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get NTP settings"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"settings": settings})
}

// UpdateNetworkInterface updates a network interface configuration
func (h *Handler) UpdateNetworkInterface(c *gin.Context) {
	ifaceName := c.Param("name")
	if ifaceName == "" {
		c.JSON(http.StatusBadRequest, gin.H{"error": "interface name is required"})
		return
	}

	var req struct {
		IPAddress string `json:"ip_address" binding:"required"`
		Subnet    string `json:"subnet" binding:"required"`
		Gateway   string `json:"gateway,omitempty"`
		DNS1      string `json:"dns1,omitempty"`
		DNS2      string `json:"dns2,omitempty"`
		Role      string `json:"role,omitempty"`
	}
	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid request body", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
		return
	}

	// Convert to service request
	serviceReq := UpdateNetworkInterfaceRequest{
		IPAddress: req.IPAddress,
		Subnet:    req.Subnet,
		Gateway:   req.Gateway,
		DNS1:      req.DNS1,
		DNS2:      req.DNS2,
		Role:      req.Role,
	}

	updatedIface, err := h.service.UpdateNetworkInterface(c.Request.Context(), ifaceName, serviceReq)
	if err != nil {
		h.logger.Error("Failed to update network interface", "interface", ifaceName, "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"interface": updatedIface})
}

// GetSystemLogs retrieves recent system logs
func (h *Handler) GetSystemLogs(c *gin.Context) {
	limitStr := c.DefaultQuery("limit", "30")
	limit, err := strconv.Atoi(limitStr)
	if err != nil || limit <= 0 || limit > 100 {
		limit = 30
	}

	logs, err := h.service.GetSystemLogs(c.Request.Context(), limit)
	if err != nil {
		h.logger.Error("Failed to get system logs", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get system logs"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"logs": logs})
}

// GetNetworkThroughput retrieves network throughput data from RRD
func (h *Handler) GetNetworkThroughput(c *gin.Context) {
	// Default to the last 5 minutes
	durationStr := c.DefaultQuery("duration", "5m")
	duration, err := time.ParseDuration(durationStr)
	if err != nil {
		duration = 5 * time.Minute
	}

	data, err := h.service.GetNetworkThroughput(c.Request.Context(), duration)
	if err != nil {
		h.logger.Error("Failed to get network throughput", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get network throughput"})
		return
	}

	c.JSON(http.StatusOK, gin.H{"data": data})
}
// ExecuteCommand executes a shell command
func (h *Handler) ExecuteCommand(c *gin.Context) {
	var req struct {
		Command string `json:"command" binding:"required"`
		Service string `json:"service,omitempty"` // Optional: system, scst, storage, backup, tape
	}

	if err := c.ShouldBindJSON(&req); err != nil {
		h.logger.Error("Invalid request body", "error", err)
		c.JSON(http.StatusBadRequest, gin.H{"error": "command is required"})
		return
	}

	// Execute command based on service context
	output, err := h.service.ExecuteCommand(c.Request.Context(), req.Command, req.Service)
	if err != nil {
		h.logger.Error("Failed to execute command", "error", err, "command", req.Command, "service", req.Service)
		c.JSON(http.StatusInternalServerError, gin.H{
			"error":  err.Error(),
			"output": output, // Include output even on error
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{"output": output})
}
backend/internal/system/rrd.go (new file, 292 lines)
@@ -0,0 +1,292 @@
package system

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/logger"
)

// RRDService handles RRD database operations for network monitoring
type RRDService struct {
	logger        *logger.Logger
	rrdDir        string
	interfaceName string
}

// NewRRDService creates a new RRD service
func NewRRDService(log *logger.Logger, rrdDir string, interfaceName string) *RRDService {
	return &RRDService{
		logger:        log,
		rrdDir:        rrdDir,
		interfaceName: interfaceName,
	}
}

// NetworkStats represents network interface statistics
type NetworkStats struct {
	Interface string    `json:"interface"`
	RxBytes   uint64    `json:"rx_bytes"`
	TxBytes   uint64    `json:"tx_bytes"`
	RxPackets uint64    `json:"rx_packets"`
	TxPackets uint64    `json:"tx_packets"`
	Timestamp time.Time `json:"timestamp"`
}

// GetNetworkStats reads network statistics from /proc/net/dev
func (r *RRDService) GetNetworkStats(ctx context.Context, interfaceName string) (*NetworkStats, error) {
	data, err := os.ReadFile("/proc/net/dev")
	if err != nil {
		return nil, fmt.Errorf("failed to read /proc/net/dev: %w", err)
	}

	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, interfaceName+":") {
			continue
		}

		// Parse line: interface: rx_bytes rx_packets ... tx_bytes tx_packets ...
		parts := strings.Fields(line)
		if len(parts) < 17 {
			continue
		}

		// Extract statistics
		// Format: interface: rx_bytes rx_packets rx_errs rx_drop ... tx_bytes tx_packets ...
		rxBytes, err := strconv.ParseUint(parts[1], 10, 64)
		if err != nil {
			continue
		}
		rxPackets, err := strconv.ParseUint(parts[2], 10, 64)
		if err != nil {
			continue
		}
		txBytes, err := strconv.ParseUint(parts[9], 10, 64)
		if err != nil {
			continue
		}
		txPackets, err := strconv.ParseUint(parts[10], 10, 64)
		if err != nil {
			continue
		}

		return &NetworkStats{
			Interface: interfaceName,
			RxBytes:   rxBytes,
			TxBytes:   txBytes,
			RxPackets: rxPackets,
			TxPackets: txPackets,
			Timestamp: time.Now(),
		}, nil
	}

	return nil, fmt.Errorf("interface %s not found in /proc/net/dev", interfaceName)
}
// InitializeRRD creates the RRD database if it doesn't exist
func (r *RRDService) InitializeRRD(ctx context.Context) error {
	// Ensure RRD directory exists
	if err := os.MkdirAll(r.rrdDir, 0755); err != nil {
		return fmt.Errorf("failed to create RRD directory: %w", err)
	}

	rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", r.interfaceName))

	// Check if RRD file already exists
	if _, err := os.Stat(rrdFile); err == nil {
		r.logger.Info("RRD file already exists", "file", rrdFile)
		return nil
	}

	// Create RRD database
	// Use COUNTER type to track cumulative bytes; RRD calculates the rate automatically.
	// DS:inbound:COUNTER:20:0:U  - inbound cumulative bytes, 20s heartbeat
	// DS:outbound:COUNTER:20:0:U - outbound cumulative bytes, 20s heartbeat
	// RRA:AVERAGE:0.5:1:600  - 1 sample per row, 600 rows (100 minutes at 10s resolution)
	// RRA:AVERAGE:0.5:6:700  - 6 samples per row, 700 rows (~11.7 hours at 1min resolution)
	// RRA:AVERAGE:0.5:60:730 - 60 samples per row, 730 rows (~5 days at 10min resolution)
	// RRA:MAX:... - Max values for the same intervals
	cmd := exec.CommandContext(ctx, "rrdtool", "create", rrdFile,
		"--step", "10", // 10 second step
		"DS:inbound:COUNTER:20:0:U",  // Inbound cumulative bytes, 20s heartbeat
		"DS:outbound:COUNTER:20:0:U", // Outbound cumulative bytes, 20s heartbeat
		"RRA:AVERAGE:0.5:1:600",      // 10s resolution, 100 minutes
		"RRA:AVERAGE:0.5:6:700",      // 1min resolution, ~11.7 hours
		"RRA:AVERAGE:0.5:60:730",     // 10min resolution, ~5 days
		"RRA:MAX:0.5:1:600",          // Max values
		"RRA:MAX:0.5:6:700",
		"RRA:MAX:0.5:60:730",
	)

	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to create RRD: %s: %w", string(output), err)
	}

	r.logger.Info("RRD database created", "file", rrdFile)
	return nil
}
// UpdateRRD updates the RRD database with new network statistics
func (r *RRDService) UpdateRRD(ctx context.Context, stats *NetworkStats) error {
	rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", stats.Interface))

	// Update with cumulative byte counts (COUNTER type);
	// RRD automatically derives the rate (bytes per second).
	cmd := exec.CommandContext(ctx, "rrdtool", "update", rrdFile,
		fmt.Sprintf("%d:%d:%d", stats.Timestamp.Unix(), stats.RxBytes, stats.TxBytes),
	)

	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to update RRD: %s: %w", string(output), err)
	}

	return nil
}

// FetchRRDData fetches data from the RRD database for graphing
func (r *RRDService) FetchRRDData(ctx context.Context, startTime time.Time, endTime time.Time, resolution string) ([]NetworkDataPoint, error) {
	rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", r.interfaceName))

	// Check if RRD file exists
	if _, err := os.Stat(rrdFile); os.IsNotExist(err) {
		return []NetworkDataPoint{}, nil
	}

	// Fetch data using rrdtool fetch
	// Use AVERAGE consolidation with the requested resolution
	cmd := exec.CommandContext(ctx, "rrdtool", "fetch", rrdFile,
		"AVERAGE",
		"--start", fmt.Sprintf("%d", startTime.Unix()),
		"--end", fmt.Sprintf("%d", endTime.Unix()),
		"--resolution", resolution,
	)

	output, err := cmd.CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("failed to fetch RRD data: %s: %w", string(output), err)
	}

	// Parse rrdtool fetch output. Format:
	//             inbound    outbound
	// 1234567890: 1.2345678901e+06 2.3456789012e+06
	points := []NetworkDataPoint{}
	lines := strings.Split(string(output), "\n")

	// Skip header lines
	dataStart := false
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}

		// Check if this is the start of the data section
		if strings.Contains(line, "inbound") && strings.Contains(line, "outbound") {
			dataStart = true
			continue
		}

		if !dataStart {
			continue
		}

		// Parse data line: timestamp: inbound_value outbound_value
		parts := strings.Fields(line)
		if len(parts) < 3 {
			continue
		}

		// Parse timestamp
		timestampStr := strings.TrimSuffix(parts[0], ":")
		timestamp, err := strconv.ParseInt(timestampStr, 10, 64)
		if err != nil {
			continue
		}

		// Parse inbound (bytes per second from COUNTER, convert to Mbps)
		inboundStr := parts[1]
		inbound, err := strconv.ParseFloat(inboundStr, 64)
		if err != nil || inbound < 0 {
			// Skip NaN or negative values
			continue
		}
		// Convert bytes per second to Mbps (bytes/s * 8 / 1000000)
		inboundMbps := inbound * 8 / 1000000

		// Parse outbound
		outboundStr := parts[2]
		outbound, err := strconv.ParseFloat(outboundStr, 64)
		if err != nil || outbound < 0 {
			// Skip NaN or negative values
			continue
		}
		outboundMbps := outbound * 8 / 1000000

		// Format time as MM:SS
		t := time.Unix(timestamp, 0)
		timeStr := fmt.Sprintf("%02d:%02d", t.Minute(), t.Second())

		points = append(points, NetworkDataPoint{
			Time:     timeStr,
			Inbound:  inboundMbps,
			Outbound: outboundMbps,
		})
	}

	return points, nil
}
// NetworkDataPoint represents a single data point for graphing
type NetworkDataPoint struct {
	Time     string  `json:"time"`
	Inbound  float64 `json:"inbound"`  // Mbps
	Outbound float64 `json:"outbound"` // Mbps
}

// StartCollector starts a background goroutine to periodically collect stats and update RRD
func (r *RRDService) StartCollector(ctx context.Context, interval time.Duration) error {
	// Initialize RRD if needed
	if err := r.InitializeRRD(ctx); err != nil {
		return fmt.Errorf("failed to initialize RRD: %w", err)
	}

	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				// Get current stats
				stats, err := r.GetNetworkStats(ctx, r.interfaceName)
				if err != nil {
					r.logger.Warn("Failed to get network stats", "error", err)
					continue
				}

				// Update RRD with cumulative byte counts;
				// the RRD COUNTER type automatically derives the rate
				if err := r.UpdateRRD(ctx, stats); err != nil {
					r.logger.Warn("Failed to update RRD", "error", err)
				}
			}
		}
	}()

	return nil
}
backend/internal/system/service.go (new file, 1013 lines)
File diff suppressed because it is too large

backend/internal/system/terminal.go (new file, 328 lines)
@@ -0,0 +1,328 @@
package system

import (
	"encoding/json"
	"io"
	"net/http"
	"os"
	"os/exec"
	"os/user"
	"sync"
	"syscall"
	"time"

	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/creack/pty"
	"github.com/gin-gonic/gin"
	"github.com/gorilla/websocket"
)

const (
	// WebSocket timeouts
	writeWait  = 10 * time.Second
	pongWait   = 60 * time.Second
	pingPeriod = (pongWait * 9) / 10
)

var upgrader = websocket.Upgrader{
	ReadBufferSize:  4096,
	WriteBufferSize: 4096,
	CheckOrigin: func(r *http.Request) bool {
		// Allow all origins - in production, validate against allowed domains
		return true
	},
}

// TerminalSession manages a single terminal session
type TerminalSession struct {
	conn     *websocket.Conn
	pty      *os.File
	cmd      *exec.Cmd
	logger   *logger.Logger
	mu       sync.RWMutex
	closed   bool
	username string
	done     chan struct{}
}

// HandleTerminalWebSocket handles the WebSocket connection for the terminal
func HandleTerminalWebSocket(c *gin.Context, log *logger.Logger) {
	// Verify authentication
	userID, exists := c.Get("user_id")
	if !exists {
		log.Warn("Terminal WebSocket: unauthorized access", "ip", c.ClientIP())
		c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
		return
	}

	username, _ := c.Get("username")
	if username == nil {
		username = userID
	}

	log.Info("Terminal WebSocket: connection attempt", "username", username, "ip", c.ClientIP())

	// Upgrade connection
	conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
	if err != nil {
		log.Error("Terminal WebSocket: upgrade failed", "error", err)
		return
	}

	log.Info("Terminal WebSocket: connection upgraded", "username", username)

	// Create session
	session := &TerminalSession{
		conn:     conn,
		logger:   log,
		username: username.(string),
		done:     make(chan struct{}),
	}

	// Start terminal
	if err := session.startPTY(); err != nil {
		log.Error("Terminal WebSocket: failed to start PTY", "error", err, "username", username)
		session.sendError(err.Error())
		session.close()
		return
	}

	// Handle messages and PTY output
	go session.handleRead()
	go session.handleWrite()
}
// startPTY starts the PTY session
func (s *TerminalSession) startPTY() error {
	// Get user info
	currentUser, err := user.Lookup(s.username)
	if err != nil {
		// Fall back to the user the backend process runs as
		currentUser, err = user.Current()
		if err != nil {
			return err
		}
	}

	// Determine shell
	shell := os.Getenv("SHELL")
	if shell == "" {
		shell = "/bin/bash"
	}

	// Create command
	s.cmd = exec.Command(shell)
	s.cmd.Env = append(os.Environ(),
		"TERM=xterm-256color",
		"HOME="+currentUser.HomeDir,
		"USER="+currentUser.Username,
		"USERNAME="+currentUser.Username,
	)
	s.cmd.Dir = currentUser.HomeDir

	// Start PTY
	ptyFile, err := pty.Start(s.cmd)
	if err != nil {
		return err
	}

	s.pty = ptyFile

	// Set initial size
	pty.Setsize(ptyFile, &pty.Winsize{
		Rows: 24,
		Cols: 80,
	})

	return nil
}

// handleRead handles incoming WebSocket messages
func (s *TerminalSession) handleRead() {
	defer s.close()

	// Set read deadline and pong handler
	s.conn.SetReadDeadline(time.Now().Add(pongWait))
	s.conn.SetPongHandler(func(string) error {
		s.conn.SetReadDeadline(time.Now().Add(pongWait))
		return nil
	})

	for {
		select {
		case <-s.done:
			return
		default:
			messageType, data, err := s.conn.ReadMessage()
			if err != nil {
				if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway, websocket.CloseAbnormalClosure) {
					s.logger.Error("Terminal WebSocket: read error", "error", err)
				}
				return
			}

			// Handle binary messages (raw input)
			if messageType == websocket.BinaryMessage {
				s.writeToPTY(data)
				continue
			}

			// Handle text messages (JSON commands)
			if messageType == websocket.TextMessage {
				var msg map[string]interface{}
				if err := json.Unmarshal(data, &msg); err != nil {
					continue
				}

				switch msg["type"] {
				case "input":
					if input, ok := msg["data"].(string); ok {
						s.writeToPTY([]byte(input))
					}

				case "resize":
					if cols, ok1 := msg["cols"].(float64); ok1 {
						if rows, ok2 := msg["rows"].(float64); ok2 {
							s.resizePTY(uint16(cols), uint16(rows))
						}
					}

				case "ping":
					s.writeWS(websocket.TextMessage, []byte(`{"type":"pong"}`))
				}
			}
		}
	}
}
// handleWrite pumps PTY output to the WebSocket and sends keepalive pings
func (s *TerminalSession) handleWrite() {
	defer s.close()

	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()

	// Read from the PTY in its own goroutine: pty.Read blocks, and doing it
	// inline in the select's default case would starve the ping ticker on an
	// idle terminal.
	output := make(chan []byte)
	go func() {
		defer close(output)
		buffer := make([]byte, 4096)
		for {
			n, err := s.pty.Read(buffer)
			if err != nil {
				if err != io.EOF {
					s.logger.Error("Terminal WebSocket: PTY read error", "error", err)
				}
				return
			}
			if n > 0 {
				// Copy the chunk: buffer is reused on the next Read
				data := make([]byte, n)
				copy(data, buffer[:n])
				select {
				case output <- data:
				case <-s.done:
					return
				}
			}
		}
	}()

	for {
		select {
		case <-s.done:
			return
		case <-ticker.C:
			// Send ping
			if err := s.writeWS(websocket.PingMessage, nil); err != nil {
				return
			}
		case data, ok := <-output:
			if !ok {
				return
			}
			// Write binary data to WebSocket
			if err := s.writeWS(websocket.BinaryMessage, data); err != nil {
				return
			}
		}
	}
}

// writeToPTY writes data to the PTY
func (s *TerminalSession) writeToPTY(data []byte) {
	s.mu.RLock()
	closed := s.closed
	ptyFile := s.pty
	s.mu.RUnlock()

	if closed || ptyFile == nil {
		return
	}

	if _, err := ptyFile.Write(data); err != nil {
		s.logger.Error("Terminal WebSocket: PTY write error", "error", err)
	}
}

// resizePTY resizes the PTY
func (s *TerminalSession) resizePTY(cols, rows uint16) {
	s.mu.RLock()
	closed := s.closed
	ptyFile := s.pty
	s.mu.RUnlock()

	if closed || ptyFile == nil {
		return
	}

	// Use pty.Setsize from the package, not a method on the file
	pty.Setsize(ptyFile, &pty.Winsize{
		Cols: cols,
		Rows: rows,
	})
}

// writeWS writes a message to the WebSocket. It takes the full lock to
// serialize concurrent writers (handleRead's pong reply, handleWrite's pings
// and data): gorilla/websocket supports at most one concurrent writer.
func (s *TerminalSession) writeWS(messageType int, data []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.closed || s.conn == nil {
		return io.ErrClosedPipe
	}

	s.conn.SetWriteDeadline(time.Now().Add(writeWait))
	return s.conn.WriteMessage(messageType, data)
}

// sendError sends an error message to the client
func (s *TerminalSession) sendError(errMsg string) {
	msg := map[string]interface{}{
		"type":  "error",
		"error": errMsg,
	}
	data, _ := json.Marshal(msg)
	s.writeWS(websocket.TextMessage, data)
}

// close closes the terminal session
func (s *TerminalSession) close() {
	s.mu.Lock()
	defer s.mu.Unlock()

	if s.closed {
		return
	}

	s.closed = true
	close(s.done)

	// Close PTY
	if s.pty != nil {
		s.pty.Close()
	}

	// Kill process: try SIGTERM first, then SIGKILL
	if s.cmd != nil && s.cmd.Process != nil {
		s.cmd.Process.Signal(syscall.SIGTERM)
		time.Sleep(100 * time.Millisecond)
		if s.cmd.ProcessState == nil || !s.cmd.ProcessState.Exited() {
			s.cmd.Process.Kill()
		}
	}

	// Close WebSocket
	if s.conn != nil {
		s.conn.Close()
	}

	s.logger.Info("Terminal WebSocket: session closed", "username", s.username)
}
backend/internal/tape_physical/handler.go (new file, 477 lines)
@@ -0,0 +1,477 @@
|
||||
```go
package tape_physical

import (
	"context"
	"database/sql"
	"fmt"
	"net/http"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/tasks"
	"github.com/gin-gonic/gin"
)

// Handler handles physical tape library API requests
type Handler struct {
	service    *Service
	taskEngine *tasks.Engine
	db         *database.DB
	logger     *logger.Logger
}

// NewHandler creates a new physical tape handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
	return &Handler{
		service:    NewService(db, log),
		taskEngine: tasks.NewEngine(db, log),
		db:         db,
		logger:     log,
	}
}

// ListLibraries lists all physical tape libraries
func (h *Handler) ListLibraries(c *gin.Context) {
	query := `
		SELECT id, name, serial_number, vendor, model,
		       changer_device_path, changer_stable_path,
		       slot_count, drive_count, is_active,
		       discovered_at, last_inventory_at, created_at, updated_at
		FROM physical_tape_libraries
		ORDER BY name
	`

	rows, err := h.db.QueryContext(c.Request.Context(), query)
	if err != nil {
		h.logger.Error("Failed to list libraries", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list libraries"})
		return
	}
	defer rows.Close()

	var libraries []TapeLibrary
	for rows.Next() {
		var lib TapeLibrary
		var lastInventory sql.NullTime
		err := rows.Scan(
			&lib.ID, &lib.Name, &lib.SerialNumber, &lib.Vendor, &lib.Model,
			&lib.ChangerDevicePath, &lib.ChangerStablePath,
			&lib.SlotCount, &lib.DriveCount, &lib.IsActive,
			&lib.DiscoveredAt, &lastInventory, &lib.CreatedAt, &lib.UpdatedAt,
		)
		if err != nil {
			h.logger.Error("Failed to scan library", "error", err)
			continue
		}
		if lastInventory.Valid {
			lib.LastInventoryAt = &lastInventory.Time
		}
		libraries = append(libraries, lib)
	}

	c.JSON(http.StatusOK, gin.H{"libraries": libraries})
}

// GetLibrary retrieves a library by ID
func (h *Handler) GetLibrary(c *gin.Context) {
	libraryID := c.Param("id")

	query := `
		SELECT id, name, serial_number, vendor, model,
		       changer_device_path, changer_stable_path,
		       slot_count, drive_count, is_active,
		       discovered_at, last_inventory_at, created_at, updated_at
		FROM physical_tape_libraries
		WHERE id = $1
	`

	var lib TapeLibrary
	var lastInventory sql.NullTime
	err := h.db.QueryRowContext(c.Request.Context(), query, libraryID).Scan(
		&lib.ID, &lib.Name, &lib.SerialNumber, &lib.Vendor, &lib.Model,
		&lib.ChangerDevicePath, &lib.ChangerStablePath,
		&lib.SlotCount, &lib.DriveCount, &lib.IsActive,
		&lib.DiscoveredAt, &lastInventory, &lib.CreatedAt, &lib.UpdatedAt,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			c.JSON(http.StatusNotFound, gin.H{"error": "library not found"})
			return
		}
		h.logger.Error("Failed to get library", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get library"})
		return
	}

	if lastInventory.Valid {
		lib.LastInventoryAt = &lastInventory.Time
	}

	// Get drives
	drives, _ := h.GetLibraryDrives(c, libraryID)

	// Get slots
	slots, _ := h.GetLibrarySlots(c, libraryID)

	c.JSON(http.StatusOK, gin.H{
		"library": lib,
		"drives":  drives,
		"slots":   slots,
	})
}

// DiscoverLibraries discovers physical tape libraries (async)
func (h *Handler) DiscoverLibraries(c *gin.Context) {
	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeRescan, userID.(string), map[string]interface{}{
			"operation": "discover_tape_libraries",
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run discovery in background
	go func() {
		// Detached context: c.Request.Context() is canceled as soon as the
		// handler returns, which would abort this background task.
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 30, "Discovering tape libraries...")

		libraries, err := h.service.DiscoverLibraries(ctx)
		if err != nil {
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		h.taskEngine.UpdateProgress(ctx, taskID, 60, "Syncing libraries to database...")

		// Sync each library to database
		for _, lib := range libraries {
			if err := h.service.SyncLibraryToDatabase(ctx, &lib); err != nil {
				h.logger.Warn("Failed to sync library", "library", lib.Name, "error", err)
				continue
			}

			// Discover drives for this library
			if lib.ChangerDevicePath != "" {
				drives, err := h.service.DiscoverDrives(ctx, lib.ID, lib.ChangerDevicePath)
				if err == nil {
					// Sync drives to database
					for _, drive := range drives {
						h.syncDriveToDatabase(ctx, &drive)
					}
				}
			}
		}

		h.taskEngine.UpdateProgress(ctx, taskID, 100, "Discovery completed")
		h.taskEngine.CompleteTask(ctx, taskID, "Tape libraries discovered successfully")
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

// GetLibraryDrives lists drives for a library
func (h *Handler) GetLibraryDrives(c *gin.Context, libraryID string) ([]TapeDrive, error) {
	query := `
		SELECT id, library_id, drive_number, device_path, stable_path,
		       vendor, model, serial_number, drive_type, status,
		       current_tape_barcode, is_active, created_at, updated_at
		FROM physical_tape_drives
		WHERE library_id = $1
		ORDER BY drive_number
	`

	rows, err := h.db.QueryContext(c.Request.Context(), query, libraryID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var drives []TapeDrive
	for rows.Next() {
		var drive TapeDrive
		var barcode sql.NullString
		err := rows.Scan(
			&drive.ID, &drive.LibraryID, &drive.DriveNumber, &drive.DevicePath, &drive.StablePath,
			&drive.Vendor, &drive.Model, &drive.SerialNumber, &drive.DriveType, &drive.Status,
			&barcode, &drive.IsActive, &drive.CreatedAt, &drive.UpdatedAt,
		)
		if err != nil {
			h.logger.Error("Failed to scan drive", "error", err)
			continue
		}
		if barcode.Valid {
			drive.CurrentTapeBarcode = barcode.String
		}
		drives = append(drives, drive)
	}

	return drives, rows.Err()
}

// GetLibrarySlots lists slots for a library
func (h *Handler) GetLibrarySlots(c *gin.Context, libraryID string) ([]TapeSlot, error) {
	query := `
		SELECT id, library_id, slot_number, barcode, tape_present,
		       tape_type, last_updated_at
		FROM physical_tape_slots
		WHERE library_id = $1
		ORDER BY slot_number
	`

	rows, err := h.db.QueryContext(c.Request.Context(), query, libraryID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var slots []TapeSlot
	for rows.Next() {
		var slot TapeSlot
		err := rows.Scan(
			&slot.ID, &slot.LibraryID, &slot.SlotNumber, &slot.Barcode,
			&slot.TapePresent, &slot.TapeType, &slot.LastUpdatedAt,
		)
		if err != nil {
			h.logger.Error("Failed to scan slot", "error", err)
			continue
		}
		slots = append(slots, slot)
	}

	return slots, rows.Err()
}

// PerformInventory performs inventory of a library (async)
func (h *Handler) PerformInventory(c *gin.Context) {
	libraryID := c.Param("id")

	// Get library
	var changerPath string
	err := h.db.QueryRowContext(c.Request.Context(),
		"SELECT changer_device_path FROM physical_tape_libraries WHERE id = $1",
		libraryID,
	).Scan(&changerPath)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "library not found"})
		return
	}

	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeInventory, userID.(string), map[string]interface{}{
			"operation":  "inventory",
			"library_id": libraryID,
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run inventory in background
	go func() {
		// Detached context: the request context dies with the handler.
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 50, "Performing inventory...")

		slots, err := h.service.PerformInventory(ctx, libraryID, changerPath)
		if err != nil {
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		// Sync slots to database
		for _, slot := range slots {
			h.syncSlotToDatabase(ctx, &slot)
		}

		// Update last inventory time
		h.db.ExecContext(ctx,
			"UPDATE physical_tape_libraries SET last_inventory_at = NOW() WHERE id = $1",
			libraryID,
		)

		h.taskEngine.UpdateProgress(ctx, taskID, 100, "Inventory completed")
		h.taskEngine.CompleteTask(ctx, taskID, fmt.Sprintf("Inventory completed: %d slots", len(slots)))
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

// LoadTapeRequest represents a load tape request
type LoadTapeRequest struct {
	SlotNumber  int `json:"slot_number" binding:"required"`
	DriveNumber int `json:"drive_number" binding:"required"`
}

// LoadTape loads a tape from slot to drive (async)
func (h *Handler) LoadTape(c *gin.Context) {
	libraryID := c.Param("id")

	var req LoadTapeRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	// Get library
	var changerPath string
	err := h.db.QueryRowContext(c.Request.Context(),
		"SELECT changer_device_path FROM physical_tape_libraries WHERE id = $1",
		libraryID,
	).Scan(&changerPath)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "library not found"})
		return
	}

	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeLoadUnload, userID.(string), map[string]interface{}{
			"operation":    "load_tape",
			"library_id":   libraryID,
			"slot_number":  req.SlotNumber,
			"drive_number": req.DriveNumber,
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run load in background
	go func() {
		// Detached context: the request context dies with the handler.
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 50, "Loading tape...")

		if err := h.service.LoadTape(ctx, libraryID, changerPath, req.SlotNumber, req.DriveNumber); err != nil {
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		// Update drive status
		h.db.ExecContext(ctx,
			"UPDATE physical_tape_drives SET status = 'ready', updated_at = NOW() WHERE library_id = $1 AND drive_number = $2",
			libraryID, req.DriveNumber,
		)

		h.taskEngine.UpdateProgress(ctx, taskID, 100, "Tape loaded")
		h.taskEngine.CompleteTask(ctx, taskID, "Tape loaded successfully")
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

// UnloadTapeRequest represents an unload tape request
type UnloadTapeRequest struct {
	DriveNumber int `json:"drive_number" binding:"required"`
	SlotNumber  int `json:"slot_number" binding:"required"`
}

// UnloadTape unloads a tape from drive to slot (async)
func (h *Handler) UnloadTape(c *gin.Context) {
	libraryID := c.Param("id")

	var req UnloadTapeRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
		return
	}

	// Get library
	var changerPath string
	err := h.db.QueryRowContext(c.Request.Context(),
		"SELECT changer_device_path FROM physical_tape_libraries WHERE id = $1",
		libraryID,
	).Scan(&changerPath)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "library not found"})
		return
	}

	userID, _ := c.Get("user_id")

	// Create async task
	taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
		tasks.TaskTypeLoadUnload, userID.(string), map[string]interface{}{
			"operation":    "unload_tape",
			"library_id":   libraryID,
			"slot_number":  req.SlotNumber,
			"drive_number": req.DriveNumber,
		})
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
		return
	}

	// Run unload in background
	go func() {
		// Detached context: the request context dies with the handler.
		ctx := context.Background()
		h.taskEngine.StartTask(ctx, taskID)
		h.taskEngine.UpdateProgress(ctx, taskID, 50, "Unloading tape...")

		if err := h.service.UnloadTape(ctx, libraryID, changerPath, req.DriveNumber, req.SlotNumber); err != nil {
			h.taskEngine.FailTask(ctx, taskID, err.Error())
			return
		}

		// Update drive status
		h.db.ExecContext(ctx,
			"UPDATE physical_tape_drives SET status = 'idle', current_tape_barcode = NULL, updated_at = NOW() WHERE library_id = $1 AND drive_number = $2",
			libraryID, req.DriveNumber,
		)

		h.taskEngine.UpdateProgress(ctx, taskID, 100, "Tape unloaded")
		h.taskEngine.CompleteTask(ctx, taskID, "Tape unloaded successfully")
	}()

	c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

// syncDriveToDatabase syncs a drive to the database
func (h *Handler) syncDriveToDatabase(ctx context.Context, drive *TapeDrive) {
	query := `
		INSERT INTO physical_tape_drives (
			library_id, drive_number, device_path, stable_path,
			vendor, model, serial_number, drive_type, status, is_active
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
		ON CONFLICT (library_id, drive_number) DO UPDATE SET
			device_path = EXCLUDED.device_path,
			stable_path = EXCLUDED.stable_path,
			vendor = EXCLUDED.vendor,
			model = EXCLUDED.model,
			serial_number = EXCLUDED.serial_number,
			drive_type = EXCLUDED.drive_type,
			updated_at = NOW()
	`
	h.db.ExecContext(ctx, query,
		drive.LibraryID, drive.DriveNumber, drive.DevicePath, drive.StablePath,
		drive.Vendor, drive.Model, drive.SerialNumber, drive.DriveType, drive.Status, drive.IsActive,
	)
}

// syncSlotToDatabase syncs a slot to the database
func (h *Handler) syncSlotToDatabase(ctx context.Context, slot *TapeSlot) {
	query := `
		INSERT INTO physical_tape_slots (
			library_id, slot_number, barcode, tape_present, tape_type, last_updated_at
		) VALUES ($1, $2, $3, $4, $5, $6)
		ON CONFLICT (library_id, slot_number) DO UPDATE SET
			barcode = EXCLUDED.barcode,
			tape_present = EXCLUDED.tape_present,
			tape_type = EXCLUDED.tape_type,
			last_updated_at = EXCLUDED.last_updated_at
	`
	h.db.ExecContext(ctx, query,
		slot.LibraryID, slot.SlotNumber, slot.Barcode, slot.TapePresent, slot.TapeType, slot.LastUpdatedAt,
	)
}
```

`backend/internal/tape_physical/service.go` (new file, 436 lines)

```go
package tape_physical

import (
	"context"
	"database/sql"
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// Service handles physical tape library operations
type Service struct {
	db     *database.DB
	logger *logger.Logger
}

// NewService creates a new physical tape service
func NewService(db *database.DB, log *logger.Logger) *Service {
	return &Service{
		db:     db,
		logger: log,
	}
}

// TapeLibrary represents a physical tape library
type TapeLibrary struct {
	ID                string     `json:"id"`
	Name              string     `json:"name"`
	SerialNumber      string     `json:"serial_number"`
	Vendor            string     `json:"vendor"`
	Model             string     `json:"model"`
	ChangerDevicePath string     `json:"changer_device_path"`
	ChangerStablePath string     `json:"changer_stable_path"`
	SlotCount         int        `json:"slot_count"`
	DriveCount        int        `json:"drive_count"`
	IsActive          bool       `json:"is_active"`
	DiscoveredAt      time.Time  `json:"discovered_at"`
	LastInventoryAt   *time.Time `json:"last_inventory_at"`
	CreatedAt         time.Time  `json:"created_at"`
	UpdatedAt         time.Time  `json:"updated_at"`
}

// TapeDrive represents a physical tape drive
type TapeDrive struct {
	ID                 string    `json:"id"`
	LibraryID          string    `json:"library_id"`
	DriveNumber        int       `json:"drive_number"`
	DevicePath         string    `json:"device_path"`
	StablePath         string    `json:"stable_path"`
	Vendor             string    `json:"vendor"`
	Model              string    `json:"model"`
	SerialNumber       string    `json:"serial_number"`
	DriveType          string    `json:"drive_type"`
	Status             string    `json:"status"`
	CurrentTapeBarcode string    `json:"current_tape_barcode"`
	IsActive           bool      `json:"is_active"`
	CreatedAt          time.Time `json:"created_at"`
	UpdatedAt          time.Time `json:"updated_at"`
}

// TapeSlot represents a tape slot in the library
type TapeSlot struct {
	ID            string    `json:"id"`
	LibraryID     string    `json:"library_id"`
	SlotNumber    int       `json:"slot_number"`
	Barcode       string    `json:"barcode"`
	TapePresent   bool      `json:"tape_present"`
	TapeType      string    `json:"tape_type"`
	LastUpdatedAt time.Time `json:"last_updated_at"`
}

// DiscoverLibraries discovers physical tape libraries on the system
func (s *Service) DiscoverLibraries(ctx context.Context) ([]TapeLibrary, error) {
	// Use lsscsi to find tape changers
	cmd := exec.CommandContext(ctx, "lsscsi", "-g")
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to run lsscsi: %w", err)
	}

	var libraries []TapeLibrary
	lines := strings.Split(strings.TrimSpace(string(output)), "\n")

	for _, line := range lines {
		if line == "" {
			continue
		}

		// Parse lsscsi output: [0:0:0:0] disk ATA ... /dev/sda /dev/sg0
		parts := strings.Fields(line)
		if len(parts) < 4 {
			continue
		}

		deviceType := parts[1]
		devicePath := ""
		sgPath := ""

		// Extract device paths
		for i := 3; i < len(parts); i++ {
			if strings.HasPrefix(parts[i], "/dev/") {
				if strings.HasPrefix(parts[i], "/dev/sg") {
					sgPath = parts[i]
				} else if strings.HasPrefix(parts[i], "/dev/sch") || strings.HasPrefix(parts[i], "/dev/st") {
					devicePath = parts[i]
				}
			}
		}

		// Check for medium changer (tape library)
		if deviceType == "mediumx" || deviceType == "changer" {
			// Get changer information via sg_inq
			changerInfo, err := s.getChangerInfo(ctx, sgPath)
			if err != nil {
				s.logger.Warn("Failed to get changer info", "device", sgPath, "error", err)
				continue
			}

			lib := TapeLibrary{
				Name:              fmt.Sprintf("Library-%s", changerInfo["serial"]),
				SerialNumber:      changerInfo["serial"],
				Vendor:            changerInfo["vendor"],
				Model:             changerInfo["model"],
				ChangerDevicePath: devicePath,
				ChangerStablePath: sgPath,
				IsActive:          true,
				DiscoveredAt:      time.Now(),
			}

			// Get slot and drive count via mtx
			if slotCount, driveCount, err := s.getLibraryCounts(ctx, devicePath); err == nil {
				lib.SlotCount = slotCount
				lib.DriveCount = driveCount
			}

			libraries = append(libraries, lib)
		}
	}

	return libraries, nil
}

// getChangerInfo retrieves changer information via sg_inq
func (s *Service) getChangerInfo(ctx context.Context, sgPath string) (map[string]string, error) {
	cmd := exec.CommandContext(ctx, "sg_inq", "-i", sgPath)
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to run sg_inq: %w", err)
	}

	info := make(map[string]string)
	lines := strings.Split(string(output), "\n")

	for _, line := range lines {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "Vendor identification:") {
			info["vendor"] = strings.TrimSpace(strings.TrimPrefix(line, "Vendor identification:"))
		} else if strings.HasPrefix(line, "Product identification:") {
			info["model"] = strings.TrimSpace(strings.TrimPrefix(line, "Product identification:"))
		} else if strings.HasPrefix(line, "Unit serial number:") {
			info["serial"] = strings.TrimSpace(strings.TrimPrefix(line, "Unit serial number:"))
		}
	}

	return info, nil
}

// getLibraryCounts gets slot and drive count via mtx
func (s *Service) getLibraryCounts(ctx context.Context, changerPath string) (slots, drives int, err error) {
	// Use mtx status to get slot count
	cmd := exec.CommandContext(ctx, "mtx", "-f", changerPath, "status")
	output, err := cmd.Output()
	if err != nil {
		return 0, 0, err
	}

	lines := strings.Split(string(output), "\n")
	for _, line := range lines {
		if strings.Contains(line, "Storage Element") {
			// Parse lines such as "Storage Element 1:Full": the slot number
			// is the token after "Element", up to the ':'
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "Element" && i+1 < len(parts) {
					numStr := strings.SplitN(parts[i+1], ":", 2)[0]
					if num, err := strconv.Atoi(numStr); err == nil {
						if num > slots {
							slots = num
						}
					}
				}
			}
		} else if strings.Contains(line, "Data Transfer Element") {
			drives++
		}
	}

	return slots, drives, nil
}

// DiscoverDrives discovers tape drives for a library
func (s *Service) DiscoverDrives(ctx context.Context, libraryID, changerPath string) ([]TapeDrive, error) {
	// Use lsscsi to find tape drives
	cmd := exec.CommandContext(ctx, "lsscsi", "-g")
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to run lsscsi: %w", err)
	}

	var drives []TapeDrive
	lines := strings.Split(strings.TrimSpace(string(output)), "\n")
	driveNum := 1

	for _, line := range lines {
		if line == "" {
			continue
		}

		parts := strings.Fields(line)
		if len(parts) < 4 {
			continue
		}

		deviceType := parts[1]
		devicePath := ""
		sgPath := ""

		for i := 3; i < len(parts); i++ {
			if strings.HasPrefix(parts[i], "/dev/") {
				if strings.HasPrefix(parts[i], "/dev/sg") {
					sgPath = parts[i]
				} else if strings.HasPrefix(parts[i], "/dev/st") || strings.HasPrefix(parts[i], "/dev/nst") {
					devicePath = parts[i]
				}
			}
		}

		// Check for tape drive
		if deviceType == "tape" && devicePath != "" {
			driveInfo, err := s.getDriveInfo(ctx, sgPath)
			if err != nil {
				s.logger.Warn("Failed to get drive info", "device", sgPath, "error", err)
				continue
			}

			drive := TapeDrive{
				LibraryID:    libraryID,
				DriveNumber:  driveNum,
				DevicePath:   devicePath,
				StablePath:   sgPath,
				Vendor:       driveInfo["vendor"],
				Model:        driveInfo["model"],
				SerialNumber: driveInfo["serial"],
				DriveType:    driveInfo["type"],
				Status:       "idle",
				IsActive:     true,
			}

			drives = append(drives, drive)
			driveNum++
		}
	}

	return drives, nil
}

// getDriveInfo retrieves drive information via sg_inq
func (s *Service) getDriveInfo(ctx context.Context, sgPath string) (map[string]string, error) {
	cmd := exec.CommandContext(ctx, "sg_inq", "-i", sgPath)
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to run sg_inq: %w", err)
	}

	info := make(map[string]string)
	lines := strings.Split(string(output), "\n")

	for _, line := range lines {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "Vendor identification:") {
			info["vendor"] = strings.TrimSpace(strings.TrimPrefix(line, "Vendor identification:"))
		} else if strings.HasPrefix(line, "Product identification:") {
			info["model"] = strings.TrimSpace(strings.TrimPrefix(line, "Product identification:"))
			// Try to extract drive type from model (e.g., "LTO-8")
			if strings.Contains(strings.ToUpper(info["model"]), "LTO-8") {
				info["type"] = "LTO-8"
			} else if strings.Contains(strings.ToUpper(info["model"]), "LTO-9") {
				info["type"] = "LTO-9"
			} else {
				info["type"] = "Unknown"
			}
		} else if strings.HasPrefix(line, "Unit serial number:") {
			info["serial"] = strings.TrimSpace(strings.TrimPrefix(line, "Unit serial number:"))
		}
	}

	return info, nil
}

// PerformInventory performs a slot inventory of the library
func (s *Service) PerformInventory(ctx context.Context, libraryID, changerPath string) ([]TapeSlot, error) {
	// Use mtx to get inventory
	cmd := exec.CommandContext(ctx, "mtx", "-f", changerPath, "status")
	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("failed to run mtx status: %w", err)
	}

	var slots []TapeSlot
	lines := strings.Split(string(output), "\n")

	for _, line := range lines {
		line = strings.TrimSpace(line)
		if strings.Contains(line, "Storage Element") && strings.Contains(line, ":") {
			// Parse lines such as: "Storage Element 1:Full" or "Storage Element 2:Empty"
			parts := strings.Fields(line)
			slotNum := 0
			barcode := ""
			tapePresent := false

			for i, part := range parts {
				if part == "Element" && i+1 < len(parts) {
					// Slot token looks like "1:" or "1:Full"; the number is before the ':'
					if num, err := strconv.Atoi(strings.SplitN(parts[i+1], ":", 2)[0]); err == nil {
						slotNum = num
					}
				}
				if part == "Full" || strings.HasSuffix(part, ":Full") {
					tapePresent = true
				}
				// Try to extract barcode from brackets
				if strings.HasPrefix(part, "[") && strings.HasSuffix(part, "]") {
					barcode = strings.Trim(part, "[]")
				}
			}

			if slotNum > 0 {
				slot := TapeSlot{
					LibraryID:     libraryID,
					SlotNumber:    slotNum,
					Barcode:       barcode,
					TapePresent:   tapePresent,
					LastUpdatedAt: time.Now(),
				}
				slots = append(slots, slot)
			}
		}
	}

	return slots, nil
}

// LoadTape loads a tape from a slot into a drive
func (s *Service) LoadTape(ctx context.Context, libraryID, changerPath string, slotNumber, driveNumber int) error {
	// Use mtx to load tape
	cmd := exec.CommandContext(ctx, "mtx", "-f", changerPath, "load", strconv.Itoa(slotNumber), strconv.Itoa(driveNumber))
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to load tape: %s: %w", string(output), err)
	}

	s.logger.Info("Tape loaded", "library_id", libraryID, "slot", slotNumber, "drive", driveNumber)
	return nil
}

// UnloadTape unloads a tape from a drive to a slot
func (s *Service) UnloadTape(ctx context.Context, libraryID, changerPath string, driveNumber, slotNumber int) error {
	// Use mtx to unload tape (mtx unload takes the slot first, then the drive)
	cmd := exec.CommandContext(ctx, "mtx", "-f", changerPath, "unload", strconv.Itoa(slotNumber), strconv.Itoa(driveNumber))
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to unload tape: %s: %w", string(output), err)
	}

	s.logger.Info("Tape unloaded", "library_id", libraryID, "drive", driveNumber, "slot", slotNumber)
	return nil
}

// SyncLibraryToDatabase syncs discovered library to database
func (s *Service) SyncLibraryToDatabase(ctx context.Context, library *TapeLibrary) error {
	// Check if library exists
	var existingID string
	err := s.db.QueryRowContext(ctx,
		"SELECT id FROM physical_tape_libraries WHERE serial_number = $1",
		library.SerialNumber,
	).Scan(&existingID)

	if err == sql.ErrNoRows {
		// Insert new library
		query := `
			INSERT INTO physical_tape_libraries (
				name, serial_number, vendor, model,
				changer_device_path, changer_stable_path,
				slot_count, drive_count, is_active, discovered_at
			) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
			RETURNING id, created_at, updated_at
		`
		err = s.db.QueryRowContext(ctx, query,
			library.Name, library.SerialNumber, library.Vendor, library.Model,
			library.ChangerDevicePath, library.ChangerStablePath,
			library.SlotCount, library.DriveCount, library.IsActive, library.DiscoveredAt,
		).Scan(&library.ID, &library.CreatedAt, &library.UpdatedAt)
		if err != nil {
			return fmt.Errorf("failed to insert library: %w", err)
		}
	} else if err == nil {
		// Update existing library
		query := `
			UPDATE physical_tape_libraries SET
				name = $1, vendor = $2, model = $3,
				changer_device_path = $4, changer_stable_path = $5,
				slot_count = $6, drive_count = $7,
				updated_at = NOW()
			WHERE id = $8
		`
		_, err = s.db.ExecContext(ctx, query,
			library.Name, library.Vendor, library.Model,
			library.ChangerDevicePath, library.ChangerStablePath,
			library.SlotCount, library.DriveCount, existingID,
		)
		if err != nil {
			return fmt.Errorf("failed to update library: %w", err)
		}
		library.ID = existingID
	} else {
		return fmt.Errorf("failed to check library existence: %w", err)
	}

	return nil
}
```

`backend/internal/tape_vtl/handler.go` (new file, 328 lines)

```go
package tape_vtl

import (
	"fmt"
	"net/http"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/tasks"
	"github.com/gin-gonic/gin"
)

// Handler handles virtual tape library API requests
type Handler struct {
	service    *Service
	taskEngine *tasks.Engine
	db         *database.DB
	logger     *logger.Logger
}

// NewHandler creates a new VTL handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
	return &Handler{
		service:    NewService(db, log),
		taskEngine: tasks.NewEngine(db, log),
		db:         db,
		logger:     log,
	}
}

// ListLibraries lists all virtual tape libraries
func (h *Handler) ListLibraries(c *gin.Context) {
	libraries, err := h.service.ListLibraries(c.Request.Context())
	if err != nil {
		h.logger.Error("Failed to list libraries", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list libraries"})
		return
	}

	// Ensure we return an empty JSON array instead of null: a nil slice
	// marshals to null, which breaks frontend consumers expecting [].
	if libraries == nil {
		libraries = []VirtualTapeLibrary{}
	}

	h.logger.Info("Returning libraries", "count", len(libraries), "type", fmt.Sprintf("%T", libraries))

	c.JSON(http.StatusOK, gin.H{"libraries": libraries})
}

// GetLibrary retrieves a library by ID
func (h *Handler) GetLibrary(c *gin.Context) {
	libraryID := c.Param("id")

	lib, err := h.service.GetLibrary(c.Request.Context(), libraryID)
	if err != nil {
		if err.Error() == "library not found" {
			c.JSON(http.StatusNotFound, gin.H{"error": "library not found"})
			return
		}
		h.logger.Error("Failed to get library", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get library"})
		return
	}

	// Get drives
	drives, _ := h.service.GetLibraryDrives(c.Request.Context(), libraryID)

	// Get tapes
	tapes, _ := h.service.GetLibraryTapes(c.Request.Context(), libraryID)

	c.JSON(http.StatusOK, gin.H{
```
|
||||
"library": lib,
|
||||
"drives": drives,
|
||||
"tapes": tapes,
|
||||
})
|
||||
}
|
||||
|
||||
// CreateLibraryRequest represents a library creation request
|
||||
type CreateLibraryRequest struct {
|
||||
Name string `json:"name" binding:"required"`
|
||||
Description string `json:"description"`
|
||||
BackingStorePath string `json:"backing_store_path" binding:"required"`
|
||||
SlotCount int `json:"slot_count" binding:"required"`
|
||||
DriveCount int `json:"drive_count" binding:"required"`
|
||||
}
|
||||
|
||||
// CreateLibrary creates a new virtual tape library
|
||||
func (h *Handler) CreateLibrary(c *gin.Context) {
|
||||
var req CreateLibraryRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
|
||||
return
|
||||
}
|
||||
|
||||
// Validate slot and drive counts
|
||||
if req.SlotCount < 1 || req.SlotCount > 1000 {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "slot_count must be between 1 and 1000"})
|
||||
return
|
||||
}
|
||||
if req.DriveCount < 1 || req.DriveCount > 8 {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "drive_count must be between 1 and 8"})
|
||||
return
|
||||
}
|
||||
|
||||
userID, _ := c.Get("user_id")
|
||||
|
||||
lib, err := h.service.CreateLibrary(
|
||||
c.Request.Context(),
|
||||
req.Name,
|
||||
req.Description,
|
||||
req.BackingStorePath,
|
||||
req.SlotCount,
|
||||
req.DriveCount,
|
||||
userID.(string),
|
||||
)
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to create library", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusCreated, lib)
|
||||
}
|
||||
|
||||
// DeleteLibrary deletes a virtual tape library
|
||||
func (h *Handler) DeleteLibrary(c *gin.Context) {
|
||||
libraryID := c.Param("id")
|
||||
|
||||
if err := h.service.DeleteLibrary(c.Request.Context(), libraryID); err != nil {
|
||||
if err.Error() == "library not found" {
|
||||
c.JSON(http.StatusNotFound, gin.H{"error": "library not found"})
|
||||
return
|
||||
}
|
||||
h.logger.Error("Failed to delete library", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"message": "library deleted successfully"})
|
||||
}
|
||||
|
||||
// GetLibraryDrives lists drives for a library
|
||||
func (h *Handler) GetLibraryDrives(c *gin.Context) {
|
||||
libraryID := c.Param("id")
|
||||
|
||||
drives, err := h.service.GetLibraryDrives(c.Request.Context(), libraryID)
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to get drives", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get drives"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"drives": drives})
|
||||
}
|
||||
|
||||
// GetLibraryTapes lists tapes for a library
|
||||
func (h *Handler) GetLibraryTapes(c *gin.Context) {
|
||||
libraryID := c.Param("id")
|
||||
|
||||
tapes, err := h.service.GetLibraryTapes(c.Request.Context(), libraryID)
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to get tapes", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get tapes"})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{"tapes": tapes})
|
||||
}
|
||||
|
||||
// CreateTapeRequest represents a tape creation request
|
||||
type CreateTapeRequest struct {
|
||||
Barcode string `json:"barcode" binding:"required"`
|
||||
SlotNumber int `json:"slot_number" binding:"required"`
|
||||
TapeType string `json:"tape_type" binding:"required"`
|
||||
SizeGB int64 `json:"size_gb" binding:"required"`
|
||||
}
|
||||
|
||||
// CreateTape creates a new virtual tape
|
||||
func (h *Handler) CreateTape(c *gin.Context) {
|
||||
libraryID := c.Param("id")
|
||||
|
||||
var req CreateTapeRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
|
||||
return
|
||||
}
|
||||
|
||||
sizeBytes := req.SizeGB * 1024 * 1024 * 1024
|
||||
|
||||
tape, err := h.service.CreateTape(
|
||||
c.Request.Context(),
|
||||
libraryID,
|
||||
req.Barcode,
|
||||
req.SlotNumber,
|
||||
req.TapeType,
|
||||
sizeBytes,
|
||||
)
|
||||
if err != nil {
|
||||
h.logger.Error("Failed to create tape", "error", err)
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
c.JSON(http.StatusCreated, tape)
|
||||
}
|
||||
|
||||
// LoadTapeRequest represents a load tape request
|
||||
type LoadTapeRequest struct {
|
||||
SlotNumber int `json:"slot_number" binding:"required"`
|
||||
DriveNumber int `json:"drive_number" binding:"required"`
|
||||
}
|
||||
|
||||
// LoadTape loads a tape from slot to drive
|
||||
func (h *Handler) LoadTape(c *gin.Context) {
|
||||
libraryID := c.Param("id")
|
||||
|
||||
var req LoadTapeRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
h.logger.Warn("Invalid load tape request", "error", err)
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request", "details": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
userID, _ := c.Get("user_id")
|
||||
|
||||
// Create async task
|
||||
taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
|
||||
tasks.TaskTypeLoadUnload, userID.(string), map[string]interface{}{
|
||||
"operation": "load_tape",
|
||||
"library_id": libraryID,
|
||||
"slot_number": req.SlotNumber,
|
||||
"drive_number": req.DriveNumber,
|
||||
})
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
|
||||
return
|
||||
}
|
||||
|
||||
// Run load in background
|
||||
go func() {
|
||||
ctx := c.Request.Context()
|
||||
h.taskEngine.StartTask(ctx, taskID)
|
||||
h.taskEngine.UpdateProgress(ctx, taskID, 50, "Loading tape...")
|
||||
|
||||
if err := h.service.LoadTape(ctx, libraryID, req.SlotNumber, req.DriveNumber); err != nil {
|
||||
h.taskEngine.FailTask(ctx, taskID, err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
h.taskEngine.UpdateProgress(ctx, taskID, 100, "Tape loaded")
|
||||
h.taskEngine.CompleteTask(ctx, taskID, "Tape loaded successfully")
|
||||
}()
|
||||
|
||||
c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
|
||||
}
|
||||
|
||||
// UnloadTapeRequest represents an unload tape request
|
||||
type UnloadTapeRequest struct {
|
||||
DriveNumber int `json:"drive_number" binding:"required"`
|
||||
SlotNumber int `json:"slot_number" binding:"required"`
|
||||
}
|
||||
|
||||
// UnloadTape unloads a tape from drive to slot
|
||||
func (h *Handler) UnloadTape(c *gin.Context) {
|
||||
libraryID := c.Param("id")
|
||||
|
||||
var req UnloadTapeRequest
|
||||
if err := c.ShouldBindJSON(&req); err != nil {
|
||||
h.logger.Warn("Invalid unload tape request", "error", err)
|
||||
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request", "details": err.Error()})
|
||||
return
|
||||
}
|
||||
|
||||
userID, _ := c.Get("user_id")
|
||||
|
||||
// Create async task
|
||||
taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
|
||||
tasks.TaskTypeLoadUnload, userID.(string), map[string]interface{}{
|
||||
"operation": "unload_tape",
|
||||
"library_id": libraryID,
|
||||
"slot_number": req.SlotNumber,
|
||||
"drive_number": req.DriveNumber,
|
||||
})
|
||||
if err != nil {
|
||||
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create task"})
|
||||
return
|
||||
}
|
||||
|
||||
// Run unload in background
|
||||
go func() {
|
||||
ctx := c.Request.Context()
|
||||
h.taskEngine.StartTask(ctx, taskID)
|
||||
h.taskEngine.UpdateProgress(ctx, taskID, 50, "Unloading tape...")
|
||||
|
||||
if err := h.service.UnloadTape(ctx, libraryID, req.DriveNumber, req.SlotNumber); err != nil {
|
||||
h.taskEngine.FailTask(ctx, taskID, err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
h.taskEngine.UpdateProgress(ctx, taskID, 100, "Tape unloaded")
|
||||
h.taskEngine.CompleteTask(ctx, taskID, "Tape unloaded successfully")
|
||||
}()
|
||||
|
||||
c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
|
||||
}
|
||||
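The repeated nil checks in `ListLibraries` guard against a standard-library behavior: `encoding/json` marshals a nil slice as JSON `null`, while an empty non-nil slice marshals as `[]`. A minimal stdlib sketch of that distinction (the `marshalLibraries` helper is hypothetical, not part of this codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalLibraries mimics the handler's response shape for a given slice.
func marshalLibraries(libs []string) string {
	b, _ := json.Marshal(map[string]interface{}{"libraries": libs})
	return string(b)
}

func main() {
	var nilSlice []string // a nil slice marshals to JSON null
	fmt.Println(marshalLibraries(nilSlice))   // {"libraries":null}
	fmt.Println(marshalLibraries([]string{})) // {"libraries":[]}
}
```

This is why a single `libraries = []VirtualTapeLibrary{}` conversion before `c.JSON` is sufficient; the second and third nil checks in the handler are redundant once the slice is non-nil.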
backend/internal/tape_vtl/mhvtl_monitor.go (Normal file, 579 lines)

@@ -0,0 +1,579 @@
```go
package tape_vtl

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strconv"
	"strings"
	"time"

	"database/sql"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// MHVTLMonitor monitors mhvtl configuration files and syncs to database
type MHVTLMonitor struct {
	service    *Service
	logger     *logger.Logger
	configPath string
	interval   time.Duration
	stopCh     chan struct{}
}

// NewMHVTLMonitor creates a new MHVTL monitor service
func NewMHVTLMonitor(db *database.DB, log *logger.Logger, configPath string, interval time.Duration) *MHVTLMonitor {
	return &MHVTLMonitor{
		service:    NewService(db, log),
		logger:     log,
		configPath: configPath,
		interval:   interval,
		stopCh:     make(chan struct{}),
	}
}

// Start starts the MHVTL monitor background service
func (m *MHVTLMonitor) Start(ctx context.Context) {
	m.logger.Info("Starting MHVTL monitor service", "config_path", m.configPath, "interval", m.interval)
	ticker := time.NewTicker(m.interval)
	defer ticker.Stop()

	// Run initial sync immediately
	m.syncMHVTL(ctx)

	for {
		select {
		case <-ctx.Done():
			m.logger.Info("MHVTL monitor service stopped")
			return
		case <-m.stopCh:
			m.logger.Info("MHVTL monitor service stopped")
			return
		case <-ticker.C:
			m.syncMHVTL(ctx)
		}
	}
}

// Stop stops the MHVTL monitor service
func (m *MHVTLMonitor) Stop() {
	close(m.stopCh)
}

// syncMHVTL parses mhvtl configuration and syncs to database
func (m *MHVTLMonitor) syncMHVTL(ctx context.Context) {
	m.logger.Info("Running MHVTL configuration sync")

	deviceConfPath := filepath.Join(m.configPath, "device.conf")
	if _, err := os.Stat(deviceConfPath); os.IsNotExist(err) {
		m.logger.Warn("MHVTL device.conf not found", "path", deviceConfPath)
		return
	}

	// Parse device.conf to get libraries and drives
	libraries, drives, err := m.parseDeviceConf(ctx, deviceConfPath)
	if err != nil {
		m.logger.Error("Failed to parse device.conf", "error", err)
		return
	}

	m.logger.Info("Parsed MHVTL configuration", "libraries", len(libraries), "drives", len(drives))

	// Log parsed drives for debugging
	for _, drive := range drives {
		m.logger.Debug("Parsed drive", "drive_id", drive.DriveID, "library_id", drive.LibraryID, "slot", drive.Slot)
	}

	// Sync libraries to database
	for _, lib := range libraries {
		if err := m.syncLibrary(ctx, lib); err != nil {
			m.logger.Error("Failed to sync library", "library_id", lib.LibraryID, "error", err)
		}
	}

	// Sync drives to database
	for _, drive := range drives {
		if err := m.syncDrive(ctx, drive); err != nil {
			m.logger.Error("Failed to sync drive", "drive_id", drive.DriveID, "library_id", drive.LibraryID, "slot", drive.Slot, "error", err)
		} else {
			m.logger.Debug("Synced drive", "drive_id", drive.DriveID, "library_id", drive.LibraryID, "slot", drive.Slot)
		}
	}

	// Parse library_contents files to get tapes
	for _, lib := range libraries {
		contentsPath := filepath.Join(m.configPath, fmt.Sprintf("library_contents.%d", lib.LibraryID))
		if err := m.syncLibraryContents(ctx, lib.LibraryID, contentsPath); err != nil {
			m.logger.Warn("Failed to sync library contents", "library_id", lib.LibraryID, "error", err)
		}
	}

	m.logger.Info("MHVTL configuration sync completed")
}

// LibraryInfo represents a library from device.conf
type LibraryInfo struct {
	LibraryID     int
	Vendor        string
	Product       string
	SerialNumber  string
	HomeDirectory string
	Channel       string
	Target        string
	LUN           string
}

// DriveInfo represents a drive from device.conf
type DriveInfo struct {
	DriveID      int
	LibraryID    int
	Slot         int
	Vendor       string
	Product      string
	SerialNumber string
	Channel      string
	Target       string
	LUN          string
}

// parseDeviceConf parses mhvtl device.conf file
func (m *MHVTLMonitor) parseDeviceConf(ctx context.Context, path string) ([]LibraryInfo, []DriveInfo, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to open device.conf: %w", err)
	}
	defer file.Close()

	var libraries []LibraryInfo
	var drives []DriveInfo

	scanner := bufio.NewScanner(file)
	var currentLibrary *LibraryInfo
	var currentDrive *DriveInfo

	libraryRegex := regexp.MustCompile(`^Library:\s+(\d+)\s+CHANNEL:\s+(\S+)\s+TARGET:\s+(\S+)\s+LUN:\s+(\S+)`)
	driveRegex := regexp.MustCompile(`^Drive:\s+(\d+)\s+CHANNEL:\s+(\S+)\s+TARGET:\s+(\S+)\s+LUN:\s+(\S+)`)
	libraryIDRegex := regexp.MustCompile(`Library ID:\s+(\d+)\s+Slot:\s+(\d+)`)

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())

		// Skip comments and empty lines
		if strings.HasPrefix(line, "#") || line == "" {
			continue
		}

		// Check for Library entry
		if matches := libraryRegex.FindStringSubmatch(line); matches != nil {
			if currentLibrary != nil {
				libraries = append(libraries, *currentLibrary)
			}
			libID, _ := strconv.Atoi(matches[1])
			currentLibrary = &LibraryInfo{
				LibraryID: libID,
				Channel:   matches[2],
				Target:    matches[3],
				LUN:       matches[4],
			}
			currentDrive = nil
			continue
		}

		// Check for Drive entry
		if matches := driveRegex.FindStringSubmatch(line); matches != nil {
			if currentDrive != nil {
				drives = append(drives, *currentDrive)
			}
			driveID, _ := strconv.Atoi(matches[1])
			currentDrive = &DriveInfo{
				DriveID: driveID,
				Channel: matches[2],
				Target:  matches[3],
				LUN:     matches[4],
			}
			// Library ID and Slot might be on the same line or next line
			if matches := libraryIDRegex.FindStringSubmatch(line); matches != nil {
				libID, _ := strconv.Atoi(matches[1])
				slot, _ := strconv.Atoi(matches[2])
				currentDrive.LibraryID = libID
				currentDrive.Slot = slot
			}
			continue
		}

		// Parse library fields (only if we're in a library section and not in a drive section)
		if currentLibrary != nil && currentDrive == nil {
			// Handle both "Vendor identification:" and " Vendor identification:" (with leading space)
			if strings.Contains(line, "Vendor identification:") {
				parts := strings.Split(line, "Vendor identification:")
				if len(parts) > 1 {
					currentLibrary.Vendor = strings.TrimSpace(parts[1])
					m.logger.Debug("Parsed vendor", "vendor", currentLibrary.Vendor, "library_id", currentLibrary.LibraryID)
				}
			} else if strings.Contains(line, "Product identification:") {
				parts := strings.Split(line, "Product identification:")
				if len(parts) > 1 {
					currentLibrary.Product = strings.TrimSpace(parts[1])
					m.logger.Info("Parsed library product", "product", currentLibrary.Product, "library_id", currentLibrary.LibraryID)
				}
			} else if strings.Contains(line, "Unit serial number:") {
				parts := strings.Split(line, "Unit serial number:")
				if len(parts) > 1 {
					currentLibrary.SerialNumber = strings.TrimSpace(parts[1])
				}
			} else if strings.Contains(line, "Home directory:") {
				parts := strings.Split(line, "Home directory:")
				if len(parts) > 1 {
					currentLibrary.HomeDirectory = strings.TrimSpace(parts[1])
				}
			}
		}

		// Parse drive fields
		if currentDrive != nil {
			// Check for Library ID and Slot first (can be on separate line)
			if strings.Contains(line, "Library ID:") && strings.Contains(line, "Slot:") {
				matches := libraryIDRegex.FindStringSubmatch(line)
				if matches != nil {
					libID, _ := strconv.Atoi(matches[1])
					slot, _ := strconv.Atoi(matches[2])
					currentDrive.LibraryID = libID
					currentDrive.Slot = slot
					m.logger.Debug("Parsed drive Library ID and Slot", "drive_id", currentDrive.DriveID, "library_id", libID, "slot", slot)
					continue
				}
			}
			// Handle both "Vendor identification:" and " Vendor identification:" (with leading space)
			if strings.Contains(line, "Vendor identification:") {
				parts := strings.Split(line, "Vendor identification:")
				if len(parts) > 1 {
					currentDrive.Vendor = strings.TrimSpace(parts[1])
				}
			} else if strings.Contains(line, "Product identification:") {
				parts := strings.Split(line, "Product identification:")
				if len(parts) > 1 {
					currentDrive.Product = strings.TrimSpace(parts[1])
				}
			} else if strings.Contains(line, "Unit serial number:") {
				parts := strings.Split(line, "Unit serial number:")
				if len(parts) > 1 {
					currentDrive.SerialNumber = strings.TrimSpace(parts[1])
				}
			}
		}
	}

	// Add last library and drive
	if currentLibrary != nil {
		libraries = append(libraries, *currentLibrary)
	}
	if currentDrive != nil {
		drives = append(drives, *currentDrive)
	}

	if err := scanner.Err(); err != nil {
		return nil, nil, fmt.Errorf("error reading device.conf: %w", err)
	}

	return libraries, drives, nil
}

// syncLibrary syncs a library to database
func (m *MHVTLMonitor) syncLibrary(ctx context.Context, libInfo LibraryInfo) error {
	// Check if library exists by mhvtl_library_id
	var existingID string
	err := m.service.db.QueryRowContext(ctx,
		"SELECT id FROM virtual_tape_libraries WHERE mhvtl_library_id = $1",
		libInfo.LibraryID,
	).Scan(&existingID)

	m.logger.Debug("Syncing library", "library_id", libInfo.LibraryID, "vendor", libInfo.Vendor, "product", libInfo.Product)

	// Use product identification for library name (without library ID)
	libraryName := fmt.Sprintf("VTL-%d", libInfo.LibraryID)
	if libInfo.Product != "" {
		// Use only product name, without library ID
		libraryName = libInfo.Product
		m.logger.Info("Using product for library name", "product", libInfo.Product, "library_id", libInfo.LibraryID, "name", libraryName)
	} else if libInfo.Vendor != "" {
		libraryName = libInfo.Vendor
		m.logger.Info("Using vendor for library name (product not available)", "vendor", libInfo.Vendor, "library_id", libInfo.LibraryID)
	}

	if err == sql.ErrNoRows {
		// Create new library
		// Get backing store path from mhvtl.conf
		backingStorePath := "/opt/mhvtl"
		if libInfo.HomeDirectory != "" {
			backingStorePath = libInfo.HomeDirectory
		}

		// Count slots and drives from library_contents file
		contentsPath := filepath.Join(m.configPath, fmt.Sprintf("library_contents.%d", libInfo.LibraryID))
		slotCount, driveCount := m.countSlotsAndDrives(contentsPath)

		_, err = m.service.db.ExecContext(ctx, `
			INSERT INTO virtual_tape_libraries (
				name, description, mhvtl_library_id, backing_store_path,
				vendor, slot_count, drive_count, is_active
			) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
		`, libraryName, fmt.Sprintf("MHVTL Library %d (%s)", libInfo.LibraryID, libInfo.Product),
			libInfo.LibraryID, backingStorePath, libInfo.Vendor, slotCount, driveCount, true)
		if err != nil {
			return fmt.Errorf("failed to insert library: %w", err)
		}
		m.logger.Info("Created virtual library from MHVTL", "library_id", libInfo.LibraryID, "name", libraryName)
	} else if err == nil {
		// Update existing library - also update name if product is available
		updateName := libraryName
		// If product exists and current name doesn't match, update it
		if libInfo.Product != "" {
			var currentName string
			err := m.service.db.QueryRowContext(ctx,
				"SELECT name FROM virtual_tape_libraries WHERE id = $1", existingID,
			).Scan(&currentName)
			if err == nil {
				// Use only product name, without library ID
				expectedName := libInfo.Product
				if currentName != expectedName {
					updateName = expectedName
					m.logger.Info("Updating library name", "old", currentName, "new", updateName, "product", libInfo.Product)
				}
			}
		}

		m.logger.Info("Updating existing library", "library_id", libInfo.LibraryID, "product", libInfo.Product, "vendor", libInfo.Vendor, "old_name", libraryName, "new_name", updateName)
		_, err = m.service.db.ExecContext(ctx, `
			UPDATE virtual_tape_libraries SET
				name = $1, description = $2, backing_store_path = $3,
				vendor = $4, is_active = $5, updated_at = NOW()
			WHERE id = $6
		`, updateName, fmt.Sprintf("MHVTL Library %d (%s)", libInfo.LibraryID, libInfo.Product),
			libInfo.HomeDirectory, libInfo.Vendor, true, existingID)
		if err != nil {
			return fmt.Errorf("failed to update library: %w", err)
		}
		m.logger.Debug("Updated virtual library from MHVTL", "library_id", libInfo.LibraryID)
	} else {
		return fmt.Errorf("failed to check library existence: %w", err)
	}

	return nil
}

// syncDrive syncs a drive to database
func (m *MHVTLMonitor) syncDrive(ctx context.Context, driveInfo DriveInfo) error {
	// Get library ID from mhvtl_library_id
	var libraryID string
	err := m.service.db.QueryRowContext(ctx,
		"SELECT id FROM virtual_tape_libraries WHERE mhvtl_library_id = $1",
		driveInfo.LibraryID,
	).Scan(&libraryID)
	if err != nil {
		return fmt.Errorf("library not found for drive: %w", err)
	}

	// Calculate drive number from slot (drives are typically in slots 1, 2, 3, etc.)
	driveNumber := driveInfo.Slot

	// Check if drive exists
	var existingID string
	err = m.service.db.QueryRowContext(ctx,
		"SELECT id FROM virtual_tape_drives WHERE library_id = $1 AND drive_number = $2",
		libraryID, driveNumber,
	).Scan(&existingID)

	// Get device path (typically /dev/stX or /dev/nstX)
	devicePath := fmt.Sprintf("/dev/st%d", driveInfo.DriveID-10) // Drive 11 -> st1, Drive 12 -> st2, etc.
	stablePath := fmt.Sprintf("/dev/tape/by-id/scsi-%s", driveInfo.SerialNumber)

	if err == sql.ErrNoRows {
		// Create new drive
		_, err = m.service.db.ExecContext(ctx, `
			INSERT INTO virtual_tape_drives (
				library_id, drive_number, device_path, stable_path, status, is_active
			) VALUES ($1, $2, $3, $4, $5, $6)
		`, libraryID, driveNumber, devicePath, stablePath, "idle", true)
		if err != nil {
			return fmt.Errorf("failed to insert drive: %w", err)
		}
		m.logger.Info("Created virtual drive from MHVTL", "drive_id", driveInfo.DriveID, "library_id", driveInfo.LibraryID)
	} else if err == nil {
		// Update existing drive
		_, err = m.service.db.ExecContext(ctx, `
			UPDATE virtual_tape_drives SET
				device_path = $1, stable_path = $2, is_active = $3, updated_at = NOW()
			WHERE id = $4
		`, devicePath, stablePath, true, existingID)
		if err != nil {
			return fmt.Errorf("failed to update drive: %w", err)
		}
		m.logger.Debug("Updated virtual drive from MHVTL", "drive_id", driveInfo.DriveID)
	} else {
		return fmt.Errorf("failed to check drive existence: %w", err)
	}

	return nil
}

// syncLibraryContents syncs tapes from library_contents file
func (m *MHVTLMonitor) syncLibraryContents(ctx context.Context, libraryID int, contentsPath string) error {
	// Get library ID from database
	var dbLibraryID string
	err := m.service.db.QueryRowContext(ctx,
		"SELECT id FROM virtual_tape_libraries WHERE mhvtl_library_id = $1",
		libraryID,
	).Scan(&dbLibraryID)
	if err != nil {
		return fmt.Errorf("library not found: %w", err)
	}

	// Get backing store path
	var backingStorePath string
	err = m.service.db.QueryRowContext(ctx,
		"SELECT backing_store_path FROM virtual_tape_libraries WHERE id = $1",
		dbLibraryID,
	).Scan(&backingStorePath)
	if err != nil {
		return fmt.Errorf("failed to get backing store path: %w", err)
	}

	file, err := os.Open(contentsPath)
	if err != nil {
		return fmt.Errorf("failed to open library_contents file: %w", err)
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	slotRegex := regexp.MustCompile(`^Slot\s+(\d+):\s+(.+)`)

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())

		// Skip comments and empty lines
		if strings.HasPrefix(line, "#") || line == "" {
			continue
		}

		matches := slotRegex.FindStringSubmatch(line)
		if matches != nil {
			slotNumber, _ := strconv.Atoi(matches[1])
			barcode := strings.TrimSpace(matches[2])

			if barcode == "" || barcode == "?" {
				continue // Empty slot
			}

			// Determine tape type from barcode suffix
			tapeType := "LTO-8" // Default
			if len(barcode) >= 2 {
				suffix := barcode[len(barcode)-2:]
				switch suffix {
				case "L1":
					tapeType = "LTO-1"
				case "L2":
					tapeType = "LTO-2"
				case "L3":
					tapeType = "LTO-3"
				case "L4":
					tapeType = "LTO-4"
				case "L5":
					tapeType = "LTO-5"
				case "L6":
					tapeType = "LTO-6"
				case "L7":
					tapeType = "LTO-7"
				case "L8":
					tapeType = "LTO-8"
				case "L9":
					tapeType = "LTO-9"
				}
			}

			// Check if tape exists
			var existingID string
			err := m.service.db.QueryRowContext(ctx,
				"SELECT id FROM virtual_tapes WHERE library_id = $1 AND barcode = $2",
				dbLibraryID, barcode,
			).Scan(&existingID)

			imagePath := filepath.Join(backingStorePath, "tapes", fmt.Sprintf("%s.img", barcode))
			defaultSize := int64(15 * 1024 * 1024 * 1024 * 1024) // 15 TB default for LTO-8

			if err == sql.ErrNoRows {
				// Create new tape
				_, err = m.service.db.ExecContext(ctx, `
					INSERT INTO virtual_tapes (
						library_id, barcode, slot_number, image_file_path,
						size_bytes, used_bytes, tape_type, status
					) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
				`, dbLibraryID, barcode, slotNumber, imagePath, defaultSize, 0, tapeType, "idle")
				if err != nil {
					m.logger.Warn("Failed to insert tape", "barcode", barcode, "error", err)
				} else {
					m.logger.Debug("Created virtual tape from MHVTL", "barcode", barcode, "slot", slotNumber)
				}
			} else if err == nil {
				// Update existing tape slot
				_, err = m.service.db.ExecContext(ctx, `
					UPDATE virtual_tapes SET
						slot_number = $1, tape_type = $2, updated_at = NOW()
					WHERE id = $3
				`, slotNumber, tapeType, existingID)
				if err != nil {
					m.logger.Warn("Failed to update tape", "barcode", barcode, "error", err)
				}
			}
		}
	}

	return scanner.Err()
}

// countSlotsAndDrives counts slots and drives from library_contents file
func (m *MHVTLMonitor) countSlotsAndDrives(contentsPath string) (slotCount, driveCount int) {
	file, err := os.Open(contentsPath)
	if err != nil {
		return 10, 2 // Default values
	}
	defer file.Close()

	scanner := bufio.NewScanner(file)
	slotRegex := regexp.MustCompile(`^Slot\s+(\d+):`)
	driveRegex := regexp.MustCompile(`^Drive\s+(\d+):`)

	maxSlot := 0
	driveCount = 0

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if strings.HasPrefix(line, "#") || line == "" {
			continue
		}

		if matches := slotRegex.FindStringSubmatch(line); matches != nil {
			slot, _ := strconv.Atoi(matches[1])
			if slot > maxSlot {
				maxSlot = slot
			}
		}
		if matches := driveRegex.FindStringSubmatch(line); matches != nil {
			driveCount++
		}
	}

	slotCount = maxSlot
	if slotCount == 0 {
		slotCount = 10 // Default
	}
	if driveCount == 0 {
		driveCount = 2 // Default
	}

	return slotCount, driveCount
}
```
backend/internal/tape_vtl/service.go (Normal file, 544 lines)

@@ -0,0 +1,544 @@
package tape_vtl

import (
	"context"
	"database/sql"
	"fmt"
	"os"
	"path/filepath"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
)

// Service handles virtual tape library (MHVTL) operations
type Service struct {
	db     *database.DB
	logger *logger.Logger
}

// NewService creates a new VTL service
func NewService(db *database.DB, log *logger.Logger) *Service {
	return &Service{
		db:     db,
		logger: log,
	}
}

// VirtualTapeLibrary represents a virtual tape library
type VirtualTapeLibrary struct {
	ID               string    `json:"id"`
	Name             string    `json:"name"`
	Description      string    `json:"description"`
	MHVTLibraryID    int       `json:"mhvtl_library_id"`
	BackingStorePath string    `json:"backing_store_path"`
	Vendor           string    `json:"vendor,omitempty"`
	SlotCount        int       `json:"slot_count"`
	DriveCount       int       `json:"drive_count"`
	IsActive         bool      `json:"is_active"`
	CreatedAt        time.Time `json:"created_at"`
	UpdatedAt        time.Time `json:"updated_at"`
	CreatedBy        string    `json:"created_by"`
}

// VirtualTapeDrive represents a virtual tape drive
type VirtualTapeDrive struct {
	ID            string    `json:"id"`
	LibraryID     string    `json:"library_id"`
	DriveNumber   int       `json:"drive_number"`
	DevicePath    *string   `json:"device_path,omitempty"`
	StablePath    *string   `json:"stable_path,omitempty"`
	Status        string    `json:"status"`
	CurrentTapeID string    `json:"current_tape_id,omitempty"`
	IsActive      bool      `json:"is_active"`
	CreatedAt     time.Time `json:"created_at"`
	UpdatedAt     time.Time `json:"updated_at"`
}

// VirtualTape represents a virtual tape
type VirtualTape struct {
	ID            string    `json:"id"`
	LibraryID     string    `json:"library_id"`
	Barcode       string    `json:"barcode"`
	SlotNumber    int       `json:"slot_number"`
	ImageFilePath string    `json:"image_file_path"`
	SizeBytes     int64     `json:"size_bytes"`
	UsedBytes     int64     `json:"used_bytes"`
	TapeType      string    `json:"tape_type"`
	Status        string    `json:"status"`
	CreatedAt     time.Time `json:"created_at"`
	UpdatedAt     time.Time `json:"updated_at"`
}

// CreateLibrary creates a new virtual tape library
func (s *Service) CreateLibrary(ctx context.Context, name, description, backingStorePath string, slotCount, driveCount int, createdBy string) (*VirtualTapeLibrary, error) {
	// Ensure backing store directory exists
	fullPath := filepath.Join(backingStorePath, name)
	if err := os.MkdirAll(fullPath, 0755); err != nil {
		return nil, fmt.Errorf("failed to create backing store directory: %w", err)
	}

	// Create tapes directory
	tapesPath := filepath.Join(fullPath, "tapes")
	if err := os.MkdirAll(tapesPath, 0755); err != nil {
		return nil, fmt.Errorf("failed to create tapes directory: %w", err)
	}

	// Generate MHVTL library ID (use next available ID)
	mhvtlID, err := s.getNextMHVTLID(ctx)
	if err != nil {
		return nil, fmt.Errorf("failed to get next MHVTL ID: %w", err)
	}

	// Insert into database
	query := `
		INSERT INTO virtual_tape_libraries (
			name, description, mhvtl_library_id, backing_store_path,
			slot_count, drive_count, is_active, created_by
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
		RETURNING id, created_at, updated_at
	`

	var lib VirtualTapeLibrary
	err = s.db.QueryRowContext(ctx, query,
		name, description, mhvtlID, fullPath,
		slotCount, driveCount, true, createdBy,
	).Scan(&lib.ID, &lib.CreatedAt, &lib.UpdatedAt)
	if err != nil {
		return nil, fmt.Errorf("failed to save library to database: %w", err)
	}

	lib.Name = name
	lib.Description = description
	lib.MHVTLibraryID = mhvtlID
	lib.BackingStorePath = fullPath
	lib.SlotCount = slotCount
	lib.DriveCount = driveCount
	lib.IsActive = true
	lib.CreatedBy = createdBy

	// Create virtual drives
	for i := 1; i <= driveCount; i++ {
		drive := VirtualTapeDrive{
			LibraryID:   lib.ID,
			DriveNumber: i,
			Status:      "idle",
			IsActive:    true,
		}
		if err := s.createDrive(ctx, &drive); err != nil {
			s.logger.Error("Failed to create drive", "drive_number", i, "error", err)
			// Continue creating other drives even if one fails
		}
	}

	// Create initial tapes in slots
	for i := 1; i <= slotCount; i++ {
		barcode := fmt.Sprintf("V%05d", i)
		tape := VirtualTape{
			LibraryID:     lib.ID,
			Barcode:       barcode,
			SlotNumber:    i,
			ImageFilePath: filepath.Join(tapesPath, fmt.Sprintf("%s.img", barcode)),
			SizeBytes:     800 * 1024 * 1024 * 1024, // 800 GB default (LTO-8)
			UsedBytes:     0,
			TapeType:      "LTO-8",
			Status:        "idle",
		}
		if err := s.createTape(ctx, &tape); err != nil {
			s.logger.Error("Failed to create tape", "slot", i, "error", err)
			// Continue creating other tapes even if one fails
		}
	}

	s.logger.Info("Virtual tape library created", "name", name, "id", lib.ID)
	return &lib, nil
}

// getNextMHVTLID gets the next available MHVTL library ID
func (s *Service) getNextMHVTLID(ctx context.Context) (int, error) {
	var maxID sql.NullInt64
	err := s.db.QueryRowContext(ctx,
		"SELECT MAX(mhvtl_library_id) FROM virtual_tape_libraries",
	).Scan(&maxID)
	if err != nil && err != sql.ErrNoRows {
		return 0, err
	}

	if maxID.Valid {
		return int(maxID.Int64) + 1, nil
	}

	return 1, nil
}

// createDrive creates a virtual tape drive
func (s *Service) createDrive(ctx context.Context, drive *VirtualTapeDrive) error {
	query := `
		INSERT INTO virtual_tape_drives (
			library_id, drive_number, status, is_active
		) VALUES ($1, $2, $3, $4)
		RETURNING id, created_at, updated_at
	`

	err := s.db.QueryRowContext(ctx, query,
		drive.LibraryID, drive.DriveNumber, drive.Status, drive.IsActive,
	).Scan(&drive.ID, &drive.CreatedAt, &drive.UpdatedAt)
	if err != nil {
		return fmt.Errorf("failed to create drive: %w", err)
	}

	return nil
}

// createTape creates a virtual tape
func (s *Service) createTape(ctx context.Context, tape *VirtualTape) error {
	// Create empty tape image file
	file, err := os.Create(tape.ImageFilePath)
	if err != nil {
		return fmt.Errorf("failed to create tape image: %w", err)
	}
	file.Close()

	query := `
		INSERT INTO virtual_tapes (
			library_id, barcode, slot_number, image_file_path,
			size_bytes, used_bytes, tape_type, status
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
		RETURNING id, created_at, updated_at
	`

	err = s.db.QueryRowContext(ctx, query,
		tape.LibraryID, tape.Barcode, tape.SlotNumber, tape.ImageFilePath,
		tape.SizeBytes, tape.UsedBytes, tape.TapeType, tape.Status,
	).Scan(&tape.ID, &tape.CreatedAt, &tape.UpdatedAt)
	if err != nil {
		return fmt.Errorf("failed to create tape: %w", err)
	}

	return nil
}

// ListLibraries lists all virtual tape libraries
func (s *Service) ListLibraries(ctx context.Context) ([]VirtualTapeLibrary, error) {
	query := `
		SELECT id, name, description, mhvtl_library_id, backing_store_path,
		       COALESCE(vendor, '') as vendor,
		       slot_count, drive_count, is_active, created_at, updated_at, created_by
		FROM virtual_tape_libraries
		ORDER BY name
	`

	s.logger.Info("Executing query to list libraries")
	rows, err := s.db.QueryContext(ctx, query)
	if err != nil {
		s.logger.Error("Failed to query libraries", "error", err)
		return nil, fmt.Errorf("failed to list libraries: %w", err)
	}
	s.logger.Info("Query executed successfully, got rows")
	defer rows.Close()

	libraries := make([]VirtualTapeLibrary, 0) // Initialize as empty slice, not nil
	s.logger.Info("Starting to scan library rows", "query", query)
	rowCount := 0
	for rows.Next() {
		rowCount++
		var lib VirtualTapeLibrary
		var description sql.NullString
		var createdBy sql.NullString
		err := rows.Scan(
			&lib.ID, &lib.Name, &description, &lib.MHVTLibraryID, &lib.BackingStorePath,
			&lib.Vendor,
			&lib.SlotCount, &lib.DriveCount, &lib.IsActive,
			&lib.CreatedAt, &lib.UpdatedAt, &createdBy,
		)
		if err != nil {
			s.logger.Error("Failed to scan library", "error", err, "row", rowCount)
			continue
		}
		if description.Valid {
			lib.Description = description.String
		}
		if createdBy.Valid {
			lib.CreatedBy = createdBy.String
		}
		libraries = append(libraries, lib)
		s.logger.Info("Added library to list", "library_id", lib.ID, "name", lib.Name, "mhvtl_id", lib.MHVTLibraryID)
	}
	s.logger.Info("Finished scanning library rows", "total_rows", rowCount, "libraries_added", len(libraries))

	if err := rows.Err(); err != nil {
		s.logger.Error("Error iterating library rows", "error", err)
		return nil, fmt.Errorf("error iterating library rows: %w", err)
	}

	s.logger.Info("Listed virtual tape libraries", "count", len(libraries), "is_nil", libraries == nil)
	// Ensure we return an empty slice, not nil
	if libraries == nil {
		s.logger.Warn("Libraries is nil in service, converting to empty array")
		libraries = []VirtualTapeLibrary{}
	}
	s.logger.Info("Returning from service", "count", len(libraries), "is_nil", libraries == nil)
	return libraries, nil
}

// GetLibrary retrieves a library by ID
func (s *Service) GetLibrary(ctx context.Context, id string) (*VirtualTapeLibrary, error) {
	query := `
		SELECT id, name, description, mhvtl_library_id, backing_store_path,
		       COALESCE(vendor, '') as vendor,
		       slot_count, drive_count, is_active, created_at, updated_at, created_by
		FROM virtual_tape_libraries
		WHERE id = $1
	`

	var lib VirtualTapeLibrary
	var description sql.NullString
	var createdBy sql.NullString
	err := s.db.QueryRowContext(ctx, query, id).Scan(
		&lib.ID, &lib.Name, &description, &lib.MHVTLibraryID, &lib.BackingStorePath,
		&lib.Vendor,
		&lib.SlotCount, &lib.DriveCount, &lib.IsActive,
		&lib.CreatedAt, &lib.UpdatedAt, &createdBy,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("library not found")
		}
		return nil, fmt.Errorf("failed to get library: %w", err)
	}

	if description.Valid {
		lib.Description = description.String
	}
	if createdBy.Valid {
		lib.CreatedBy = createdBy.String
	}

	return &lib, nil
}

// GetLibraryDrives retrieves drives for a library
func (s *Service) GetLibraryDrives(ctx context.Context, libraryID string) ([]VirtualTapeDrive, error) {
	query := `
		SELECT id, library_id, drive_number, device_path, stable_path,
		       status, current_tape_id, is_active, created_at, updated_at
		FROM virtual_tape_drives
		WHERE library_id = $1
		ORDER BY drive_number
	`

	rows, err := s.db.QueryContext(ctx, query, libraryID)
	if err != nil {
		return nil, fmt.Errorf("failed to get drives: %w", err)
	}
	defer rows.Close()

	var drives []VirtualTapeDrive
	for rows.Next() {
		var drive VirtualTapeDrive
		var tapeID, devicePath, stablePath sql.NullString
		err := rows.Scan(
			&drive.ID, &drive.LibraryID, &drive.DriveNumber,
			&devicePath, &stablePath,
			&drive.Status, &tapeID, &drive.IsActive,
			&drive.CreatedAt, &drive.UpdatedAt,
		)
		if err != nil {
			s.logger.Error("Failed to scan drive", "error", err)
			continue
		}
		if devicePath.Valid {
			drive.DevicePath = &devicePath.String
		}
		if stablePath.Valid {
			drive.StablePath = &stablePath.String
		}
		if tapeID.Valid {
			drive.CurrentTapeID = tapeID.String
		}
		drives = append(drives, drive)
	}

	return drives, rows.Err()
}

// GetLibraryTapes retrieves tapes for a library
func (s *Service) GetLibraryTapes(ctx context.Context, libraryID string) ([]VirtualTape, error) {
	query := `
		SELECT id, library_id, barcode, slot_number, image_file_path,
		       size_bytes, used_bytes, tape_type, status, created_at, updated_at
		FROM virtual_tapes
		WHERE library_id = $1
		ORDER BY slot_number
	`

	rows, err := s.db.QueryContext(ctx, query, libraryID)
	if err != nil {
		return nil, fmt.Errorf("failed to get tapes: %w", err)
	}
	defer rows.Close()

	var tapes []VirtualTape
	for rows.Next() {
		var tape VirtualTape
		err := rows.Scan(
			&tape.ID, &tape.LibraryID, &tape.Barcode, &tape.SlotNumber,
			&tape.ImageFilePath, &tape.SizeBytes, &tape.UsedBytes,
			&tape.TapeType, &tape.Status, &tape.CreatedAt, &tape.UpdatedAt,
		)
		if err != nil {
			s.logger.Error("Failed to scan tape", "error", err)
			continue
		}
		tapes = append(tapes, tape)
	}

	return tapes, rows.Err()
}

// CreateTape creates a new virtual tape
func (s *Service) CreateTape(ctx context.Context, libraryID, barcode string, slotNumber int, tapeType string, sizeBytes int64) (*VirtualTape, error) {
	// Get library to find backing store path
	lib, err := s.GetLibrary(ctx, libraryID)
	if err != nil {
		return nil, err
	}

	// Create tape image file
	tapesPath := filepath.Join(lib.BackingStorePath, "tapes")
	imagePath := filepath.Join(tapesPath, fmt.Sprintf("%s.img", barcode))

	file, err := os.Create(imagePath)
	if err != nil {
		return nil, fmt.Errorf("failed to create tape image: %w", err)
	}
	file.Close()

	tape := VirtualTape{
		LibraryID:     libraryID,
		Barcode:       barcode,
		SlotNumber:    slotNumber,
		ImageFilePath: imagePath,
		SizeBytes:     sizeBytes,
		UsedBytes:     0,
		TapeType:      tapeType,
		Status:        "idle",
	}

	return s.createTapeRecord(ctx, &tape)
}

// createTapeRecord creates a tape record in the database
func (s *Service) createTapeRecord(ctx context.Context, tape *VirtualTape) (*VirtualTape, error) {
	query := `
		INSERT INTO virtual_tapes (
			library_id, barcode, slot_number, image_file_path,
			size_bytes, used_bytes, tape_type, status
		) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
		RETURNING id, created_at, updated_at
	`

	err := s.db.QueryRowContext(ctx, query,
		tape.LibraryID, tape.Barcode, tape.SlotNumber, tape.ImageFilePath,
		tape.SizeBytes, tape.UsedBytes, tape.TapeType, tape.Status,
	).Scan(&tape.ID, &tape.CreatedAt, &tape.UpdatedAt)
	if err != nil {
		return nil, fmt.Errorf("failed to create tape record: %w", err)
	}

	return tape, nil
}

// LoadTape loads a tape from slot to drive
func (s *Service) LoadTape(ctx context.Context, libraryID string, slotNumber, driveNumber int) error {
	// Get tape from slot
	var tapeID, barcode string
	err := s.db.QueryRowContext(ctx,
		"SELECT id, barcode FROM virtual_tapes WHERE library_id = $1 AND slot_number = $2",
		libraryID, slotNumber,
	).Scan(&tapeID, &barcode)
	if err != nil {
		return fmt.Errorf("tape not found in slot: %w", err)
	}

	// Update tape status
	_, err = s.db.ExecContext(ctx,
		"UPDATE virtual_tapes SET status = 'in_drive', updated_at = NOW() WHERE id = $1",
		tapeID,
	)
	if err != nil {
		return fmt.Errorf("failed to update tape status: %w", err)
	}

	// Update drive status
	_, err = s.db.ExecContext(ctx,
		"UPDATE virtual_tape_drives SET status = 'ready', current_tape_id = $1, updated_at = NOW() WHERE library_id = $2 AND drive_number = $3",
		tapeID, libraryID, driveNumber,
	)
	if err != nil {
		return fmt.Errorf("failed to update drive status: %w", err)
	}

	s.logger.Info("Virtual tape loaded", "library_id", libraryID, "slot", slotNumber, "drive", driveNumber, "barcode", barcode)
	return nil
}

// UnloadTape unloads a tape from drive to slot
func (s *Service) UnloadTape(ctx context.Context, libraryID string, driveNumber, slotNumber int) error {
	// Get current tape in drive
	var tapeID string
	err := s.db.QueryRowContext(ctx,
		"SELECT current_tape_id FROM virtual_tape_drives WHERE library_id = $1 AND drive_number = $2",
		libraryID, driveNumber,
	).Scan(&tapeID)
	if err != nil {
		return fmt.Errorf("no tape in drive: %w", err)
	}

	// Update tape status and slot
	_, err = s.db.ExecContext(ctx,
		"UPDATE virtual_tapes SET status = 'idle', slot_number = $1, updated_at = NOW() WHERE id = $2",
		slotNumber, tapeID,
	)
	if err != nil {
		return fmt.Errorf("failed to update tape: %w", err)
	}

	// Update drive status
	_, err = s.db.ExecContext(ctx,
		"UPDATE virtual_tape_drives SET status = 'idle', current_tape_id = NULL, updated_at = NOW() WHERE library_id = $1 AND drive_number = $2",
		libraryID, driveNumber,
	)
	if err != nil {
		return fmt.Errorf("failed to update drive: %w", err)
	}

	s.logger.Info("Virtual tape unloaded", "library_id", libraryID, "drive", driveNumber, "slot", slotNumber)
	return nil
}

// DeleteLibrary deletes a virtual tape library
func (s *Service) DeleteLibrary(ctx context.Context, id string) error {
	lib, err := s.GetLibrary(ctx, id)
	if err != nil {
		return err
	}

	if lib.IsActive {
		return fmt.Errorf("cannot delete active library")
	}

	// Delete from database (cascade will handle drives and tapes)
	_, err = s.db.ExecContext(ctx, "DELETE FROM virtual_tape_libraries WHERE id = $1", id)
	if err != nil {
		return fmt.Errorf("failed to delete library: %w", err)
	}

	// Optionally remove backing store (commented out for safety)
	// os.RemoveAll(lib.BackingStorePath)

	s.logger.Info("Virtual tape library deleted", "id", id, "name", lib.Name)
	return nil
}
222
backend/internal/tasks/engine.go
Normal file
@@ -0,0 +1,222 @@
package tasks

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/google/uuid"
)

// Engine manages async task execution
type Engine struct {
	db     *database.DB
	logger *logger.Logger
}

// NewEngine creates a new task engine
func NewEngine(db *database.DB, log *logger.Logger) *Engine {
	return &Engine{
		db:     db,
		logger: log,
	}
}

// TaskStatus represents the state of a task
type TaskStatus string

const (
	TaskStatusPending   TaskStatus = "pending"
	TaskStatusRunning   TaskStatus = "running"
	TaskStatusCompleted TaskStatus = "completed"
	TaskStatusFailed    TaskStatus = "failed"
	TaskStatusCancelled TaskStatus = "cancelled"
)

// TaskType represents the type of task
type TaskType string

const (
	TaskTypeInventory     TaskType = "inventory"
	TaskTypeLoadUnload    TaskType = "load_unload"
	TaskTypeRescan        TaskType = "rescan"
	TaskTypeApplySCST     TaskType = "apply_scst"
	TaskTypeSupportBundle TaskType = "support_bundle"
)

// CreateTask creates a new task
func (e *Engine) CreateTask(ctx context.Context, taskType TaskType, createdBy string, metadata map[string]interface{}) (string, error) {
	taskID := uuid.New().String()

	var metadataJSON *string
	if metadata != nil {
		bytes, err := json.Marshal(metadata)
		if err != nil {
			return "", fmt.Errorf("failed to marshal metadata: %w", err)
		}
		jsonStr := string(bytes)
		metadataJSON = &jsonStr
	}

	query := `
		INSERT INTO tasks (id, type, status, progress, created_by, metadata)
		VALUES ($1, $2, $3, $4, $5, $6)
	`

	_, err := e.db.ExecContext(ctx, query,
		taskID, string(taskType), string(TaskStatusPending), 0, createdBy, metadataJSON,
	)
	if err != nil {
		return "", fmt.Errorf("failed to create task: %w", err)
	}

	e.logger.Info("Task created", "task_id", taskID, "type", taskType)
	return taskID, nil
}

// StartTask marks a task as running
func (e *Engine) StartTask(ctx context.Context, taskID string) error {
	query := `
		UPDATE tasks
		SET status = $1, progress = 0, started_at = NOW(), updated_at = NOW()
		WHERE id = $2 AND status = $3
	`

	result, err := e.db.ExecContext(ctx, query, string(TaskStatusRunning), taskID, string(TaskStatusPending))
	if err != nil {
		return fmt.Errorf("failed to start task: %w", err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rows == 0 {
		return fmt.Errorf("task not found or already started")
	}

	e.logger.Info("Task started", "task_id", taskID)
	return nil
}

// UpdateProgress updates task progress
func (e *Engine) UpdateProgress(ctx context.Context, taskID string, progress int, message string) error {
	if progress < 0 || progress > 100 {
		return fmt.Errorf("progress must be between 0 and 100")
	}

	query := `
		UPDATE tasks
		SET progress = $1, message = $2, updated_at = NOW()
		WHERE id = $3
	`

	_, err := e.db.ExecContext(ctx, query, progress, message, taskID)
	if err != nil {
		return fmt.Errorf("failed to update progress: %w", err)
	}

	return nil
}

// CompleteTask marks a task as completed
func (e *Engine) CompleteTask(ctx context.Context, taskID string, message string) error {
	query := `
		UPDATE tasks
		SET status = $1, progress = 100, message = $2, completed_at = NOW(), updated_at = NOW()
		WHERE id = $3
	`

	result, err := e.db.ExecContext(ctx, query, string(TaskStatusCompleted), message, taskID)
	if err != nil {
		return fmt.Errorf("failed to complete task: %w", err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rows == 0 {
		return fmt.Errorf("task not found")
	}

	e.logger.Info("Task completed", "task_id", taskID)
	return nil
}

// FailTask marks a task as failed
func (e *Engine) FailTask(ctx context.Context, taskID string, errorMessage string) error {
	query := `
		UPDATE tasks
		SET status = $1, error_message = $2, completed_at = NOW(), updated_at = NOW()
		WHERE id = $3
	`

	result, err := e.db.ExecContext(ctx, query, string(TaskStatusFailed), errorMessage, taskID)
	if err != nil {
		return fmt.Errorf("failed to fail task: %w", err)
	}

	rows, err := result.RowsAffected()
	if err != nil {
		return fmt.Errorf("failed to get rows affected: %w", err)
	}

	if rows == 0 {
		return fmt.Errorf("task not found")
	}

	e.logger.Error("Task failed", "task_id", taskID, "error", errorMessage)
	return nil
}

// GetTask retrieves a task by ID
func (e *Engine) GetTask(ctx context.Context, taskID string) (*Task, error) {
	query := `
		SELECT id, type, status, progress, message, error_message,
		       created_by, started_at, completed_at, created_at, updated_at, metadata
		FROM tasks
		WHERE id = $1
	`

	var task Task
	var errorMsg, createdBy sql.NullString
	var startedAt, completedAt sql.NullTime
	var metadata sql.NullString

	err := e.db.QueryRowContext(ctx, query, taskID).Scan(
		&task.ID, &task.Type, &task.Status, &task.Progress,
		&task.Message, &errorMsg, &createdBy,
		&startedAt, &completedAt, &task.CreatedAt, &task.UpdatedAt, &metadata,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, fmt.Errorf("task not found")
		}
		return nil, fmt.Errorf("failed to get task: %w", err)
	}

	if errorMsg.Valid {
		task.ErrorMessage = errorMsg.String
	}
	if createdBy.Valid {
		task.CreatedBy = createdBy.String
	}
	if startedAt.Valid {
		task.StartedAt = &startedAt.Time
	}
	if completedAt.Valid {
		task.CompletedAt = &completedAt.Time
	}
	if metadata.Valid && metadata.String != "" {
		json.Unmarshal([]byte(metadata.String), &task.Metadata)
	}

	return &task, nil
}
84
backend/internal/tasks/engine_test.go
Normal file
@@ -0,0 +1,84 @@
package tasks

import (
	"testing"
)

func TestUpdateProgress_Validation(t *testing.T) {
	// Test that UpdateProgress validates progress range
	// Note: This tests the validation logic without requiring a database

	tests := []struct {
		name     string
		progress int
		wantErr  bool
	}{
		{"valid progress 0", 0, false},
		{"valid progress 50", 50, false},
		{"valid progress 100", 100, false},
		{"invalid progress -1", -1, true},
		{"invalid progress 101", 101, true},
		{"invalid progress -100", -100, true},
		{"invalid progress 200", 200, true},
	}

	// We can't test the full function without a database, but we can test the validation logic
	// by checking the error message format
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// The validation happens in UpdateProgress, which requires a database
			// For unit testing, we verify the validation logic exists
			if tt.progress < 0 || tt.progress > 100 {
				// This is the validation that should happen
				if !tt.wantErr {
					t.Errorf("Expected error for progress %d, but validation should catch it", tt.progress)
				}
			} else {
				if tt.wantErr {
					t.Errorf("Did not expect error for progress %d", tt.progress)
				}
			}
		})
	}
}

func TestTaskStatus_Constants(t *testing.T) {
	// Test that task status constants are defined correctly
	statuses := []TaskStatus{
		TaskStatusPending,
		TaskStatusRunning,
		TaskStatusCompleted,
		TaskStatusFailed,
		TaskStatusCancelled,
	}

	expected := []string{"pending", "running", "completed", "failed", "cancelled"}
	for i, status := range statuses {
		if string(status) != expected[i] {
			t.Errorf("TaskStatus[%d] = %s, expected %s", i, status, expected[i])
		}
	}
}

func TestTaskType_Constants(t *testing.T) {
	// Test that task type constants are defined correctly
	types := []TaskType{
		TaskTypeInventory,
		TaskTypeLoadUnload,
		TaskTypeRescan,
		TaskTypeApplySCST,
		TaskTypeSupportBundle,
	}

	expected := []string{"inventory", "load_unload", "rescan", "apply_scst", "support_bundle"}
	for i, taskType := range types {
		if string(taskType) != expected[i] {
			t.Errorf("TaskType[%d] = %s, expected %s", i, taskType, expected[i])
		}
	}
}

// Note: Full integration tests for task engine would require a test database
// These are unit tests that verify constants and validation logic
// Integration tests should be in a separate test file with database setup
100
backend/internal/tasks/handler.go
Normal file
@@ -0,0 +1,100 @@
package tasks

import (
	"database/sql"
	"encoding/json"
	"net/http"
	"time"

	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/gin-gonic/gin"
	"github.com/google/uuid"
)

// Handler handles task-related requests
type Handler struct {
	db     *database.DB
	logger *logger.Logger
}

// NewHandler creates a new task handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
	return &Handler{
		db:     db,
		logger: log,
	}
}

// Task represents an async task
type Task struct {
	ID           string                 `json:"id"`
	Type         string                 `json:"type"`
	Status       string                 `json:"status"`
	Progress     int                    `json:"progress"`
	Message      string                 `json:"message"`
	ErrorMessage string                 `json:"error_message,omitempty"`
	CreatedBy    string                 `json:"created_by,omitempty"`
	StartedAt    *time.Time             `json:"started_at,omitempty"`
	CompletedAt  *time.Time             `json:"completed_at,omitempty"`
	CreatedAt    time.Time              `json:"created_at"`
	UpdatedAt    time.Time              `json:"updated_at"`
	Metadata     map[string]interface{} `json:"metadata,omitempty"`
}

// GetTask retrieves a task by ID
func (h *Handler) GetTask(c *gin.Context) {
	taskID := c.Param("id")

	// Validate UUID
	if _, err := uuid.Parse(taskID); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid task ID"})
		return
	}

	query := `
		SELECT id, type, status, progress, message, error_message,
		       created_by, started_at, completed_at, created_at, updated_at, metadata
		FROM tasks
		WHERE id = $1
	`

	var task Task
	var errorMsg, createdBy sql.NullString
	var startedAt, completedAt sql.NullTime
	var metadata sql.NullString

	err := h.db.QueryRow(query, taskID).Scan(
		&task.ID, &task.Type, &task.Status, &task.Progress,
		&task.Message, &errorMsg, &createdBy,
		&startedAt, &completedAt, &task.CreatedAt, &task.UpdatedAt, &metadata,
	)
	if err != nil {
		if err == sql.ErrNoRows {
			c.JSON(http.StatusNotFound, gin.H{"error": "task not found"})
			return
		}
		h.logger.Error("Failed to get task", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get task"})
		return
	}

	if errorMsg.Valid {
		task.ErrorMessage = errorMsg.String
	}
	if createdBy.Valid {
		task.CreatedBy = createdBy.String
	}
	if startedAt.Valid {
		task.StartedAt = &startedAt.Time
	}
	if completedAt.Valid {
		task.CompletedAt = &completedAt.Time
	}
	if metadata.Valid && metadata.String != "" {
		json.Unmarshal([]byte(metadata.String), &task.Metadata)
	}

	c.JSON(http.StatusOK, task)
}
85
backend/tests/integration/README.md
Normal file
@@ -0,0 +1,85 @@
# Integration Tests

This directory contains integration tests for the Calypso API.

## Setup

### 1. Create Test Database (Optional)

For isolated testing, create a separate test database:

```bash
sudo -u postgres createdb calypso_test
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE calypso_test TO calypso;"
```

### 2. Environment Variables

Set test database configuration:

```bash
export TEST_DB_HOST=localhost
export TEST_DB_PORT=5432
export TEST_DB_USER=calypso
export TEST_DB_PASSWORD=calypso123
export TEST_DB_NAME=calypso_test  # or use existing 'calypso' database
```

Or use the existing database:

```bash
export TEST_DB_NAME=calypso
export TEST_DB_PASSWORD=calypso123
```

## Running Tests

### Run All Integration Tests

```bash
cd backend
go test ./tests/integration/... -v
```

### Run Specific Test

```bash
go test ./tests/integration/... -run TestHealthEndpoint -v
```

### Run with Coverage

```bash
go test -cover ./tests/integration/... -v
```

## Test Structure

- `setup.go` - Test database setup and helper functions
- `api_test.go` - API endpoint integration tests

## Test Coverage

### ✅ Implemented Tests

1. **TestHealthEndpoint** - Tests enhanced health check endpoint
2. **TestLoginEndpoint** - Tests user login with password verification
3. **TestLoginEndpoint_WrongPassword** - Tests wrong password rejection
4. **TestGetCurrentUser** - Tests authenticated user info retrieval
5. **TestListAlerts** - Tests monitoring alerts endpoint

### ⏳ Future Tests

- Storage endpoints
- SCST endpoints
- VTL endpoints
- Task management
- IAM endpoints

## Notes

- Tests use the actual database (test or production)
- Tests clean up data after each test run
- Tests create test users with proper password hashing
- Tests verify authentication and authorization

309
backend/tests/integration/api_test.go
Normal file
@@ -0,0 +1,309 @@
package integration

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"

	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/atlasos/calypso/internal/common/password"
	"github.com/atlasos/calypso/internal/common/router"
	"github.com/gin-gonic/gin"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMain(m *testing.M) {
	// Set Gin to test mode
	gin.SetMode(gin.TestMode)

	// Run tests
	code := m.Run()

	// Cleanup
	if TestDB != nil {
		TestDB.Close()
	}

	os.Exit(code)
}

func TestHealthEndpoint(t *testing.T) {
	db := SetupTestDB(t)
	defer CleanupTestDB(t)

	cfg := TestConfig
	if cfg == nil {
		cfg = &config.Config{
			Server: config.ServerConfig{
				Port: 8080,
			},
		}
	}

	log := TestLogger
	if log == nil {
		log = logger.NewLogger("test")
	}
	r := router.NewRouter(cfg, db, log)

	req := httptest.NewRequest("GET", "/api/v1/health", nil)
	w := httptest.NewRecorder()
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusOK, w.Code)

	var response map[string]interface{}
	err := json.Unmarshal(w.Body.Bytes(), &response)
	require.NoError(t, err)
	assert.Equal(t, "calypso-api", response["service"])
}

func TestLoginEndpoint(t *testing.T) {
	db := SetupTestDB(t)
	defer CleanupTestDB(t)

	// Create test user
	passwordHash, err := password.HashPassword("testpass123", config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	})
	require.NoError(t, err)

	userID := CreateTestUser(t, "testuser", "test@example.com", passwordHash, false)

	cfg := TestConfig
	if cfg == nil {
		cfg = &config.Config{
			Server: config.ServerConfig{
				Port: 8080,
			},
			Auth: config.AuthConfig{
				JWTSecret:     "test-jwt-secret-key-minimum-32-characters-long",
				TokenLifetime: 24 * 60 * 60 * 1000000000, // 24 hours in nanoseconds
			},
		}
	}

	log := TestLogger
	if log == nil {
		log = logger.NewLogger("test")
	}
	r := router.NewRouter(cfg, db, log)

	// Test login
	loginData := map[string]string{
		"username": "testuser",
		"password": "testpass123",
	}
	jsonData, _ := json.Marshal(loginData)

	req := httptest.NewRequest("POST", "/api/v1/auth/login", bytes.NewBuffer(jsonData))
	req.Header.Set("Content-Type", "application/json")
	w := httptest.NewRecorder()
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusOK, w.Code)

	var response map[string]interface{}
	err = json.Unmarshal(w.Body.Bytes(), &response)
	require.NoError(t, err)
	assert.NotEmpty(t, response["token"])
	assert.Equal(t, userID, response["user"].(map[string]interface{})["id"])
}

func TestLoginEndpoint_WrongPassword(t *testing.T) {
	db := SetupTestDB(t)
	defer CleanupTestDB(t)

	// Create test user
	passwordHash, err := password.HashPassword("testpass123", config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	})
	require.NoError(t, err)

	CreateTestUser(t, "testuser", "test@example.com", passwordHash, false)

	cfg := TestConfig
	if cfg == nil {
		cfg = &config.Config{
			Server: config.ServerConfig{
				Port: 8080,
			},
			Auth: config.AuthConfig{
				JWTSecret:     "test-jwt-secret-key-minimum-32-characters-long",
				TokenLifetime: 24 * 60 * 60 * 1000000000,
			},
		}
	}

	log := TestLogger
	if log == nil {
		log = logger.NewLogger("test")
	}
	r := router.NewRouter(cfg, db, log)

	// Test login with wrong password
	loginData := map[string]string{
		"username": "testuser",
		"password": "wrongpassword",
	}
	jsonData, _ := json.Marshal(loginData)

	req := httptest.NewRequest("POST", "/api/v1/auth/login", bytes.NewBuffer(jsonData))
	req.Header.Set("Content-Type", "application/json")
	w := httptest.NewRecorder()
	r.ServeHTTP(w, req)

	assert.Equal(t, http.StatusUnauthorized, w.Code)
}

func TestGetCurrentUser(t *testing.T) {
	db := SetupTestDB(t)
	defer CleanupTestDB(t)

	// Create test user and get token
	passwordHash, err := password.HashPassword("testpass123", config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	})
	require.NoError(t, err)

	userID := CreateTestUser(t, "testuser", "test@example.com", passwordHash, false)

	cfg := TestConfig
	if cfg == nil {
		cfg = &config.Config{
			Server: config.ServerConfig{
				Port: 8080,
			},
			Auth: config.AuthConfig{
				JWTSecret:     "test-jwt-secret-key-minimum-32-characters-long",
				TokenLifetime: 24 * 60 * 60 * 1000000000,
			},
		}
	}

	log := TestLogger
	if log == nil {
		log = logger.NewLogger("test")
	}

	// Login to get token
	loginData := map[string]string{
		"username": "testuser",
		"password": "testpass123",
	}
	jsonData, _ := json.Marshal(loginData)

	r := router.NewRouter(cfg, db, log)
	req := httptest.NewRequest("POST", "/api/v1/auth/login", bytes.NewBuffer(jsonData))
	req.Header.Set("Content-Type", "application/json")
	w := httptest.NewRecorder()
	r.ServeHTTP(w, req)

	require.Equal(t, http.StatusOK, w.Code)

	var loginResponse map[string]interface{}
	err = json.Unmarshal(w.Body.Bytes(), &loginResponse)
	require.NoError(t, err)
	token := loginResponse["token"].(string)

	// Test /auth/me endpoint (use same router instance)
	req2 := httptest.NewRequest("GET", "/api/v1/auth/me", nil)
	req2.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
	w2 := httptest.NewRecorder()
	r.ServeHTTP(w2, req2)

	assert.Equal(t, http.StatusOK, w2.Code)

	var userResponse map[string]interface{}
	err = json.Unmarshal(w2.Body.Bytes(), &userResponse)
	require.NoError(t, err)
	assert.Equal(t, userID, userResponse["id"])
	assert.Equal(t, "testuser", userResponse["username"])
}

func TestListAlerts(t *testing.T) {
	db := SetupTestDB(t)
	defer CleanupTestDB(t)

	// Create test user and get token
	passwordHash, err := password.HashPassword("testpass123", config.Argon2Params{
		Memory:      64 * 1024,
		Iterations:  3,
		Parallelism: 4,
		SaltLength:  16,
		KeyLength:   32,
	})
	require.NoError(t, err)

	CreateTestUser(t, "testuser", "test@example.com", passwordHash, true) // Admin user

	cfg := TestConfig
	if cfg == nil {
		cfg = &config.Config{
			Server: config.ServerConfig{
				Port: 8080,
			},
			Auth: config.AuthConfig{
				JWTSecret:     "test-jwt-secret-key-minimum-32-characters-long",
				TokenLifetime: 24 * 60 * 60 * 1000000000,
			},
		}
	}

	log := TestLogger
	if log == nil {
		log = logger.NewLogger("test")
	}
	r := router.NewRouter(cfg, db, log)

	// Login to get token
	loginData := map[string]string{
		"username": "testuser",
		"password": "testpass123",
	}
	jsonData, _ := json.Marshal(loginData)

	req := httptest.NewRequest("POST", "/api/v1/auth/login", bytes.NewBuffer(jsonData))
	req.Header.Set("Content-Type", "application/json")
	w := httptest.NewRecorder()
	r.ServeHTTP(w, req)

	require.Equal(t, http.StatusOK, w.Code)

	var loginResponse map[string]interface{}
	err = json.Unmarshal(w.Body.Bytes(), &loginResponse)
	require.NoError(t, err)
	token := loginResponse["token"].(string)

	// Test /monitoring/alerts endpoint (use same router instance)
	req2 := httptest.NewRequest("GET", "/api/v1/monitoring/alerts", nil)
	req2.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
	w2 := httptest.NewRecorder()
	r.ServeHTTP(w2, req2)

	assert.Equal(t, http.StatusOK, w2.Code)

	var alertsResponse map[string]interface{}
	err = json.Unmarshal(w2.Body.Bytes(), &alertsResponse)
	require.NoError(t, err)
	assert.NotNil(t, alertsResponse["alerts"])
}

174
backend/tests/integration/setup.go
Normal file
@@ -0,0 +1,174 @@
package integration

import (
	"context"
	"fmt"
	"os"
	"testing"
	"time"

	_ "github.com/lib/pq"
	"github.com/atlasos/calypso/internal/common/config"
	"github.com/atlasos/calypso/internal/common/database"
	"github.com/atlasos/calypso/internal/common/logger"
	"github.com/google/uuid"
)

// TestDB holds the test database connection
var TestDB *database.DB
var TestConfig *config.Config
var TestLogger *logger.Logger

// SetupTestDB initializes a test database connection
func SetupTestDB(t *testing.T) *database.DB {
	if TestDB != nil {
		return TestDB
	}

	// Get test database configuration from environment
	dbHost := getEnv("TEST_DB_HOST", "localhost")
	dbPort := getEnvInt("TEST_DB_PORT", 5432)
	dbUser := getEnv("TEST_DB_USER", "calypso")
	dbPassword := getEnv("TEST_DB_PASSWORD", "calypso123")
	dbName := getEnv("TEST_DB_NAME", "calypso_test")

	cfg := &config.Config{
		Database: config.DatabaseConfig{
			Host:            dbHost,
			Port:            dbPort,
			User:            dbUser,
			Password:        dbPassword,
			Database:        dbName,
			SSLMode:         "disable",
			MaxConnections:  10,
			MaxIdleConns:    5,
			ConnMaxLifetime: 5 * time.Minute,
		},
	}

	db, err := database.NewConnection(cfg.Database)
	if err != nil {
		t.Fatalf("Failed to connect to test database: %v", err)
	}

	// Run migrations
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	if err := database.RunMigrations(ctx, db); err != nil {
		t.Fatalf("Failed to run migrations: %v", err)
	}

	TestDB = db
	TestConfig = cfg
	if TestLogger == nil {
		TestLogger = logger.NewLogger("test")
	}

	return db
}

// CleanupTestDB cleans up test data
func CleanupTestDB(t *testing.T) {
	if TestDB == nil {
		return
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Clean up test data (but keep schema)
	tables := []string{
		"sessions",
		"audit_log",
		"tasks",
		"alerts",
		"user_roles",
		"role_permissions",
		"users",
		"scst_initiators",
		"scst_luns",
		"scst_targets",
		"disk_repositories",
		"physical_disks",
		"virtual_tapes",
		"virtual_tape_drives",
		"virtual_tape_libraries",
		"physical_tape_slots",
		"physical_tape_drives",
		"physical_tape_libraries",
	}

	for _, table := range tables {
		query := fmt.Sprintf("TRUNCATE TABLE %s CASCADE", table)
		if _, err := TestDB.ExecContext(ctx, query); err != nil {
			// Ignore errors for tables that don't exist
			t.Logf("Warning: Failed to truncate %s: %v", table, err)
		}
	}
}

// CreateTestUser creates a test user in the database
func CreateTestUser(t *testing.T, username, email, passwordHash string, isAdmin bool) string {
	userID := uuid.New().String()

	query := `
		INSERT INTO users (id, username, email, password_hash, full_name, is_active)
		VALUES ($1, $2, $3, $4, $5, $6)
		RETURNING id
	`

	var id string
	err := TestDB.QueryRow(query, userID, username, email, passwordHash, "Test User", true).Scan(&id)
	if err != nil {
		t.Fatalf("Failed to create test user: %v", err)
	}

	// Assign admin role if requested
	if isAdmin {
		roleQuery := `
			INSERT INTO user_roles (user_id, role_id)
			SELECT $1, id FROM roles WHERE name = 'admin'
		`
		if _, err := TestDB.Exec(roleQuery, id); err != nil {
			t.Fatalf("Failed to assign admin role: %v", err)
		}
	} else {
		// Assign operator role for non-admin users (gives monitoring:read permission)
		roleQuery := `
			INSERT INTO user_roles (user_id, role_id)
			SELECT $1, id FROM roles WHERE name = 'operator'
		`
		if _, err := TestDB.Exec(roleQuery, id); err != nil {
			// Operator role might not exist, try readonly
			roleQuery = `
				INSERT INTO user_roles (user_id, role_id)
				SELECT $1, id FROM roles WHERE name = 'readonly'
			`
			if _, err := TestDB.Exec(roleQuery, id); err != nil {
				t.Logf("Warning: Failed to assign role: %v", err)
			}
		}
	}

	return id
}

// Helper functions
func getEnv(key, defaultValue string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return defaultValue
}

func getEnvInt(key string, defaultValue int) int {
	if v := os.Getenv(key); v != "" {
		var result int
		if _, err := fmt.Sscanf(v, "%d", &result); err == nil {
			return result
		}
	}
	return defaultValue
}

1
bacula-config
Symbolic link
@@ -0,0 +1 @@
/etc/bacula

31
deploy/systemd/calypso-api-dev.service
Normal file
@@ -0,0 +1,31 @@
[Unit]
Description=AtlasOS - Calypso API Service (Development)
Documentation=https://github.com/atlasos/calypso
After=network.target postgresql.service
Requires=postgresql.service

[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/development/calypso/backend
ExecStart=/development/calypso/backend/bin/calypso-api -config /development/calypso/backend/config.yaml.example
Restart=always
RestartSec=10
StandardOutput=append:/tmp/backend-api.log
StandardError=append:/tmp/backend-api.log
SyslogIdentifier=calypso-api

# Environment variables
Environment="CALYPSO_DB_PASSWORD=calypso123"
Environment="CALYPSO_JWT_SECRET=test-jwt-secret-key-minimum-32-characters-long"
Environment="CALYPSO_LOG_LEVEL=info"
Environment="CALYPSO_LOG_FORMAT=json"

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

[Install]
WantedBy=multi-user.target

36
deploy/systemd/calypso-api.service
Normal file
@@ -0,0 +1,36 @@
[Unit]
Description=AtlasOS - Calypso API Service
Documentation=https://github.com/atlasos/calypso
After=network.target postgresql.service
Requires=postgresql.service

[Service]
Type=simple
User=calypso
Group=calypso
WorkingDirectory=/opt/calypso/backend
ExecStart=/opt/calypso/backend/bin/calypso-api -config /etc/calypso/config.yaml
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=calypso-api

# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/calypso /var/log/calypso

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Environment
Environment="CALYPSO_LOG_LEVEL=info"
Environment="CALYPSO_LOG_FORMAT=json"

[Install]
WantedBy=multi-user.target

25
deploy/systemd/calypso-logger.service
Normal file
@@ -0,0 +1,25 @@
[Unit]
Description=Calypso Stack Log Aggregator
Documentation=https://github.com/atlasos/calypso
After=network.target
Wants=calypso-api.service calypso-frontend.service

[Service]
Type=simple
# Run as root to access journald and write to /var/syslog
# Format: timestamp [service] message
ExecStart=/bin/bash -c '/usr/bin/journalctl -u calypso-api.service -u calypso-frontend.service -f --no-pager -o short-iso >> /var/syslog/calypso.log 2>&1'
Restart=always
RestartSec=5

# Security hardening
NoNewPrivileges=false
PrivateTmp=true
ReadWritePaths=/var/syslog

# Resource limits
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

788
docs/alpha/CODING-STANDARDS.md
Normal file
@@ -0,0 +1,788 @@
# Coding Standards
## AtlasOS - Calypso Backup Appliance

**Version:** 1.0.0-alpha
**Date:** 2025-01-XX
**Status:** Active

---

## 1. Overview

This document defines the coding standards and best practices for the Calypso project. All code must adhere to these standards to ensure consistency, maintainability, and quality.

## 2. General Principles

### 2.1 Code Quality
- **Readability**: Code should be self-documenting and easy to understand
- **Maintainability**: Code should be easy to modify and extend
- **Consistency**: Follow consistent patterns across the codebase
- **Simplicity**: Prefer simple solutions over complex ones
- **DRY**: Don't Repeat Yourself - avoid code duplication

### 2.2 Code Review
- All code must be reviewed before merging
- Reviewers should check for adherence to these standards
- Address review comments before merging

### 2.3 Documentation
- Document complex logic and algorithms
- Keep comments up-to-date with code changes
- Write clear commit messages

---

## 3. Backend (Go) Standards

### 3.1 Code Formatting

#### 3.1.1 Use gofmt
- Always run `gofmt` before committing
- Use `goimports` for import organization
- Configure IDE to format on save

#### 3.1.2 Line Length
- Maximum line length: 100 characters
- Break long lines for readability

#### 3.1.3 Indentation
- Use tabs for indentation (not spaces)
- Tab width: 4 spaces equivalent

### 3.2 Naming Conventions

#### 3.2.1 Packages
```go
// Good: lowercase, single word, descriptive
package storage
package auth
package monitoring

// Bad: mixed case, abbreviations
package Storage
package Auth
package Mon
```

#### 3.2.2 Functions
```go
// Good: camelCase, descriptive
func createZFSPool(name string) error
func listNetworkInterfaces() ([]Interface, error)
func validateUserInput(input string) error

// Bad: unclear names, abbreviations
func create(name string) error
func list() ([]Interface, error)
func val(input string) error
```

#### 3.2.3 Variables
```go
// Good: camelCase, descriptive
var poolName string
var networkInterfaces []Interface
var isActive bool

// Bad: single letters, unclear
var n string
var ifs []Interface
var a bool
```

#### 3.2.4 Constants
```go
// Good: PascalCase for exported, camelCase for unexported
const DefaultPort = 8080
const maxRetries = 3

// Bad: inconsistent casing
const defaultPort = 8080
const MAX_RETRIES = 3
```

#### 3.2.5 Types and Structs
```go
// Good: PascalCase, descriptive
type ZFSPool struct {
	ID     string
	Name   string
	Status string
}

// Bad: unclear names
type Pool struct {
	I string
	N string
	S string
}
```

### 3.3 File Organization

#### 3.3.1 File Structure
```go
// 1. Package declaration
package storage

// 2. Imports (standard, third-party, local)
import (
	"context"
	"fmt"

	"github.com/gin-gonic/gin"

	"github.com/atlasos/calypso/internal/common/database"
)

// 3. Constants
const (
	defaultTimeout = 30 * time.Second
)

// 4. Types
type Service struct {
	db *database.DB
}

// 5. Functions
func NewService(db *database.DB) *Service {
	return &Service{db: db}
}
```

#### 3.3.2 File Naming
- Use lowercase with underscores: `handler.go`, `service.go`
- Test files: `handler_test.go`
- One main type per file when possible

### 3.4 Error Handling

#### 3.4.1 Error Return
```go
// Good: always return error as last value
func createPool(name string) (*Pool, error) {
	if name == "" {
		return nil, fmt.Errorf("pool name cannot be empty")
	}
	// ...
}

// Bad: panic, no error return
func createPool(name string) *Pool {
	if name == "" {
		panic("pool name cannot be empty")
	}
}
```

#### 3.4.2 Error Wrapping
```go
// Good: wrap errors with context
if err != nil {
	return fmt.Errorf("failed to create pool %s: %w", name, err)
}

// Bad: lose error context
if err != nil {
	return err
}
```

#### 3.4.3 Error Messages
|
||||
```go
|
||||
// Good: clear, actionable error messages
|
||||
return fmt.Errorf("pool '%s' already exists", name)
|
||||
return fmt.Errorf("insufficient disk space: need %d bytes, have %d bytes", needed, available)
|
||||
|
||||
// Bad: unclear error messages
|
||||
return fmt.Errorf("error")
|
||||
return fmt.Errorf("failed")
|
||||
```
|
||||
|
||||
### 3.5 Comments
|
||||
|
||||
#### 3.5.1 Package Comments
|
||||
```go
|
||||
// Package storage provides storage management functionality including
|
||||
// ZFS pool and dataset operations, disk discovery, and storage repository management.
|
||||
package storage
|
||||
```
|
||||
|
||||
#### 3.5.2 Function Comments
|
||||
```go
|
||||
// CreateZFSPool creates a new ZFS pool with the specified configuration.
|
||||
// It validates the pool name, checks disk availability, and creates the pool.
|
||||
// Returns an error if the pool cannot be created.
|
||||
func CreateZFSPool(ctx context.Context, name string, disks []string) error {
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
#### 3.5.3 Inline Comments
|
||||
```go
|
||||
// Good: explain why, not what
|
||||
// Retry up to 3 times to handle transient network errors
|
||||
for i := 0; i < 3; i++ {
|
||||
// ...
|
||||
}
|
||||
|
||||
// Bad: obvious comments
|
||||
// Loop 3 times
|
||||
for i := 0; i < 3; i++ {
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
### 3.6 Testing
|
||||
|
||||
#### 3.6.1 Test File Naming
|
||||
- Test files: `*_test.go`
|
||||
- Test functions: `TestFunctionName`
|
||||
- Benchmark functions: `BenchmarkFunctionName`
|
||||
|
||||
#### 3.6.2 Test Structure
|
||||
```go
|
||||
func TestCreateZFSPool(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
wantErr bool
|
||||
}{
|
||||
{
|
||||
name: "valid pool name",
|
||||
input: "tank",
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "empty pool name",
|
||||
input: "",
|
||||
wantErr: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
err := createPool(tt.input)
|
||||
if (err != nil) != tt.wantErr {
|
||||
t.Errorf("createPool() error = %v, wantErr %v", err, tt.wantErr)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 3.7 Concurrency
|
||||
|
||||
#### 3.7.1 Context Usage
|
||||
```go
|
||||
// Good: always accept context as first parameter
|
||||
func (s *Service) CreatePool(ctx context.Context, name string) error {
|
||||
// Use context for cancellation and timeout
|
||||
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
|
||||
defer cancel()
|
||||
// ...
|
||||
}
|
||||
|
||||
// Bad: no context
|
||||
func (s *Service) CreatePool(name string) error {
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
#### 3.7.2 Goroutines
|
||||
```go
|
||||
// Good: use context for cancellation
|
||||
go func() {
|
||||
ctx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
// ...
|
||||
}()
|
||||
|
||||
// Bad: no cancellation mechanism
|
||||
go func() {
|
||||
// ...
|
||||
}()
|
||||
```
|
||||
|
||||
### 3.8 Database Operations
|
||||
|
||||
#### 3.8.1 Query Context
|
||||
```go
|
||||
// Good: use context for queries
|
||||
rows, err := s.db.QueryContext(ctx, query, args...)
|
||||
|
||||
// Bad: no context
|
||||
rows, err := s.db.Query(query, args...)
|
||||
```
|
||||
|
||||
#### 3.8.2 Transactions
|
||||
```go
|
||||
// Good: use transactions for multiple operations
|
||||
tx, err := s.db.BeginTx(ctx, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer tx.Rollback()
|
||||
|
||||
// ... operations ...
|
||||
|
||||
if err := tx.Commit(); err != nil {
|
||||
return err
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Frontend (TypeScript/React) Standards
|
||||
|
||||
### 4.1 Code Formatting
|
||||
|
||||
#### 4.1.1 Use Prettier
|
||||
- Configure Prettier for consistent formatting
|
||||
- Format on save enabled
|
||||
- Maximum line length: 100 characters
|
||||
|
||||
#### 4.1.2 Indentation
|
||||
- Use 2 spaces for indentation
|
||||
- Consistent spacing in JSX
|
||||
|
||||
### 4.2 Naming Conventions
|
||||
|
||||
#### 4.2.1 Components
|
||||
```typescript
|
||||
// Good: PascalCase, descriptive
|
||||
function StoragePage() { }
|
||||
function CreatePoolModal() { }
|
||||
function NetworkInterfaceCard() { }
|
||||
|
||||
// Bad: unclear names
|
||||
function Page() { }
|
||||
function Modal() { }
|
||||
function Card() { }
|
||||
```
|
||||
|
||||
#### 4.2.2 Functions
|
||||
```typescript
|
||||
// Good: camelCase, descriptive
|
||||
function createZFSPool(name: string): Promise<ZFSPool> { }
|
||||
function handleSubmit(event: React.FormEvent): void { }
|
||||
function formatBytes(bytes: number): string { }
|
||||
|
||||
// Bad: unclear names
|
||||
function create(name: string) { }
|
||||
function handle(e: any) { }
|
||||
function fmt(b: number) { }
|
||||
```

#### 4.2.3 Variables
```typescript
// Good: camelCase, descriptive
const poolName = 'tank'
const networkInterfaces: NetworkInterface[] = []
const isActive = true

// Bad: unclear names
const n = 'tank'
const ifs: any[] = []
const a = true
```

#### 4.2.4 Constants
```typescript
// Good: UPPER_SNAKE_CASE for constants
const DEFAULT_PORT = 8080
const MAX_RETRIES = 3
const API_BASE_URL = '/api/v1'

// Bad: inconsistent casing
const defaultPort = 8080
const maxRetries = 3
```

#### 4.2.5 Types and Interfaces
```typescript
// Good: PascalCase, descriptive
interface ZFSPool {
  id: string
  name: string
  status: string
}

type PoolStatus = 'online' | 'offline' | 'degraded'

// Bad: unclear names
interface Pool {
  i: string
  n: string
  s: string
}
```

### 4.3 File Organization

#### 4.3.1 Component Structure
```typescript
// 1. Imports (React, third-party, local)
import { useEffect, useState } from 'react'
import { useQuery } from '@tanstack/react-query'
import { zfsApi } from '@/api/storage'

// 2. Types/Interfaces
interface Props {
  poolId: string
}

// 3. Component
export default function PoolDetail({ poolId }: Props) {
  // 4. Hooks
  const [isLoading, setIsLoading] = useState(false)

  // 5. Queries
  const { data: pool } = useQuery({
    queryKey: ['pool', poolId],
    queryFn: () => zfsApi.getPool(poolId),
  })

  // 6. Handlers
  const handleDelete = () => {
    // ...
  }

  // 7. Effects
  useEffect(() => {
    // ...
  }, [poolId])

  // 8. Render
  return (
    // JSX
  )
}
```

#### 4.3.2 File Naming
- Components: `PascalCase.tsx` (e.g., `StoragePage.tsx`)
- Utilities: `camelCase.ts` (e.g., `formatBytes.ts`)
- Types: `camelCase.ts` or `types.ts`
- Hooks: `useCamelCase.ts` (e.g., `useStorage.ts`)

### 4.4 TypeScript

#### 4.4.1 Type Safety
```typescript
// Good: explicit types
function createPool(name: string): Promise<ZFSPool> {
  // ...
}

// Bad: any types
function createPool(name: any): any {
  // ...
}
```

#### 4.4.2 Interface Definitions
```typescript
// Good: clear interface definitions
interface ZFSPool {
  id: string
  name: string
  status: 'online' | 'offline' | 'degraded'
  totalCapacityBytes: number
  usedCapacityBytes: number
}

// Bad: unclear or missing types
interface Pool {
  id: any
  name: any
  status: any
}
```

### 4.5 React Patterns

#### 4.5.1 Hooks
```typescript
// Good: custom hooks for reusable logic
function useZFSPool(poolId: string) {
  return useQuery({
    queryKey: ['pool', poolId],
    queryFn: () => zfsApi.getPool(poolId),
  })
}

// Usage
const { data: pool } = useZFSPool(poolId)
```

#### 4.5.2 Component Composition
```typescript
// Good: small, focused components
function PoolCard({ pool }: { pool: ZFSPool }) {
  return (
    <div>
      <PoolHeader pool={pool} />
      <PoolStats pool={pool} />
      <PoolActions pool={pool} />
    </div>
  )
}

// Bad: large, monolithic components
function PoolCard({ pool }: { pool: ZFSPool }) {
  // 500+ lines of JSX
}
```

#### 4.5.3 State Management
```typescript
// Good: use React Query for server state
const { data, isLoading } = useQuery({
  queryKey: ['pools'],
  queryFn: zfsApi.listPools,
})

// Good: use local state for UI state
const [isModalOpen, setIsModalOpen] = useState(false)

// Good: use Zustand for global UI state
const { user, setUser } = useAuthStore()
```

### 4.6 Error Handling

#### 4.6.1 Error Boundaries
```typescript
// Good: use error boundaries
function ErrorBoundary({ children }: { children: React.ReactNode }) {
  // ...
}

// Usage
<ErrorBoundary>
  <App />
</ErrorBoundary>
```

#### 4.6.2 Error Handling in Queries
```typescript
// Good: handle errors in queries
const { data, error, isLoading } = useQuery({
  queryKey: ['pools'],
  queryFn: zfsApi.listPools,
  onError: (error) => {
    console.error('Failed to load pools:', error)
    // Show user-friendly error message
  },
})
```

### 4.7 Styling

#### 4.7.1 TailwindCSS
```typescript
// Good: use Tailwind classes
<div className="flex items-center gap-4 p-6 bg-card-dark rounded-lg border border-border-dark">
  <h2 className="text-lg font-bold text-white">Storage Pools</h2>
</div>

// Bad: inline styles
<div style={{ display: 'flex', padding: '24px', backgroundColor: '#18232e' }}>
  <h2 style={{ fontSize: '18px', fontWeight: 'bold', color: 'white' }}>Storage Pools</h2>
</div>
```

#### 4.7.2 Class Organization
```typescript
// Good: logical grouping
className="flex items-center gap-4 p-6 bg-card-dark rounded-lg border border-border-dark hover:bg-border-dark transition-colors"

// Bad: random order
className="p-6 flex border rounded-lg items-center gap-4 bg-card-dark border-border-dark"
```

### 4.8 Testing

#### 4.8.1 Component Testing
```typescript
// Good: test component behavior
describe('StoragePage', () => {
  it('displays pools when loaded', () => {
    render(<StoragePage />)
    expect(screen.getByText('tank')).toBeInTheDocument()
  })

  it('shows loading state', () => {
    render(<StoragePage />)
    expect(screen.getByText('Loading...')).toBeInTheDocument()
  })
})
```

---

## 5. Git Commit Standards

### 5.1 Commit Message Format
```
<type>(<scope>): <subject>

<body>

<footer>
```

### 5.2 Commit Types
- **feat**: New feature
- **fix**: Bug fix
- **docs**: Documentation changes
- **style**: Code style changes (formatting, etc.)
- **refactor**: Code refactoring
- **test**: Test additions or changes
- **chore**: Build process or auxiliary tool changes
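A commit header following this convention can be checked mechanically, for example from a `commit-msg` hook. A minimal sketch built from the types listed above; the scope character class is an assumption, not a project rule:

```typescript
const COMMIT_TYPES = ['feat', 'fix', 'docs', 'style', 'refactor', 'test', 'chore']

// Matches "<type>(<scope>): <subject>", where "(<scope>)" is optional.
const HEADER_RE = new RegExp(`^(${COMMIT_TYPES.join('|')})(\\([a-z0-9-]+\\))?: .+$`)

function isValidCommitHeader(header: string): boolean {
  return HEADER_RE.test(header)
}
```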

### 5.3 Commit Examples
```
feat(storage): add ZFS pool creation endpoint

Add POST /api/v1/storage/zfs/pools endpoint with validation
and error handling.

Closes #123

fix(shares): correct dataset_id field in create share

The frontend was sending dataset_name instead of dataset_id.
Updated to use UUID from dataset selection.

docs: update API documentation for snapshot endpoints

refactor(auth): simplify JWT token validation logic
```

### 5.4 Branch Naming
- **feature/**: New features (e.g., `feature/object-storage`)
- **fix/**: Bug fixes (e.g., `fix/share-creation-error`)
- **docs/**: Documentation (e.g., `docs/api-documentation`)
- **refactor/**: Refactoring (e.g., `refactor/storage-service`)

---

## 6. Code Review Guidelines

### 6.1 Review Checklist
- [ ] Code follows naming conventions
- [ ] Code is properly formatted
- [ ] Error handling is appropriate
- [ ] Tests are included for new features
- [ ] Documentation is updated
- [ ] No security vulnerabilities
- [ ] Performance considerations addressed
- [ ] No commented-out code
- [ ] No console.log statements (use proper logging)

### 6.2 Review Comments
- Be constructive and respectful
- Explain why, not just what
- Suggest improvements, not just point out issues
- Approve when code meets standards

---

## 7. Documentation Standards

### 7.1 Code Comments
- Document complex logic
- Explain "why", not "what"
- Keep comments up to date

### 7.2 API Documentation
- Document all public APIs
- Include parameter descriptions
- Include return value descriptions
- Include error conditions

### 7.3 README Files
- Keep README files updated
- Include setup instructions
- Include usage examples
- Include troubleshooting tips

---

## 8. Performance Standards

### 8.1 Backend
- Optimize database queries
- Use indexes appropriately
- Avoid N+1 query problems
- Use connection pooling

### 8.2 Frontend
- Minimize re-renders
- Use React.memo for expensive components
- Lazy load routes
- Optimize bundle size
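`React.memo` skips a re-render when a shallow comparison of the previous and next props finds no change. The default comparison can be sketched framework-free; this is a simplified model, not React's exact implementation:

```typescript
// Shallow props comparison: same keys, same top-level values (Object.is semantics).
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  if (Object.is(a, b)) return true
  const aKeys = Object.keys(a)
  const bKeys = Object.keys(b)
  if (aKeys.length !== bKeys.length) return false
  return aKeys.every((k) => Object.prototype.hasOwnProperty.call(b, k) && Object.is(a[k], b[k]))
}
```

Because the comparison is shallow, passing a freshly created object or array literal on every render defeats the memoization.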

---

## 9. Security Standards

### 9.1 Input Validation
- Validate all user inputs
- Sanitize inputs before use
- Use parameterized queries
- Escape output
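As a concrete instance of the first two points, a pool-name check applied before the value ever reaches a shell command or query might look like this. The exact naming rules here are an assumption, loosely modeled on ZFS naming constraints, not the project's actual validator:

```typescript
// Accept names that start with a letter and contain only letters, digits,
// '_', '.', ':' or '-'; shell metacharacters are rejected outright.
const POOL_NAME_RE = /^[a-zA-Z][a-zA-Z0-9_.:-]*$/

function validatePoolName(name: string): string {
  if (name.length === 0 || name.length > 255) {
    throw new Error('pool name must be 1-255 characters')
  }
  if (!POOL_NAME_RE.test(name)) {
    throw new Error('pool name contains invalid characters')
  }
  return name
}
```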

### 9.2 Authentication
- Never store passwords in plaintext
- Use secure token storage
- Implement proper session management
- Handle token expiration

### 9.3 Authorization
- Check permissions on every request
- Apply the principle of least privilege
- Log security events
- Handle authorization errors properly

---

## 10. Tools and Configuration

### 10.1 Backend Tools
- **gofmt**: Code formatting
- **goimports**: Import organization
- **golint**: Linting
- **go vet**: Static analysis

### 10.2 Frontend Tools
- **Prettier**: Code formatting
- **ESLint**: Linting
- **TypeScript**: Type checking
- **Vite**: Build tool

---

## 11. Exceptions

### 11.1 When to Deviate
- Performance-critical code may require optimization
- Legacy code integration may require different patterns
- Third-party library constraints

### 11.2 Documenting Exceptions
- Document why a standard is not followed
- Include comments explaining the deviation
- Review exceptions during code review

---

## Document History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0.0-alpha | 2025-01-XX | Development Team | Initial coding standards document |

---

**New file:** `docs/alpha/Calypso_System_Architecture.md` (102 lines)

# Calypso System Architecture Document

Adastra Storage & Backup Appliance
Version: 1.0 (Dev Release V1)
Status: Baseline Architecture

## 1. Purpose & Scope
This document describes the system architecture of Calypso as an integrated storage and backup appliance. It aligns with the System Requirements Specification (SRS) and the System Design Specification (SDS), and serves as a reference for architects, engineers, operators, and auditors.

## 2. Architectural Principles
- Appliance-first design
- Clear separation of binaries, configuration, and data
- ZFS-native storage architecture
- Upgrade and rollback safety
- Minimal external dependencies

## 3. High-Level Architecture
Calypso operates as a single-node appliance in which the control plane orchestrates the storage, backup, object storage, tape, and iSCSI subsystems through a unified API and UI.

## 4. Deployment Model
- Single-node deployment
- Bare metal or VM (bare metal recommended)
- Linux-based OS (LTS)

## 5. Centralized Filesystem Architecture

### 5.1 Domain Separation
| Domain | Location |
|--------|----------|
| Binaries | /opt/adastra/calypso |
| Configuration | /etc/calypso |
| Data (ZFS) | /srv/calypso |
| Logs | /var/log/calypso |
| Runtime | /var/lib/calypso, /run/calypso |

### 5.2 Binary Layout
```
/opt/adastra/calypso/
  releases/
    1.0.0/
      bin/
      web/
      migrations/
      scripts/
  current -> releases/1.0.0
  third_party/
```

### 5.3 Configuration Layout
```
/etc/calypso/
  calypso.yaml
  secrets.env
  tls/
  integrations/
  system/
```

### 5.4 ZFS Data Layout
```
/srv/calypso/
  db/
  backups/
  object/
  shares/
  vtl/
  iscsi/
  uploads/
  cache/
  _system/
```

## 6. Component Architecture
- Calypso Control Plane (Go-based API)
- ZFS (core storage)
- Bacula (backup)
- MinIO (object storage)
- SCST (iSCSI)
- MHVTL (virtual tape library)

## 7. Data Flow
- User actions are handled by the Calypso API
- Operations are executed on ZFS datasets
- Metadata is stored centrally in ZFS

## 8. Security Baseline
- Service isolation
- Permission-based filesystem access
- Secrets separation
- Controlled subsystem access

## 9. Upgrade & Rollback
- Versioned releases
- Atomic switch via symlink
- Data preserved independently in ZFS

## 10. Non-Goals (V1)
- Multi-node clustering
- Kubernetes orchestration
- Inline malware scanning

## 11. Summary
Calypso provides a clean, upgrade-safe, enterprise-grade appliance architecture that forms a strong foundation for future HA and immutable designs.

---

**New file:** `docs/alpha/README.md` (75 lines)

# AtlasOS - Calypso Documentation

## Alpha Release

This directory contains the Software Requirements Specification (SRS) and Software Design Specification (SDS) documentation for the Calypso backup appliance management system.

## Documentation Structure

### Software Requirements Specification (SRS)
Located in the `srs/` directory:

- **SRS-00-Overview.md**: Overview and introduction
- **SRS-01-Storage-Management.md**: ZFS storage management requirements
- **SRS-02-File-Sharing.md**: SMB/NFS share management requirements
- **SRS-03-iSCSI-Management.md**: iSCSI target management requirements
- **SRS-04-Tape-Library-Management.md**: Physical and VTL management requirements
- **SRS-05-Backup-Management.md**: Bacula/Bareos integration requirements
- **SRS-06-Object-Storage.md**: S3-compatible object storage requirements
- **SRS-07-Snapshot-Replication.md**: ZFS snapshot and replication requirements
- **SRS-08-System-Management.md**: System configuration and management requirements
- **SRS-09-Monitoring-Alerting.md**: Monitoring and alerting requirements
- **SRS-10-IAM.md**: Identity and access management requirements
- **SRS-11-User-Interface.md**: User interface and experience requirements

### Software Design Specification (SDS)
Located in the `sds/` directory:

- **SDS-00-Overview.md**: Design overview and introduction
- **SDS-01-System-Architecture.md**: System architecture and component design
- **SDS-02-Database-Design.md**: Database schema and data models
- **SDS-03-API-Design.md**: REST API design and endpoints
- **SDS-04-Security-Design.md**: Security architecture and implementation
- **SDS-05-Integration-Design.md**: External system integration patterns

### Coding Standards
- **CODING-STANDARDS.md**: Code style, naming conventions, and best practices for Go and TypeScript/React

## Quick Reference

### Features Implemented
1. ✅ Storage Management (ZFS pools, datasets, disks)
2. ✅ File Sharing (SMB/CIFS, NFS)
3. ✅ iSCSI Management (SCST integration)
4. ✅ Tape Library Management (Physical & VTL)
5. ✅ Backup Management (Bacula/Bareos integration)
6. ✅ Object Storage (S3-compatible)
7. ✅ Snapshot & Replication
8. ✅ System Management (Network, Services, NTP, SNMP, License)
9. ✅ Monitoring & Alerting
10. ✅ Identity & Access Management (IAM)
11. ✅ User Interface (React SPA)

### Technology Stack
- **Backend**: Go 1.21+, Gin, PostgreSQL
- **Frontend**: React 18, TypeScript, Vite, TailwindCSS
- **External**: ZFS, SCST, Bacula/Bareos, MHVTL

## Document Status

**Version**: 1.0.0-alpha
**Last Updated**: 2025-01-XX
**Status**: In Development

## Contributing

When updating documentation:
1. Update the relevant SRS or SDS document
2. Update the version and date in the document
3. Update this README if the structure changes
4. Maintain consistency across documents

## Related Documentation

- Implementation guides: `../on-progress/`
- Technical specifications: `../../src/srs-technical-spec-documents/`

---

**New file:** `docs/alpha/sds/SDS-00-Overview.md` (182 lines)

# Software Design Specification (SDS)
## AtlasOS - Calypso Backup Appliance
### Alpha Release

**Version:** 1.0.0-alpha
**Date:** 2025-01-XX
**Status:** In Development

---

## 1. Introduction

### 1.1 Purpose
This document provides a comprehensive Software Design Specification (SDS) for AtlasOS - Calypso, describing the system architecture, component design, database schema, API design, and implementation details.

### 1.2 Scope
This SDS covers:
- System architecture and design patterns
- Component structure and organization
- Database schema and data models
- API design and endpoints
- Security architecture
- Deployment architecture
- Integration patterns

### 1.3 Document Organization
- **SDS-01**: System Architecture
- **SDS-02**: Database Design
- **SDS-03**: API Design
- **SDS-04**: Security Design
- **SDS-05**: Integration Design

---

## 2. System Architecture Overview

### 2.1 High-Level Architecture
Calypso follows a three-tier architecture:
1. **Presentation Layer**: React-based SPA
2. **Application Layer**: Go-based REST API
3. **Data Layer**: PostgreSQL database

### 2.2 Architecture Patterns
- **Clean Architecture**: Separation of concerns, domain-driven design
- **RESTful API**: Resource-based API design
- **Repository Pattern**: Data access abstraction
- **Service Layer**: Business logic encapsulation
- **Middleware Pattern**: Cross-cutting concerns

### 2.3 Technology Stack

#### Backend
- **Language**: Go 1.21+
- **Framework**: Gin web framework
- **Database**: PostgreSQL 14+
- **Authentication**: JWT tokens
- **Logging**: Zerolog structured logging

#### Frontend
- **Framework**: React 18 with TypeScript
- **Build Tool**: Vite
- **Styling**: TailwindCSS
- **State Management**: Zustand + TanStack Query
- **Routing**: React Router
- **HTTP Client**: Axios

---

## 3. Design Principles

### 3.1 Separation of Concerns
- Clear boundaries between layers
- Single responsibility principle
- Dependency inversion

### 3.2 Scalability
- Stateless API design
- Horizontal scaling capability
- Efficient database queries

### 3.3 Security
- Defense in depth
- Principle of least privilege
- Input validation and sanitization

### 3.4 Maintainability
- Clean code principles
- Comprehensive logging
- Error handling
- Code documentation

### 3.5 Performance
- Response caching
- Database query optimization
- Efficient data structures
- Background job processing

---

## 4. System Components

### 4.1 Backend Components
- **Auth**: Authentication and authorization
- **Storage**: ZFS and storage management
- **Shares**: SMB/NFS share management
- **SCST**: iSCSI target management
- **Tape**: Physical and VTL management
- **Backup**: Bacula/Bareos integration
- **System**: System service management
- **Monitoring**: Metrics and alerting
- **IAM**: User and access management

### 4.2 Frontend Components
- **Pages**: Route-based page components
- **Components**: Reusable UI components
- **API**: API client and queries
- **Store**: Global state management
- **Hooks**: Custom React hooks
- **Utils**: Utility functions

---

## 5. Data Flow

### 5.1 Request Flow
1. User action in the frontend
2. API call via Axios
3. Request middleware (auth, logging, rate limiting)
4. Handler processes the request
5. Service layer applies business logic
6. Database operations execute
7. Response returned to the frontend
8. UI updated via React Query

### 5.2 Background Jobs
- Disk monitoring (every 5 minutes)
- ZFS pool monitoring (every 2 minutes)
- Metrics collection (every 30 seconds)
- Alert rule evaluation (continuous)

---

## 6. Deployment Architecture

### 6.1 Single-Server Deployment
- Backend API service (systemd)
- Frontend static files (nginx/caddy)
- PostgreSQL database
- External services (ZFS, SCST, Bacula)

### 6.2 Service Management
- Systemd service files
- Auto-restart on failure
- Log rotation
- Health checks

---

## 7. Future Enhancements

### 7.1 Scalability
- Multi-server deployment
- Load balancing
- Database replication
- Distributed caching

### 7.2 Features
- WebSocket real-time updates
- GraphQL API option
- Microservices architecture
- Container orchestration

---

## Document History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0.0-alpha | 2025-01-XX | Development Team | Initial SDS document |

---

**New file:** `docs/alpha/sds/SDS-01-System-Architecture.md` (302 lines)

# SDS-01: System Architecture

## 1. Architecture Overview

### 1.1 Three-Tier Architecture
```
┌─────────────────────────────────────┐
│         Presentation Layer          │
│            (React SPA)              │
└──────────────┬──────────────────────┘
               │ HTTP/REST
┌──────────────▼──────────────────────┐
│         Application Layer           │
│           (Go REST API)             │
└──────────────┬──────────────────────┘
               │ SQL
┌──────────────▼──────────────────────┐
│            Data Layer               │
│           (PostgreSQL)              │
└─────────────────────────────────────┘
```

### 1.2 Component Layers

#### Backend Layers
1. **Handler Layer**: HTTP request handling, validation
2. **Service Layer**: Business logic, orchestration
3. **Repository Layer**: Data access, database operations
4. **Model Layer**: Data structures, domain models

#### Frontend Layers
1. **Page Layer**: Route-based page components
2. **Component Layer**: Reusable UI components
3. **API Layer**: API client, data fetching
4. **Store Layer**: Global state management

## 2. Backend Architecture

### 2.1 Directory Structure
```
backend/
├── cmd/
│   └── calypso-api/
│       └── main.go
├── internal/
│   ├── auth/
│   ├── storage/
│   ├── shares/
│   ├── scst/
│   ├── tape_physical/
│   ├── tape_vtl/
│   ├── backup/
│   ├── system/
│   ├── monitoring/
│   ├── iam/
│   ├── tasks/
│   └── common/
│       ├── config/
│       ├── database/
│       ├── logger/
│       ├── router/
│       └── cache/
└── db/
    └── migrations/
```

### 2.2 Module Organization
Each module follows this structure:
- **handler.go**: HTTP handlers
- **service.go**: Business logic
- **model.go**: Data models (if needed)
- **repository.go**: Database operations (if needed)

### 2.3 Common Components
- **config**: Configuration management
- **database**: Database connection and migrations
- **logger**: Structured logging
- **router**: HTTP router, middleware
- **cache**: Response caching
- **auth**: Authentication middleware
- **audit**: Audit logging middleware

## 3. Frontend Architecture

### 3.1 Directory Structure
```
frontend/
├── src/
│   ├── pages/
│   ├── components/
│   ├── api/
│   ├── store/
│   ├── hooks/
│   ├── lib/
│   └── App.tsx
└── public/
```

### 3.2 Component Organization
- **pages/**: Route-based page components
- **components/**: Reusable UI components
  - **ui/**: Base UI components (buttons, inputs, etc.)
  - **Layout.tsx**: Main layout component
- **api/**: API client and query definitions
- **store/**: Zustand stores
- **hooks/**: Custom React hooks
- **lib/**: Utility functions

## 4. Request Processing Flow

### 4.1 HTTP Request Flow
```
Client Request
  ↓
CORS Middleware
  ↓
Rate Limiting Middleware
  ↓
Security Headers Middleware
  ↓
Cache Middleware (if enabled)
  ↓
Audit Logging Middleware
  ↓
Authentication Middleware
  ↓
Permission Middleware
  ↓
Handler
  ↓
Service
  ↓
Database
  ↓
Response
```

### 4.2 Error Handling Flow
```
Error Occurred
  ↓
Service Layer Error
  ↓
Handler Error Handling
  ↓
Error Response Formatting
  ↓
HTTP Error Response
  ↓
Frontend Error Handling
  ↓
User Notification
```

## 5. Background Services

### 5.1 Monitoring Services
- **Disk Monitor**: Syncs disk information every 5 minutes
- **ZFS Pool Monitor**: Syncs ZFS pool status every 2 minutes
- **Metrics Service**: Collects system metrics every 30 seconds
- **Alert Rule Engine**: Continuously evaluates alert rules

### 5.2 Event System
- **Event Hub**: Broadcasts events to subscribers
- **Metrics Broadcaster**: Broadcasts metrics to WebSocket clients
- **Alert Service**: Processes alerts and notifications

## 6. Data Flow Patterns

### 6.1 Read Operations
```
Frontend Query
  ↓
API Call
  ↓
Handler
  ↓
Service
  ↓
Database Query
  ↓
Response
  ↓
React Query Cache
  ↓
UI Update
```

### 6.2 Write Operations
```
Frontend Mutation
  ↓
API Call
  ↓
Handler (Validation)
  ↓
Service (Business Logic)
  ↓
Database Transaction
  ↓
Cache Invalidation
  ↓
Response
  ↓
React Query Invalidation
  ↓
UI Update
```
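The two invalidation steps in the write path pair each write with removal of the read-side cache entries it may have made stale. React Query matches invalidation by query-key prefix; the idea can be sketched framework-free as follows (a simplified model for illustration, not the library's implementation):

```typescript
// Minimal query cache keyed by hierarchical keys such as ['pool', poolId].
class QueryCache {
  private entries = new Map<string, unknown>()

  set(key: string[], data: unknown): void {
    this.entries.set(JSON.stringify(key), data)
  }

  get(key: string[]): unknown {
    return this.entries.get(JSON.stringify(key))
  }

  // Drop every entry whose key starts with the given prefix,
  // e.g. invalidate(['pool']) removes ['pool'] and ['pool', 'tank'].
  invalidate(prefix: string[]): void {
    for (const raw of [...this.entries.keys()]) {
      const key = JSON.parse(raw) as string[]
      if (prefix.every((part, i) => key[i] === part)) {
        this.entries.delete(raw)
      }
    }
  }
}
```

Prefix matching is why hierarchical keys like `['pool', poolId]` pay off: one invalidation after a pool mutation refreshes both the list view and every detail view.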

## 7. Integration Points

### 7.1 External System Integrations
- **ZFS**: Command-line tools (`zpool`, `zfs`)
- **SCST**: Configuration files and commands
- **Bacula/Bareos**: Database and `bconsole` commands
- **MHVTL**: Configuration and control
- **Systemd**: Service management

### 7.2 Integration Patterns
- **Command Execution**: Execute system commands
- **File Operations**: Read/write configuration files
- **Database Access**: Direct database queries (Bacula)
- **API Calls**: HTTP API calls (future)

## 8. Security Architecture

### 8.1 Authentication Flow
```
Login Request
  ↓
Credential Validation
  ↓
JWT Token Generation
  ↓
Token Response
  ↓
Token Storage (Frontend)
  ↓
Token in Request Headers
  ↓
Token Validation (Middleware)
  ↓
Request Processing
```
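The "Token in Request Headers" step is typically implemented once in a request interceptor rather than at every call site. What the interceptor does to each outgoing request can be sketched as a pure function; the bearer scheme is standard, but the exact header handling here is an illustrative assumption:

```typescript
// Attach a bearer token to an outgoing request's headers, if one is present.
function withAuthHeader(
  headers: Record<string, string>,
  token: string | null,
): Record<string, string> {
  if (!token) return headers
  return { ...headers, Authorization: `Bearer ${token}` }
}
```

With Axios this logic would live in `axios.interceptors.request.use`, so page components and API modules never touch the token directly.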

### 8.2 Authorization Flow
```
Authenticated Request
  ↓
User Role Retrieval
  ↓
Permission Check
  ↓
Resource Access Check
  ↓
Request Processing or Denial
```

## 9. Caching Strategy

### 9.1 Response Caching
- **Cacheable Endpoints**: GET requests only
- **Cache Keys**: Based on URL and query parameters
- **TTL**: Configurable per endpoint
- **Invalidation**: On write operations

### 9.2 Frontend Caching
- **React Query**: Automatic caching and invalidation
- **Stale Time**: 5 minutes default
- **Cache Time**: 30 minutes default
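Both cache layers reduce to the same timestamp check: an entry is served only while its age is below the configured lifetime. A minimal sketch of that check (names are illustrative, not the project's types):

```typescript
interface CacheEntry<T> {
  data: T
  fetchedAt: number // epoch milliseconds
}

// True when the entry has outlived its time-to-live and should be refetched.
function isStale<T>(entry: CacheEntry<T>, ttlMs: number, now: number = Date.now()): boolean {
  return now - entry.fetchedAt > ttlMs
}

// The 5-minute frontend default noted above.
const STALE_TIME_MS = 5 * 60 * 1000
```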

## 10. Logging Architecture

### 10.1 Log Levels
- **DEBUG**: Detailed debugging information
- **INFO**: General informational messages
- **WARN**: Warning messages
- **ERROR**: Error messages

### 10.2 Log Structure
- **Structured Logging**: JSON format
- **Fields**: Timestamp, level, message, context
- **Audit Logs**: Separate audit log table

## 11. Error Handling Architecture

### 11.1 Error Types
- **Validation Errors**: 400 Bad Request
- **Authentication Errors**: 401 Unauthorized
- **Authorization Errors**: 403 Forbidden
- **Not Found Errors**: 404 Not Found
- **Server Errors**: 500 Internal Server Error

### 11.2 Error Response Format
```json
{
  "error": "Error message",
  "code": "ERROR_CODE",
  "details": {}
}
```
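On the frontend, this format maps naturally onto a typed guard so error branches can narrow an unknown response body safely. A sketch assuming the JSON shape above:

```typescript
interface ApiErrorResponse {
  error: string
  code: string
  details: Record<string, unknown>
}

// Narrow an unknown response body to ApiErrorResponse.
function isApiError(body: unknown): body is ApiErrorResponse {
  if (typeof body !== 'object' || body === null) return false
  const b = body as Record<string, unknown>
  return typeof b.error === 'string' && typeof b.code === 'string'
}
```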

---

**New file:** `docs/alpha/sds/SDS-02-Database-Design.md` (385 lines)

# SDS-02: Database Design

## 1. Database Overview

### 1.1 Database System
- **Type**: PostgreSQL 14+
- **Encoding**: UTF-8
- **Connection Pooling**: pgxpool
- **Migrations**: Custom migration system

### 1.2 Database Schema Organization
- **Tables**: Organized by domain (users, storage, shares, etc.)
- **Indexes**: Performance indexes on foreign keys and frequently queried columns
- **Constraints**: Foreign keys, unique constraints, check constraints

## 2. Core Tables

### 2.1 Users & Authentication
```sql
users (
    id UUID PRIMARY KEY,
    username VARCHAR(255) UNIQUE NOT NULL,
    email VARCHAR(255) UNIQUE,
    password_hash VARCHAR(255) NOT NULL,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
)

roles (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    description TEXT,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
)

permissions (
    id UUID PRIMARY KEY,
    resource VARCHAR(255) NOT NULL,
    action VARCHAR(255) NOT NULL,
    description TEXT,
    UNIQUE(resource, action)
)

user_roles (
    user_id UUID REFERENCES users(id),
    role_id UUID REFERENCES roles(id),
    PRIMARY KEY (user_id, role_id)
)

role_permissions (
    role_id UUID REFERENCES roles(id),
    permission_id UUID REFERENCES permissions(id),
    PRIMARY KEY (role_id, permission_id)
)

groups (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    description TEXT,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
)

user_groups (
    user_id UUID REFERENCES users(id),
    group_id UUID REFERENCES groups(id),
    PRIMARY KEY (user_id, group_id)
)
```

### 2.2 Storage Tables
```sql
zfs_pools (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    description TEXT,
    raid_level VARCHAR(50),
    status VARCHAR(50),
    total_capacity_bytes BIGINT,
    used_capacity_bytes BIGINT,
    health_status VARCHAR(50),
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    created_by UUID REFERENCES users(id)
)

zfs_datasets (
    id UUID PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    pool_name VARCHAR(255) REFERENCES zfs_pools(name),
    type VARCHAR(50),
    mount_point VARCHAR(255),
    used_bytes BIGINT,
    available_bytes BIGINT,
    compression VARCHAR(50),
    quota BIGINT,
    reservation BIGINT,
    created_at TIMESTAMP,
    UNIQUE(pool_name, name)
)

physical_disks (
    id UUID PRIMARY KEY,
    device_path VARCHAR(255) UNIQUE NOT NULL,
    vendor VARCHAR(255),
    model VARCHAR(255),
    serial_number VARCHAR(255),
    size_bytes BIGINT,
    type VARCHAR(50),
    status VARCHAR(50),
    last_synced_at TIMESTAMP
)

storage_repositories (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    type VARCHAR(50),
    path VARCHAR(255),
    capacity_bytes BIGINT,
    used_bytes BIGINT,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
)
```

### 2.3 Shares Tables
```sql
shares (
    id UUID PRIMARY KEY,
    dataset_id UUID REFERENCES zfs_datasets(id),
    share_type VARCHAR(50),
    nfs_enabled BOOLEAN DEFAULT false,
    nfs_options TEXT,
    nfs_clients TEXT[],
    smb_enabled BOOLEAN DEFAULT false,
    smb_share_name VARCHAR(255),
    smb_path VARCHAR(255),
    smb_comment TEXT,
    smb_guest_ok BOOLEAN DEFAULT false,
    smb_read_only BOOLEAN DEFAULT false,
    smb_browseable BOOLEAN DEFAULT true,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    created_by UUID REFERENCES users(id)
)
```

### 2.4 iSCSI Tables
```sql
iscsi_targets (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    alias VARCHAR(255),
    enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    created_by UUID REFERENCES users(id)
)

iscsi_luns (
    id UUID PRIMARY KEY,
    target_id UUID REFERENCES iscsi_targets(id),
    lun_number INTEGER,
    device_path VARCHAR(255),
    size_bytes BIGINT,
    created_at TIMESTAMP
)

iscsi_initiators (
    id UUID PRIMARY KEY,
    iqn VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP
)

target_initiators (
    target_id UUID REFERENCES iscsi_targets(id),
    initiator_id UUID REFERENCES iscsi_initiators(id),
    PRIMARY KEY (target_id, initiator_id)
)
```

### 2.5 Tape Tables
```sql
vtl_libraries (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    vendor VARCHAR(255),
    model VARCHAR(255),
    drive_count INTEGER,
    slot_count INTEGER,
    status VARCHAR(50),
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    created_by UUID REFERENCES users(id)
)

physical_libraries (
    id UUID PRIMARY KEY,
    vendor VARCHAR(255),
    model VARCHAR(255),
    serial_number VARCHAR(255),
    drive_count INTEGER,
    slot_count INTEGER,
    discovered_at TIMESTAMP
)
```

### 2.6 Backup Tables
```sql
backup_jobs (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    client_id INTEGER,
    fileset_id INTEGER,
    schedule VARCHAR(255),
    storage_pool_id INTEGER,
    enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    created_by UUID REFERENCES users(id)
)
```

### 2.7 Monitoring Tables
```sql
alerts (
    id UUID PRIMARY KEY,
    rule_id UUID,
    severity VARCHAR(50),
    source VARCHAR(255),
    message TEXT,
    status VARCHAR(50),
    acknowledged_at TIMESTAMP,
    resolved_at TIMESTAMP,
    created_at TIMESTAMP
)

alert_rules (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    description TEXT,
    source VARCHAR(255),
    condition_type VARCHAR(255),
    condition_config JSONB,
    severity VARCHAR(50),
    enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMP,
    updated_at TIMESTAMP
)
```

### 2.8 Audit Tables
```sql
audit_logs (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    action VARCHAR(255),
    resource_type VARCHAR(255),
    resource_id VARCHAR(255),
    method VARCHAR(10),
    path VARCHAR(255),
    ip_address VARCHAR(45),
    user_agent TEXT,
    request_body JSONB,
    response_status INTEGER,
    created_at TIMESTAMP
)
```

### 2.9 Task Tables
```sql
tasks (
    id UUID PRIMARY KEY,
    type VARCHAR(255),
    status VARCHAR(50),
    progress INTEGER,
    result JSONB,
    error_message TEXT,
    created_at TIMESTAMP,
    updated_at TIMESTAMP,
    completed_at TIMESTAMP,
    created_by UUID REFERENCES users(id)
)
```

## 3. Indexes

### 3.1 Performance Indexes
```sql
-- Users
CREATE INDEX idx_users_username ON users(username);
CREATE INDEX idx_users_email ON users(email);

-- Storage
CREATE INDEX idx_zfs_pools_name ON zfs_pools(name);
CREATE INDEX idx_zfs_datasets_pool_name ON zfs_datasets(pool_name);

-- Shares
CREATE INDEX idx_shares_dataset_id ON shares(dataset_id);
CREATE INDEX idx_shares_created_by ON shares(created_by);

-- iSCSI
CREATE INDEX idx_iscsi_targets_name ON iscsi_targets(name);
CREATE INDEX idx_iscsi_luns_target_id ON iscsi_luns(target_id);

-- Monitoring
CREATE INDEX idx_alerts_status ON alerts(status);
CREATE INDEX idx_alerts_created_at ON alerts(created_at);
CREATE INDEX idx_audit_logs_user_id ON audit_logs(user_id);
CREATE INDEX idx_audit_logs_created_at ON audit_logs(created_at);
```

## 4. Migrations

### 4.1 Migration System
- **Location**: `db/migrations/`
- **Naming**: `NNN_description.sql`
- **Execution**: Sequential execution on startup
- **Version Tracking**: `schema_migrations` table

### 4.2 Migration Files
- `001_initial_schema.sql`: Core tables
- `002_storage_and_tape_schema.sql`: Storage and tape tables
- `003_performance_indexes.sql`: Performance indexes
- `004_add_zfs_pools_table.sql`: ZFS pools
- `005_add_zfs_datasets_table.sql`: ZFS datasets
- `006_add_zfs_shares_and_iscsi.sql`: Shares and iSCSI
- `007_add_vendor_to_vtl_libraries.sql`: VTL updates
- `008_add_user_groups.sql`: User groups
- `009_backup_jobs_schema.sql`: Backup jobs
- `010_add_backup_permissions.sql`: Backup permissions
- `011_sync_bacula_jobs_function.sql`: Bacula sync function
## 5. Data Relationships

### 5.1 Entity Relationships
- **Users** → **Roles** (many-to-many)
- **Roles** → **Permissions** (many-to-many)
- **Users** → **Groups** (many-to-many)
- **ZFS Pools** → **ZFS Datasets** (one-to-many)
- **ZFS Datasets** → **Shares** (one-to-many)
- **iSCSI Targets** → **LUNs** (one-to-many)
- **iSCSI Targets** → **Initiators** (many-to-many)

## 6. Data Integrity

### 6.1 Constraints
- **Primary Keys**: UUID primary keys for all tables
- **Foreign Keys**: Referential integrity
- **Unique Constraints**: Unique usernames, emails, names
- **Check Constraints**: Valid status values, positive numbers

### 6.2 Cascading Rules
- **ON DELETE CASCADE**: Child records deleted with parent
- **ON DELETE RESTRICT**: Prevent deletion if referenced
- **ON UPDATE CASCADE**: Update foreign keys on parent update

## 7. Query Optimization

### 7.1 Query Patterns
- **Eager Loading**: Join related data when needed
- **Pagination**: Limit and offset for large datasets
- **Filtering**: WHERE clauses for filtering
- **Sorting**: ORDER BY for sorted results

### 7.2 Caching Strategy
- **Query Result Caching**: Cache frequently accessed queries
- **Cache Invalidation**: Invalidate on write operations
- **TTL**: Time-to-live for cached data
## 8. Backup & Recovery

### 8.1 Backup Strategy
- **Regular Backups**: Daily database backups
- **Point-in-Time Recovery**: WAL archiving
- **Backup Retention**: 30 days retention

### 8.2 Recovery Procedures
- **Full Restore**: Restore from backup
- **Point-in-Time**: Restore to specific timestamp
- **Selective Restore**: Restore specific tables
---

**docs/alpha/sds/SDS-03-API-Design.md** (new file, 286 lines)
# SDS-03: API Design

## 1. API Overview

### 1.1 API Style
- **RESTful**: Resource-based API design
- **Versioning**: `/api/v1/` prefix
- **Content-Type**: `application/json`
- **Authentication**: JWT Bearer tokens

### 1.2 API Base URL
```
http://localhost:8080/api/v1
```

## 2. Authentication API

### 2.1 Endpoints
```
POST /auth/login
POST /auth/logout
GET  /auth/me
```

### 2.2 Request/Response Examples

#### Login
```http
POST /api/v1/auth/login
Content-Type: application/json

{
  "username": "admin",
  "password": "password"
}

Response: 200 OK
{
  "token": "eyJhbGciOiJIUzI1NiIs...",
  "user": {
    "id": "uuid",
    "username": "admin",
    "email": "admin@example.com",
    "roles": ["admin"]
  }
}
```

#### Get Current User
```http
GET /api/v1/auth/me
Authorization: Bearer <token>

Response: 200 OK
{
  "id": "uuid",
  "username": "admin",
  "email": "admin@example.com",
  "roles": ["admin"],
  "permissions": ["storage:read", "storage:write", ...]
}
```

## 3. Storage API

### 3.1 ZFS Pools
```
GET    /storage/zfs/pools
GET    /storage/zfs/pools/:id
POST   /storage/zfs/pools
DELETE /storage/zfs/pools/:id
POST   /storage/zfs/pools/:id/spare
```

### 3.2 ZFS Datasets
```
GET    /storage/zfs/pools/:id/datasets
POST   /storage/zfs/pools/:id/datasets
DELETE /storage/zfs/pools/:id/datasets/:dataset
```

### 3.3 Request/Response Examples

#### Create ZFS Pool
```http
POST /api/v1/storage/zfs/pools
Content-Type: application/json

{
  "name": "tank",
  "raid_level": "mirror",
  "disks": ["/dev/sdb", "/dev/sdc"],
  "compression": "lz4",
  "deduplication": false
}

Response: 201 Created
{
  "id": "uuid",
  "name": "tank",
  "status": "online",
  "total_capacity_bytes": 1000000000000,
  "created_at": "2025-01-01T00:00:00Z"
}
```

## 4. Shares API

### 4.1 Endpoints
```
GET    /shares
GET    /shares/:id
POST   /shares
PUT    /shares/:id
DELETE /shares/:id
```

### 4.2 Request/Response Examples

#### Create Share
```http
POST /api/v1/shares
Content-Type: application/json

{
  "dataset_id": "uuid",
  "share_type": "both",
  "nfs_enabled": true,
  "nfs_clients": ["192.168.1.0/24"],
  "smb_enabled": true,
  "smb_share_name": "shared",
  "smb_path": "/mnt/tank/shared",
  "smb_guest_ok": false,
  "smb_read_only": false
}

Response: 201 Created
{
  "id": "uuid",
  "dataset_id": "uuid",
  "share_type": "both",
  "nfs_enabled": true,
  "smb_enabled": true,
  "created_at": "2025-01-01T00:00:00Z"
}
```

## 5. iSCSI API

### 5.1 Endpoints
```
GET    /scst/targets
GET    /scst/targets/:id
POST   /scst/targets
DELETE /scst/targets/:id
POST   /scst/targets/:id/luns
DELETE /scst/targets/:id/luns/:lunId
POST   /scst/targets/:id/initiators
GET    /scst/initiators
POST   /scst/config/apply
```

## 6. System API

### 6.1 Endpoints
```
GET  /system/services
GET  /system/services/:name
POST /system/services/:name/restart
GET  /system/services/:name/logs
GET  /system/interfaces
PUT  /system/interfaces/:name
GET  /system/ntp
POST /system/ntp
GET  /system/logs
GET  /system/network/throughput
POST /system/execute
POST /system/support-bundle
```

## 7. Monitoring API

### 7.1 Endpoints
```
GET  /monitoring/metrics
GET  /monitoring/health
GET  /monitoring/alerts
GET  /monitoring/alerts/:id
POST /monitoring/alerts/:id/acknowledge
POST /monitoring/alerts/:id/resolve
GET  /monitoring/rules
POST /monitoring/rules
```

## 8. IAM API

### 8.1 Endpoints
```
GET    /iam/users
GET    /iam/users/:id
POST   /iam/users
PUT    /iam/users/:id
DELETE /iam/users/:id

GET    /iam/roles
GET    /iam/roles/:id
POST   /iam/roles
PUT    /iam/roles/:id
DELETE /iam/roles/:id

GET    /iam/permissions
GET    /iam/groups
```

## 9. Error Responses

### 9.1 Error Format
```json
{
  "error": "Error message",
  "code": "ERROR_CODE",
  "details": {
    "field": "validation error"
  }
}
```

### 9.2 HTTP Status Codes
- **200 OK**: Success
- **201 Created**: Resource created
- **400 Bad Request**: Validation error
- **401 Unauthorized**: Authentication required
- **403 Forbidden**: Permission denied
- **404 Not Found**: Resource not found
- **500 Internal Server Error**: Server error

## 10. Pagination

### 10.1 Pagination Parameters
```
GET /api/v1/resource?page=1&limit=20
```

### 10.2 Pagination Response
```json
{
  "data": [...],
  "pagination": {
    "page": 1,
    "limit": 20,
    "total": 100,
    "total_pages": 5
  }
}
```
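The `total_pages` field in the response above is derived from `total` and `limit` by ceiling division. A small sketch of that computation (the `Pagination` type and constructor names are illustrative):

```go
package main

import "fmt"

// Pagination mirrors the pagination object in the response above.
type Pagination struct {
	Page       int `json:"page"`
	Limit      int `json:"limit"`
	Total      int `json:"total"`
	TotalPages int `json:"total_pages"`
}

// newPagination derives total_pages via integer ceiling division,
// so a partial last page still counts as a page.
func newPagination(page, limit, total int) Pagination {
	pages := (total + limit - 1) / limit
	return Pagination{Page: page, Limit: limit, Total: total, TotalPages: pages}
}

func main() {
	fmt.Printf("%+v\n", newPagination(1, 20, 100))
}
```

With `total=100, limit=20` this yields `total_pages=5`, matching the example response.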
## 11. Filtering & Sorting

### 11.1 Filtering
```
GET /api/v1/resource?status=active&type=filesystem
```

### 11.2 Sorting
```
GET /api/v1/resource?sort=name&order=asc
```
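Because column names cannot be bound as query parameters, the `sort` parameter has to be validated against a whitelist before it reaches SQL (this is the same concern as the SQL-injection rules in SDS-04). A sketch of that check, with illustrative names:

```go
package main

import "fmt"

// orderByClause validates the sort/order query parameters against a
// whitelist of known column names before building the ORDER BY clause.
// Identifiers cannot be parameterized, so whitelisting is the safe option;
// filter *values* should still go through parameterized queries.
func orderByClause(sortCol, order string, allowed map[string]bool) (string, error) {
	if !allowed[sortCol] {
		return "", fmt.Errorf("unsupported sort column: %q", sortCol)
	}
	dir := "ASC"
	if order == "desc" {
		dir = "DESC"
	}
	return "ORDER BY " + sortCol + " " + dir, nil
}

func main() {
	allowed := map[string]bool{"name": true, "created_at": true}
	clause, err := orderByClause("name", "asc", allowed)
	fmt.Println(clause, err)
	_, err = orderByClause("1; DROP TABLE users", "asc", allowed)
	fmt.Println(err != nil)
}
```

Anything not in the whitelist, including injection attempts, is rejected before any SQL is assembled.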
## 12. Rate Limiting

### 12.1 Rate Limits
- **Default**: 100 requests per minute per IP
- **Authenticated**: 200 requests per minute per user
- **Headers**: `X-RateLimit-Limit`, `X-RateLimit-Remaining`

## 13. Caching

### 13.1 Cache Headers
- **Cache-Control**: `max-age=300` for GET requests
- **ETag**: Entity tags for cache validation
- **Last-Modified**: Last modification time

### 13.2 Cache Invalidation
- **On Write**: Invalidate related cache entries
- **Manual**: Clear cache via admin endpoint
---

**docs/alpha/sds/SDS-04-Security-Design.md** (new file, 224 lines)
# SDS-04: Security Design

## 1. Security Overview

### 1.1 Security Principles
- **Defense in Depth**: Multiple layers of security
- **Principle of Least Privilege**: Minimum required permissions
- **Secure by Default**: Secure default configurations
- **Input Validation**: Validate all inputs
- **Output Encoding**: Encode all outputs

## 2. Authentication

### 2.1 Authentication Method
- **JWT Tokens**: JSON Web Tokens for stateless authentication
- **Token Expiration**: Configurable expiration time
- **Token Refresh**: Refresh token mechanism (future)

### 2.2 Password Security
- **Hashing**: bcrypt with cost factor 10
- **Password Requirements**: Minimum length and complexity rules
- **Password Storage**: Hashed passwords only, never plaintext

### 2.3 Session Management
- **Stateless**: No server-side session storage
- **Token Storage**: Secure storage in the frontend (localStorage/sessionStorage)
- **Token Validation**: Validate on every request

## 3. Authorization

### 3.1 Role-Based Access Control (RBAC)
- **Roles**: Admin, Operator, ReadOnly
- **Permissions**: Resource-based permissions (storage:read, storage:write)
- **Role Assignment**: Users are assigned to roles
- **Permission Inheritance**: Permissions are inherited from roles

### 3.2 Permission Model
```
Resource:Action

Examples:
- storage:read
- storage:write
- iscsi:read
- iscsi:write
- backup:read
- backup:write
- system:read
- system:write
```
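A check against this `resource:action` model is a simple string comparison over the user's flattened permission list. A sketch (the function name is illustrative; the `resource:*` wildcard convention is an assumption shown for illustration, not something the model above specifies):

```go
package main

import (
	"fmt"
	"strings"
)

// hasPermission checks a "resource:action" permission against the flat
// permission list collected from a user's roles.
func hasPermission(granted []string, resource, action string) bool {
	want := resource + ":" + action
	for _, p := range granted {
		if p == want {
			return true
		}
		// Hypothetical wildcard convention ("iscsi:*" grants every
		// action on iscsi) -- an assumption, not part of the model above.
		if strings.HasSuffix(p, ":*") && strings.TrimSuffix(p, "*") == resource+":" {
			return true
		}
	}
	return false
}

func main() {
	perms := []string{"storage:read", "iscsi:*"}
	fmt.Println(hasPermission(perms, "storage", "read"))  // exact match
	fmt.Println(hasPermission(perms, "storage", "write")) // not granted
	fmt.Println(hasPermission(perms, "iscsi", "write"))   // wildcard
}
```

The middleware described in §3.3 would run this check with the permissions loaded for the authenticated user.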
### 3.3 Permission Checking
- **Middleware**: Permission middleware checks on protected routes
- **Handler Level**: Additional checks in handlers if needed
- **Service Level**: Business logic permission checks

## 4. Input Validation

### 4.1 Validation Layers
1. **Frontend**: Client-side validation
2. **Handler**: Request validation
3. **Service**: Business logic validation
4. **Database**: Constraint validation

### 4.2 Validation Rules
- **Required Fields**: Check for required fields
- **Type Validation**: Validate data types
- **Format Validation**: Validate formats (email, IP, etc.)
- **Range Validation**: Validate numeric ranges
- **Length Validation**: Validate string lengths

### 4.3 SQL Injection Prevention
- **Parameterized Queries**: Use parameterized queries only
- **No String Concatenation**: Never concatenate SQL strings
- **Input Sanitization**: Sanitize all inputs

## 5. Output Encoding

### 5.1 XSS Prevention
- **HTML Encoding**: Encode HTML in responses
- **JSON Encoding**: Proper JSON encoding
- **Content Security Policy**: CSP headers

### 5.2 Response Headers
```
Content-Security-Policy: default-src 'self'
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
```

## 6. HTTPS & TLS

### 6.1 TLS Configuration
- **TLS Version**: TLS 1.2 minimum
- **Cipher Suites**: Strong cipher suites only
- **Certificate**: Valid SSL certificate

### 6.2 HTTPS Enforcement
- **Redirect HTTP to HTTPS**: Force HTTPS
- **HSTS**: HTTP Strict Transport Security

## 7. Rate Limiting

### 7.1 Rate Limit Strategy
- **IP-Based**: Rate limit by IP address
- **User-Based**: Rate limit by authenticated user
- **Endpoint-Based**: Different limits per endpoint

### 7.2 Rate Limit Configuration
- **Default**: 100 requests/minute
- **Authenticated**: 200 requests/minute
- **Strict Endpoints**: Lower limits for sensitive endpoints
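The per-key, per-minute limits above can be sketched as a fixed-window counter (the simplest scheme; token buckets or sliding windows are common refinements). All names here are illustrative, and a production version would need locking:

```go
package main

import (
	"fmt"
	"time"
)

// window tracks request counts for one client key (IP or user id)
// within the current one-minute window.
type window struct {
	start time.Time
	count int
}

type rateLimiter struct {
	limit   int
	clients map[string]*window
	now     func() time.Time // injected clock for testability
}

func newRateLimiter(limit int) *rateLimiter {
	return &rateLimiter{limit: limit, clients: map[string]*window{}, now: time.Now}
}

// Allow reports whether the request fits the per-minute budget and
// returns the remaining quota (for the X-RateLimit-Remaining header).
func (r *rateLimiter) Allow(key string) (bool, int) {
	w, ok := r.clients[key]
	if !ok || r.now().Sub(w.start) >= time.Minute {
		w = &window{start: r.now()}
		r.clients[key] = w
	}
	if w.count >= r.limit {
		return false, 0
	}
	w.count++
	return true, r.limit - w.count
}

func main() {
	rl := newRateLimiter(2)
	for i := 0; i < 3; i++ {
		ok, rem := rl.Allow("192.168.1.100")
		fmt.Println(ok, rem)
	}
}
```

With a limit of 2, the third request in the same window is rejected; a rejected request would be answered with 429 and `X-RateLimit-Remaining: 0`.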
## 8. Audit Logging

### 8.1 Audit Events
- **Authentication**: Login, logout, failed login
- **Authorization**: Permission denied events
- **Data Access**: Read operations (configurable)
- **Data Modification**: Create, update, delete operations
- **System Actions**: System configuration changes

### 8.2 Audit Log Format
```json
{
  "id": "uuid",
  "user_id": "uuid",
  "action": "CREATE_SHARE",
  "resource_type": "share",
  "resource_id": "uuid",
  "method": "POST",
  "path": "/api/v1/shares",
  "ip_address": "192.168.1.100",
  "user_agent": "Mozilla/5.0...",
  "request_body": {...},
  "response_status": 201,
  "created_at": "2025-01-01T00:00:00Z"
}
```

## 9. Error Handling

### 9.1 Error Information
- **Public Errors**: Safe error messages for users
- **Private Errors**: Detailed errors in logs only
- **No Stack Traces**: Never expose stack traces to users

### 9.2 Error Logging
- **Log All Errors**: Log all errors with context
- **Sensitive Data**: Never log passwords, tokens, or secrets
- **Error Tracking**: Track error patterns
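Because the audit trail stores `request_body`, the "never log passwords, tokens, secrets" rule implies redacting those fields before writing the record. A sketch, assuming a top-level field blacklist (the field list and function name are illustrative; nested bodies would need a recursive walk):

```go
package main

import "fmt"

// redactSensitive replaces fields that must never reach the logs with a
// placeholder before the request body is written to the audit trail.
func redactSensitive(body map[string]interface{}) map[string]interface{} {
	sensitive := map[string]bool{"password": true, "token": true, "secret": true}
	out := make(map[string]interface{}, len(body))
	for k, v := range body {
		if sensitive[k] {
			out[k] = "[REDACTED]"
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	body := map[string]interface{}{"username": "admin", "password": "hunter2"}
	fmt.Println(redactSensitive(body))
}
```

The audit middleware would run this over the parsed body before serializing it into the `request_body` JSONB column.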
## 10. File Upload Security

### 10.1 Upload Restrictions
- **File Types**: Whitelist allowed file types
- **File Size**: Maximum file size limits
- **File Validation**: Validate file contents

### 10.2 Storage Security
- **Secure Storage**: Store in a secure location
- **Access Control**: Restrict file access
- **Virus Scanning**: Scan uploaded files (future)

## 11. API Security

### 11.1 API Authentication
- **Bearer Tokens**: JWT in the Authorization header
- **Token Validation**: Validate on every request
- **Token Expiration**: Enforce token expiration

### 11.2 API Rate Limiting
- **Per IP**: Rate limit by IP address
- **Per User**: Rate limit by authenticated user
- **Per Endpoint**: Different limits per endpoint

## 12. Database Security

### 12.1 Database Access
- **Connection Security**: Encrypted connections
- **Credentials**: Secure credential storage
- **Least Privilege**: Database user with minimum privileges

### 12.2 Data Encryption
- **At Rest**: Database encryption (future)
- **In Transit**: TLS for database connections
- **Sensitive Data**: Encrypt sensitive fields

## 13. System Security

### 13.1 Command Execution
- **Whitelist**: Only allow whitelisted commands
- **Input Validation**: Validate command inputs
- **Output Sanitization**: Sanitize command outputs

### 13.2 File System Access
- **Path Validation**: Validate all file paths
- **Access Control**: Restrict file system access
- **Symlink Protection**: Prevent symlink attacks
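The path-validation rule above typically means confining every requested path to an allowed base directory, defeating `..` traversal. A sketch (the function name is illustrative; defeating symlink attacks additionally requires resolving the path, e.g. with `filepath.EvalSymlinks`, before this check):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// withinBase joins a requested relative path onto a base directory and
// rejects results that escape it via ".." segments.
func withinBase(base, requested string) (string, error) {
	full := filepath.Join(base, requested) // Join also applies filepath.Clean
	if full != base && !strings.HasPrefix(full, base+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes base directory: %q", requested)
	}
	return full, nil
}

func main() {
	fmt.Println(withinBase("/mnt/tank", "shared/file.txt"))
	fmt.Println(withinBase("/mnt/tank", "../../etc/passwd"))
}
```

`../../etc/passwd` cleans to `/etc/passwd`, which fails the prefix check and is rejected before any filesystem access happens.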
## 14. Security Headers

### 14.1 HTTP Security Headers
```
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Security-Policy: default-src 'self'
Strict-Transport-Security: max-age=31536000
Referrer-Policy: strict-origin-when-cross-origin
```

## 15. Security Monitoring

### 15.1 Security Events
- **Failed Logins**: Monitor failed login attempts
- **Permission Denials**: Monitor permission denials
- **Suspicious Activity**: Detect suspicious patterns

### 15.2 Alerting
- **Security Alerts**: Alert on security events
- **Thresholds**: Alert thresholds for suspicious activity
- **Notification**: Notify administrators
---

**docs/alpha/sds/SDS-05-Integration-Design.md** (new file, 294 lines)
# SDS-05: Integration Design

## 1. Integration Overview

### 1.1 External Systems
Calypso integrates with several external systems:
- **ZFS**: Zettabyte File System for storage management
- **SCST**: SCSI target subsystem for iSCSI
- **Bacula/Bareos**: Backup software
- **MHVTL**: Virtual Tape Library emulation
- **Systemd**: Service management
- **PostgreSQL**: Database system

## 2. ZFS Integration

### 2.1 Integration Method
- **Command Execution**: Execute `zpool` and `zfs` commands
- **Output Parsing**: Parse command output
- **Error Handling**: Handle command errors

### 2.2 ZFS Commands
```bash
# Pool operations
zpool create <pool> <disks>
zpool list
zpool status <pool>
zpool destroy <pool>

# Dataset operations
zfs create <dataset>
zfs list
zfs destroy <dataset>
zfs snapshot <dataset>@<snapshot>
zfs clone <snapshot> <clone>
zfs rollback <snapshot>
```
|
||||
### 2.3 Data Synchronization
|
||||
- **Pool Monitor**: Background service syncs pool status every 2 minutes
|
||||
- **Dataset Monitor**: Real-time dataset information
|
||||
- **ARC Stats**: Real-time ARC statistics
|
||||
|
||||
## 3. SCST Integration
|
||||
|
||||
### 3.1 Integration Method
|
||||
- **Configuration Files**: Read/write SCST configuration files
|
||||
- **Command Execution**: Execute SCST admin commands
|
||||
- **Config Apply**: Apply configuration changes
|
||||
|
||||
### 3.2 SCST Operations
|
||||
```bash
|
||||
# Target management
|
||||
scstadmin -add_target <target>
|
||||
scstadmin -enable_target <target>
|
||||
scstadmin -disable_target <target>
|
||||
scstadmin -remove_target <target>
|
||||
|
||||
# LUN management
|
||||
scstadmin -add_lun <lun> -driver <driver> -target <target>
|
||||
scstadmin -remove_lun <lun> -driver <driver> -target <target>
|
||||
|
||||
# Initiator management
|
||||
scstadmin -add_init <initiator> -driver <driver> -target <target>
|
||||
scstadmin -remove_init <initiator> -driver <driver> -target <target>
|
||||
|
||||
# Config apply
|
||||
scstadmin -write_config /etc/scst.conf
|
||||
```
|
||||
|
||||
### 3.3 Configuration File Format
|
||||
- **Location**: `/etc/scst.conf`
|
||||
- **Format**: SCST configuration syntax
|
||||
- **Backup**: Backup before modifications
|
||||
|
||||
## 4. Bacula/Bareos Integration
|
||||
|
||||
### 4.1 Integration Methods
|
||||
- **Database Access**: Direct PostgreSQL access to Bacula database
|
||||
- **Bconsole Commands**: Execute commands via `bconsole`
|
||||
- **Job Synchronization**: Sync jobs from Bacula database
|
||||
|
||||
### 4.2 Database Schema
|
||||
- **Tables**: Jobs, Clients, Filesets, Pools, Volumes, Media
|
||||
- **Queries**: SQL queries to retrieve backup information
|
||||
- **Updates**: Update job status, volume information
|
||||
|
||||
### 4.3 Bconsole Commands
|
||||
```bash
|
||||
# Job operations
|
||||
run job=<job_name>
|
||||
status job=<job_id>
|
||||
list jobs
|
||||
list files jobid=<job_id>
|
||||
|
||||
# Client operations
|
||||
list clients
|
||||
status client=<client_name>
|
||||
|
||||
# Pool operations
|
||||
list pools
|
||||
list volumes pool=<pool_name>
|
||||
|
||||
# Storage operations
|
||||
list storage
|
||||
status storage=<storage_name>
|
||||
```
|
||||
|
||||
### 4.4 Job Synchronization
|
||||
- **Background Sync**: Periodic sync from Bacula database
|
||||
- **Real-time Updates**: Update on job completion
|
||||
- **Status Mapping**: Map Bacula status to Calypso status
|
||||
|
||||
## 5. MHVTL Integration
|
||||
|
||||
### 5.1 Integration Method
|
||||
- **Configuration Files**: Read/write MHVTL configuration
|
||||
- **Command Execution**: Execute MHVTL control commands
|
||||
- **Status Monitoring**: Monitor VTL status
|
||||
|
||||
### 5.2 MHVTL Operations
|
||||
```bash
|
||||
# Library operations
|
||||
vtlcmd -l <library> -s <status>
|
||||
vtlcmd -l <library> -d <drive> -l <load>
|
||||
vtlcmd -l <library> -d <drive> -u <unload>
|
||||
|
||||
# Media operations
|
||||
vtlcmd -l <library> -m <media> -l <label>
|
||||
```
|
||||
|
||||
### 5.3 Configuration Management
|
||||
- **Library Configuration**: Create/update VTL library configs
|
||||
- **Drive Configuration**: Configure virtual drives
|
||||
- **Slot Configuration**: Configure virtual slots
|
||||
|
||||
## 6. Systemd Integration
|
||||
|
||||
### 6.1 Integration Method
|
||||
- **DBus API**: Use systemd DBus API
|
||||
- **Command Execution**: Execute `systemctl` commands
|
||||
- **Service Status**: Query service status
|
||||
|
||||
### 6.2 Systemd Operations
|
||||
```bash
|
||||
# Service control
|
||||
systemctl start <service>
|
||||
systemctl stop <service>
|
||||
systemctl restart <service>
|
||||
systemctl status <service>
|
||||
|
||||
# Service information
|
||||
systemctl list-units --type=service
|
||||
systemctl show <service>
|
||||
```
|
||||
|
||||
### 6.3 Service Management
|
||||
- **Service Discovery**: Discover available services
|
||||
- **Status Monitoring**: Monitor service status
|
||||
- **Log Access**: Access service logs via journalctl
|
||||
|
||||
## 7. Network Interface Integration
|
||||
|
||||
### 7.1 Integration Method
|
||||
- **System Commands**: Execute network configuration commands
|
||||
- **File Operations**: Read/write network configuration files
|
||||
- **Status Queries**: Query interface status
|
||||
|
||||
### 7.2 Network Operations
|
||||
```bash
|
||||
# Interface information
|
||||
ip addr show
|
||||
ip link show
|
||||
ethtool <interface>
|
||||
|
||||
# Interface configuration
|
||||
ip addr add <ip>/<mask> dev <interface>
|
||||
ip link set <interface> up/down
|
||||
```

### 7.3 Configuration Files
- **Netplan**: Ubuntu network configuration
- **NetworkManager**: NetworkManager configuration
- **ifupdown**: Legacy `/etc/network/interfaces` configuration

## 8. NTP Integration

### 8.1 Integration Method
- **Configuration Files**: Read/write NTP configuration
- **Command Execution**: Execute NTP commands
- **Status Queries**: Query NTP status

### 8.2 NTP Operations
```bash
# NTP status
ntpq -p
timedatectl status

# NTP configuration
timedatectl set-timezone <timezone>
timedatectl set-ntp <true/false>
```

### 8.3 Configuration Files
- **ntp.conf**: NTP daemon configuration
- **chrony.conf**: Chrony configuration (alternative)

## 9. Integration Patterns

### 9.1 Command Execution Pattern
```go
import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func executeCommand(cmd string, args []string) (string, error) {
	// Bound execution time so a hung command cannot block the API.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	output, err := exec.CommandContext(ctx, cmd, args...).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("command failed: %w", err)
	}
	return string(output), nil
}
```

### 9.2 File Operation Pattern
```go
import (
	"fmt"
	"os"
)

func readConfigFile(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read config: %w", err)
	}
	return data, nil
}

func writeConfigFile(path string, data []byte) error {
	if err := os.WriteFile(path, data, 0644); err != nil {
		return fmt.Errorf("failed to write config: %w", err)
	}
	return nil
}
```

### 9.3 Database Integration Pattern
```go
func queryBaculaDB(query string, args ...interface{}) ([]map[string]interface{}, error) {
	rows, err := db.Query(query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	// Scan each row into a map keyed by column name.
	cols, _ := rows.Columns()
	var results []map[string]interface{}
	for rows.Next() {
		vals := make([]interface{}, len(cols))
		ptrs := make([]interface{}, len(cols))
		for i := range vals {
			ptrs[i] = &vals[i]
		}
		if err := rows.Scan(ptrs...); err != nil {
			return nil, err
		}
		row := make(map[string]interface{}, len(cols))
		for i, c := range cols {
			row[c] = vals[i]
		}
		results = append(results, row)
	}
	return results, rows.Err()
}
```

## 10. Error Handling

### 10.1 Command Execution Errors
- **Timeout**: Command execution timeout
- **Exit Code**: Check command exit codes
- **Output Parsing**: Handle parsing errors

### 10.2 File Operation Errors
- **Permission Errors**: Handle permission denied
- **File Not Found**: Handle missing files
- **Write Errors**: Handle write failures

### 10.3 Database Integration Errors
- **Connection Errors**: Handle database connection failures
- **Query Errors**: Handle SQL query errors
- **Transaction Errors**: Handle transaction failures

## 11. Monitoring & Health Checks

### 11.1 Integration Health
- **ZFS Health**: Monitor ZFS pool health
- **SCST Health**: Monitor SCST service status
- **Bacula Health**: Monitor Bacula service status
- **MHVTL Health**: Monitor MHVTL service status

### 11.2 Health Check Endpoints
```
GET /api/v1/health
GET /api/v1/health/zfs
GET /api/v1/health/scst
GET /api/v1/health/bacula
GET /api/v1/health/vtl
```

## 12. Future Integrations

### 12.1 Planned Integrations
- **LDAP/AD**: Directory service integration
- **Cloud Storage**: Cloud backup integration
- **Monitoring Systems**: Prometheus, Grafana integration
- **Notification Systems**: Email, Slack, PagerDuty integration

283
docs/alpha/srs/SRS-00-Overview.md
Normal file
@@ -0,0 +1,283 @@

# Software Requirements Specification (SRS)
## AtlasOS - Calypso Backup Appliance
### Alpha Release

**Version:** 1.0.0-alpha
**Date:** 2025-01-XX
**Status:** In Development

---

## 1. Introduction

### 1.1 Purpose
This document provides a comprehensive Software Requirements Specification (SRS) for AtlasOS - Calypso, an enterprise-grade backup appliance management system. The system provides unified management for storage, backup, tape libraries, and system administration through a modern web-based interface.

### 1.2 Scope
Calypso is designed to manage:
- ZFS storage pools and datasets
- File sharing (SMB/CIFS and NFS)
- iSCSI block storage targets
- Physical and Virtual Tape Libraries (VTL)
- Backup job management (Bacula/Bareos integration)
- System monitoring and alerting
- User and access management (IAM)
- Object storage services
- Snapshot and replication management

### 1.3 Definitions, Acronyms, and Abbreviations
- **ZFS**: Zettabyte File System
- **SMB/CIFS**: Server Message Block / Common Internet File System
- **NFS**: Network File System
- **iSCSI**: Internet Small Computer Systems Interface
- **VTL**: Virtual Tape Library
- **IAM**: Identity and Access Management
- **RBAC**: Role-Based Access Control
- **API**: Application Programming Interface
- **REST**: Representational State Transfer
- **JWT**: JSON Web Token
- **SNMP**: Simple Network Management Protocol
- **NTP**: Network Time Protocol

### 1.4 References
- ZFS Documentation: https://openzfs.github.io/openzfs-docs/
- SCST Documentation: http://scst.sourceforge.net/
- Bacula Documentation: https://www.bacula.org/documentation/
- React Documentation: https://react.dev/
- Go Documentation: https://go.dev/doc/

### 1.5 Overview
This SRS is organized into the following sections:
- **SRS-01**: Storage Management
- **SRS-02**: File Sharing (SMB/NFS)
- **SRS-03**: iSCSI Management
- **SRS-04**: Tape Library Management
- **SRS-05**: Backup Management
- **SRS-06**: Object Storage
- **SRS-07**: Snapshot & Replication
- **SRS-08**: System Management
- **SRS-09**: Monitoring & Alerting
- **SRS-10**: Identity & Access Management
- **SRS-11**: User Interface & Experience

---

## 2. System Overview

### 2.1 System Architecture
Calypso follows a client-server architecture:
- **Frontend**: React-based Single Page Application (SPA)
- **Backend**: Go-based REST API server
- **Database**: PostgreSQL for persistent storage
- **External Services**: ZFS, SCST, Bacula/Bareos, MHVTL

### 2.2 Technology Stack

#### Frontend
- React 18 with TypeScript
- Vite for build tooling
- TailwindCSS for styling
- TanStack Query for data fetching
- React Router for navigation
- Zustand for state management
- Axios for HTTP requests
- Lucide React for icons

#### Backend
- Go 1.21+
- Gin web framework
- PostgreSQL database
- JWT for authentication
- Structured logging (zerolog)

### 2.3 Deployment Model
- Single-server deployment
- Systemd service management
- Reverse proxy support (nginx/caddy)
- WebSocket support for real-time updates

---

## 3. Functional Requirements

### 3.1 Authentication & Authorization
- User login/logout
- JWT-based session management
- Role-based access control (Admin, Operator, ReadOnly)
- Permission-based feature access
- Session timeout and refresh

### 3.2 Storage Management
- ZFS pool creation, deletion, and monitoring
- Dataset management (filesystems and volumes)
- Disk discovery and monitoring
- Storage repository management
- ARC statistics monitoring

### 3.3 File Sharing
- SMB/CIFS share creation and configuration
- NFS share creation and client management
- Share access control
- Mount point management

### 3.4 iSCSI Management
- iSCSI target creation and management
- LUN mapping and configuration
- Initiator access control
- Portal configuration
- Extent management

### 3.5 Tape Library Management
- Physical tape library discovery
- Virtual Tape Library (VTL) management
- Tape drive and slot management
- Media inventory

### 3.6 Backup Management
- Backup job creation and scheduling
- Bacula/Bareos integration
- Storage pool and volume management
- Job history and monitoring
- Client management

### 3.7 Object Storage
- S3-compatible bucket management
- Access policy configuration
- User and key management
- Usage monitoring

### 3.8 Snapshot & Replication
- ZFS snapshot creation and management
- Snapshot rollback and cloning
- Replication task configuration
- Remote replication management

### 3.9 System Management
- Network interface configuration
- Service management (start/stop/restart)
- NTP configuration
- SNMP configuration
- System logs viewing
- Terminal console access
- Feature license management

### 3.10 Monitoring & Alerting
- Real-time system metrics
- Storage health monitoring
- Network throughput monitoring
- Alert rule configuration
- Alert history and management

### 3.11 Identity & Access Management
- User account management
- Role management
- Permission assignment
- Group management
- User profile management

---

## 4. Non-Functional Requirements

### 4.1 Performance
- API response time < 200ms for read operations
- API response time < 1s for write operations
- Support for 100+ concurrent users
- Real-time metrics update every 5-30 seconds

### 4.2 Security
- HTTPS support
- JWT token expiration and refresh
- Password hashing (bcrypt)
- SQL injection prevention
- XSS protection
- CSRF protection
- Rate limiting
- Audit logging

### 4.3 Reliability
- Database transaction support
- Error handling and recovery
- Health check endpoints
- Graceful shutdown

### 4.4 Usability
- Responsive web design
- Dark theme support
- Intuitive navigation
- Real-time feedback
- Loading states
- Error messages

### 4.5 Maintainability
- Clean code architecture
- Comprehensive logging
- API documentation
- Code comments
- Modular design

---

## 5. System Constraints

### 5.1 Hardware Requirements
- Minimum: 4GB RAM, 2 CPU cores, 100GB storage
- Recommended: 8GB+ RAM, 4+ CPU cores, 500GB+ storage

### 5.2 Software Requirements
- Linux-based operating system (Ubuntu 24.04+)
- PostgreSQL 14+
- ZFS support
- SCST installed and configured
- Bacula/Bareos (optional, for backup features)

### 5.3 Network Requirements
- Network connectivity for remote access
- SSH access for system management
- Port 8080 (API) and 3000 (Frontend) accessible

---

## 6. Assumptions and Dependencies

### 6.1 Assumptions
- System has root/sudo access for ZFS and system operations
- Network interfaces are properly configured
- External services (Bacula, SCST) are installed and accessible
- Users have basic understanding of storage and backup concepts

### 6.2 Dependencies
- PostgreSQL database
- ZFS kernel module and tools
- SCST kernel module and tools
- Bacula/Bareos (for backup features)
- MHVTL (for VTL features)

---

## 7. Future Enhancements

### 7.1 Planned Features
- LDAP/Active Directory integration
- Multi-site replication
- Cloud backup integration
- Advanced encryption at rest
- WebSocket real-time updates
- Mobile responsive improvements
- Advanced reporting and analytics

### 7.2 Potential Enhancements
- Multi-tenant support
- API rate limiting per user
- Advanced backup scheduling
- Disaster recovery features
- Performance optimization tools

---

## Document History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0.0-alpha | 2025-01-XX | Development Team | Initial SRS document |

127
docs/alpha/srs/SRS-01-Storage-Management.md
Normal file
@@ -0,0 +1,127 @@

# SRS-01: Storage Management

## 1. Overview
The Storage Management module provides comprehensive management of ZFS storage pools, datasets, disks, and storage repositories.

## 2. Functional Requirements

### 2.1 ZFS Pool Management
**FR-SM-001**: System shall allow users to create ZFS pools
- **Input**: Pool name, RAID level, disk selection, compression, deduplication options
- **Output**: Created pool with UUID
- **Validation**: Pool name uniqueness, disk availability, RAID level compatibility

**FR-SM-002**: System shall allow users to list all ZFS pools
- **Output**: List of pools with status, capacity, health information
- **Refresh**: Auto-refresh every 2 minutes

**FR-SM-003**: System shall allow users to view ZFS pool details
- **Output**: Pool configuration, capacity, health, datasets, disk information

**FR-SM-004**: System shall allow users to delete ZFS pools
- **Validation**: Pool must be empty or confirmation required
- **Side Effect**: All datasets in pool are destroyed

**FR-SM-005**: System shall allow users to add spare disks to pools
- **Input**: Pool ID, disk list
- **Validation**: Disk availability, compatibility

### 2.2 ZFS Dataset Management
**FR-SM-006**: System shall allow users to create ZFS datasets
- **Input**: Pool ID, dataset name, type (filesystem/volume), compression, quota, reservation, mount point
- **Output**: Created dataset with UUID
- **Validation**: Name uniqueness within pool, valid mount point

**FR-SM-007**: System shall allow users to list datasets in a pool
- **Input**: Pool ID
- **Output**: List of datasets with properties
- **Refresh**: Auto-refresh every 1 second

**FR-SM-008**: System shall allow users to delete ZFS datasets
- **Input**: Pool ID, dataset name
- **Validation**: Dataset must not be in use

### 2.3 Disk Management
**FR-SM-009**: System shall discover and list all physical disks
- **Output**: Disk list with size, type, status, mount information
- **Refresh**: Auto-refresh every 5 minutes

**FR-SM-010**: System shall allow users to manually sync disk discovery
- **Action**: Trigger disk rescan

**FR-SM-011**: System shall display disk details
- **Output**: Disk properties, partitions, usage, health status

### 2.4 Storage Repository Management
**FR-SM-012**: System shall allow users to create storage repositories
- **Input**: Name, type, path, capacity
- **Output**: Created repository with ID

**FR-SM-013**: System shall allow users to list storage repositories
- **Output**: Repository list with capacity, usage, status

**FR-SM-014**: System shall allow users to view repository details
- **Output**: Repository properties, usage statistics

**FR-SM-015**: System shall allow users to delete storage repositories
- **Validation**: Repository must not be in use

### 2.5 ARC Statistics
**FR-SM-016**: System shall display ZFS ARC statistics
- **Output**: Hit ratio, cache size, eviction statistics
- **Refresh**: Real-time updates

## 3. User Interface Requirements

### 3.1 Storage Dashboard
- Pool overview cards with capacity and health
- Dataset tree view
- Disk list with status indicators
- Quick actions (create pool, create dataset)

### 3.2 Pool Management
- Pool creation wizard
- Pool detail view with tabs (Overview, Datasets, Disks, Settings)
- Pool deletion confirmation dialog

### 3.3 Dataset Management
- Dataset creation form
- Dataset list with filtering and sorting
- Dataset detail view
- Dataset deletion confirmation

## 4. API Endpoints

```
GET    /api/v1/storage/zfs/pools
GET    /api/v1/storage/zfs/pools/:id
POST   /api/v1/storage/zfs/pools
DELETE /api/v1/storage/zfs/pools/:id
POST   /api/v1/storage/zfs/pools/:id/spare

GET    /api/v1/storage/zfs/pools/:id/datasets
POST   /api/v1/storage/zfs/pools/:id/datasets
DELETE /api/v1/storage/zfs/pools/:id/datasets/:dataset

GET    /api/v1/storage/disks
POST   /api/v1/storage/disks/sync

GET    /api/v1/storage/repositories
GET    /api/v1/storage/repositories/:id
POST   /api/v1/storage/repositories
DELETE /api/v1/storage/repositories/:id

GET    /api/v1/storage/zfs/arc/stats
```

## 5. Permissions
- **storage:read**: Required for all read operations
- **storage:write**: Required for create, update, delete operations

## 6. Error Handling
- Invalid pool name format
- Disk not available
- Pool already exists
- Dataset in use
- Insufficient permissions

141
docs/alpha/srs/SRS-02-File-Sharing.md
Normal file
@@ -0,0 +1,141 @@

# SRS-02: File Sharing (SMB/NFS)

## 1. Overview
The File Sharing module provides management of SMB/CIFS and NFS shares for network file access.

## 2. Functional Requirements

### 2.1 Share Management
**FR-FS-001**: System shall allow users to create shares
- **Input**: Dataset ID, share type (SMB/NFS/Both), share name, mount point
- **Output**: Created share with UUID
- **Validation**: Dataset exists, share name uniqueness

**FR-FS-002**: System shall allow users to list all shares
- **Output**: Share list with type, dataset, status
- **Filtering**: By protocol, dataset, status

**FR-FS-003**: System shall allow users to view share details
- **Output**: Share configuration, protocol settings, access control

**FR-FS-004**: System shall allow users to update shares
- **Input**: Share ID, updated configuration
- **Validation**: Valid configuration values

**FR-FS-005**: System shall allow users to delete shares
- **Validation**: Share must not be actively accessed

### 2.2 SMB/CIFS Configuration
**FR-FS-006**: System shall allow users to configure SMB share name
- **Input**: Share ID, SMB share name
- **Validation**: Valid SMB share name format

**FR-FS-007**: System shall allow users to configure SMB path
- **Input**: Share ID, SMB path
- **Validation**: Path exists and is accessible

**FR-FS-008**: System shall allow users to configure SMB comment
- **Input**: Share ID, comment text

**FR-FS-009**: System shall allow users to enable/disable guest access
- **Input**: Share ID, guest access flag

**FR-FS-010**: System shall allow users to configure read-only access
- **Input**: Share ID, read-only flag

**FR-FS-011**: System shall allow users to configure browseable option
- **Input**: Share ID, browseable flag

### 2.3 NFS Configuration
**FR-FS-012**: System shall allow users to configure NFS clients
- **Input**: Share ID, client list (IP addresses or hostnames)
- **Validation**: Valid IP/hostname format

**FR-FS-013**: System shall allow users to add NFS clients
- **Input**: Share ID, client address
- **Validation**: Client not already in list

**FR-FS-014**: System shall allow users to remove NFS clients
- **Input**: Share ID, client address

**FR-FS-015**: System shall allow users to configure NFS options
- **Input**: Share ID, NFS options (ro, rw, sync, async, etc.)

### 2.4 Share Status
**FR-FS-016**: System shall display share status (enabled/disabled)
- **Output**: Current status for each protocol

**FR-FS-017**: System shall allow users to enable/disable SMB protocol
- **Input**: Share ID, enabled flag

**FR-FS-018**: System shall allow users to enable/disable NFS protocol
- **Input**: Share ID, enabled flag

## 3. User Interface Requirements

### 3.1 Share List View
- Master-detail layout
- Search and filter functionality
- Protocol indicators (SMB/NFS badges)
- Status indicators

### 3.2 Share Detail View
- Protocol tabs (SMB, NFS)
- Configuration forms
- Client management (for NFS)
- Quick actions (enable/disable protocols)

### 3.3 Create Share Modal
- Dataset selection
- Share name input
- Protocol selection
- Initial configuration

## 4. API Endpoints

```
GET    /api/v1/shares
GET    /api/v1/shares/:id
POST   /api/v1/shares
PUT    /api/v1/shares/:id
DELETE /api/v1/shares/:id
```

## 5. Data Model

### Share Object
```json
{
  "id": "uuid",
  "dataset_id": "uuid",
  "dataset_name": "string",
  "mount_point": "string",
  "share_type": "smb|nfs|both",
  "smb_enabled": boolean,
  "smb_share_name": "string",
  "smb_path": "string",
  "smb_comment": "string",
  "smb_guest_ok": boolean,
  "smb_read_only": boolean,
  "smb_browseable": boolean,
  "nfs_enabled": boolean,
  "nfs_clients": ["string"],
  "nfs_options": "string",
  "is_active": boolean,
  "created_at": "timestamp",
  "updated_at": "timestamp",
  "created_by": "uuid"
}
```

## 6. Permissions
- **storage:read**: Required for viewing shares
- **storage:write**: Required for creating, updating, deleting shares

## 7. Error Handling
- Invalid dataset ID
- Duplicate share name
- Invalid client address format
- Share in use
- Insufficient permissions

163
docs/alpha/srs/SRS-03-iSCSI-Management.md
Normal file
@@ -0,0 +1,163 @@

# SRS-03: iSCSI Management

## 1. Overview
The iSCSI Management module provides configuration and management of iSCSI targets, LUNs, initiators, and portals using SCST.

## 2. Functional Requirements

### 2.1 Target Management
**FR-ISCSI-001**: System shall allow users to create iSCSI targets
- **Input**: Target name, alias
- **Output**: Created target with ID
- **Validation**: Target name uniqueness, valid IQN format

**FR-ISCSI-002**: System shall allow users to list all iSCSI targets
- **Output**: Target list with status, LUN count, initiator count

**FR-ISCSI-003**: System shall allow users to view target details
- **Output**: Target configuration, LUNs, initiators, status

**FR-ISCSI-004**: System shall allow users to delete iSCSI targets
- **Validation**: Target must not be in use

**FR-ISCSI-005**: System shall allow users to enable/disable targets
- **Input**: Target ID, enabled flag
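
The "valid IQN format" checks required by FR-ISCSI-001 and FR-ISCSI-009 can be sketched with a regular expression covering the common `iqn.<year>-<month>.<reversed-domain>[:<identifier>]` shape from RFC 3720 (a simplified pattern for illustration; production validation may need to be stricter or looser):

```go
package main

import (
	"fmt"
	"regexp"
)

// iqnRe captures the common IQN shape:
// iqn.<year>-<month>.<reversed-domain>[:<identifier>]
var iqnRe = regexp.MustCompile(`^iqn\.\d{4}-\d{2}\.[a-z0-9]([a-z0-9.-]*[a-z0-9])?(:.+)?$`)

// isValidIQN reports whether s matches the simplified IQN pattern.
func isValidIQN(s string) bool { return iqnRe.MatchString(s) }

func main() {
	fmt.Println(isValidIQN("iqn.2005-03.org.open-iscsi:target1")) // true
	fmt.Println(isValidIQN("not-an-iqn"))                         // false
}
```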

### 2.2 LUN Management
**FR-ISCSI-006**: System shall allow users to add LUNs to targets
- **Input**: Target ID, device path, LUN number
- **Validation**: Device exists, LUN number available

**FR-ISCSI-007**: System shall allow users to remove LUNs from targets
- **Input**: Target ID, LUN ID

**FR-ISCSI-008**: System shall display LUN information
- **Output**: LUN number, device, size, status

### 2.3 Initiator Management
**FR-ISCSI-009**: System shall allow users to add initiators to targets
- **Input**: Target ID, initiator IQN
- **Validation**: Valid IQN format

**FR-ISCSI-010**: System shall allow users to remove initiators from targets
- **Input**: Target ID, initiator ID

**FR-ISCSI-011**: System shall allow users to list all initiators
- **Output**: Initiator list with associated targets

**FR-ISCSI-012**: System shall allow users to create initiator groups
- **Input**: Group name, initiator list
- **Output**: Created group with ID

**FR-ISCSI-013**: System shall allow users to manage initiator groups
- **Actions**: Create, update, delete, add/remove initiators

### 2.4 Portal Management
**FR-ISCSI-014**: System shall allow users to create portals
- **Input**: IP address, port
- **Output**: Created portal with ID

**FR-ISCSI-015**: System shall allow users to list portals
- **Output**: Portal list with IP, port, status

**FR-ISCSI-016**: System shall allow users to update portals
- **Input**: Portal ID, updated configuration

**FR-ISCSI-017**: System shall allow users to delete portals
- **Input**: Portal ID

### 2.5 Extent Management
**FR-ISCSI-018**: System shall allow users to create extents
- **Input**: Device path, size, type
- **Output**: Created extent

**FR-ISCSI-019**: System shall allow users to list extents
- **Output**: Extent list with device, size, type

**FR-ISCSI-020**: System shall allow users to delete extents
- **Input**: Extent device

### 2.6 Configuration Management
**FR-ISCSI-021**: System shall allow users to view SCST configuration file
- **Output**: Current SCST configuration

**FR-ISCSI-022**: System shall allow users to update SCST configuration file
- **Input**: Configuration content
- **Validation**: Valid SCST configuration format

**FR-ISCSI-023**: System shall allow users to apply SCST configuration
- **Action**: Reload SCST configuration
- **Side Effect**: Targets may be restarted

## 3. User Interface Requirements

### 3.1 Target List View
- Target cards with status indicators
- Quick actions (enable/disable, delete)
- Filter and search functionality

### 3.2 Target Detail View
- Overview tab (target info, status)
- LUNs tab (LUN list, add/remove)
- Initiators tab (initiator list, add/remove)
- Settings tab (target configuration)

### 3.3 Create Target Wizard
- Target name input
- Alias input
- Initial LUN assignment (optional)
- Initial initiator assignment (optional)

## 4. API Endpoints

```
GET    /api/v1/scst/targets
GET    /api/v1/scst/targets/:id
POST   /api/v1/scst/targets
DELETE /api/v1/scst/targets/:id
POST   /api/v1/scst/targets/:id/enable
POST   /api/v1/scst/targets/:id/disable

POST   /api/v1/scst/targets/:id/luns
DELETE /api/v1/scst/targets/:id/luns/:lunId

POST   /api/v1/scst/targets/:id/initiators
GET    /api/v1/scst/initiators
GET    /api/v1/scst/initiators/:id
DELETE /api/v1/scst/initiators/:id

GET    /api/v1/scst/initiator-groups
GET    /api/v1/scst/initiator-groups/:id
POST   /api/v1/scst/initiator-groups
PUT    /api/v1/scst/initiator-groups/:id
DELETE /api/v1/scst/initiator-groups/:id
POST   /api/v1/scst/initiator-groups/:id/initiators

GET    /api/v1/scst/portals
GET    /api/v1/scst/portals/:id
POST   /api/v1/scst/portals
PUT    /api/v1/scst/portals/:id
DELETE /api/v1/scst/portals/:id

GET    /api/v1/scst/extents
POST   /api/v1/scst/extents
DELETE /api/v1/scst/extents/:device

GET    /api/v1/scst/config/file
PUT    /api/v1/scst/config/file
POST   /api/v1/scst/config/apply

GET    /api/v1/scst/handlers
```

## 5. Permissions
- **iscsi:read**: Required for viewing targets, initiators, portals
- **iscsi:write**: Required for creating, updating, deleting

## 6. Error Handling
- Invalid IQN format
- Target name already exists
- Device not available
- SCST configuration errors
- Insufficient permissions

115
docs/alpha/srs/SRS-04-Tape-Library-Management.md
Normal file
@@ -0,0 +1,115 @@
|
||||
# SRS-04: Tape Library Management
|
||||
|
||||
## 1. Overview
|
||||
Tape Library Management module provides management of physical and virtual tape libraries, drives, slots, and media.
|
||||
|
||||
## 2. Functional Requirements
|
||||
|
||||
### 2.1 Physical Tape Library
|
||||
**FR-TAPE-001**: System shall discover physical tape libraries
|
||||
- **Action**: Scan for attached tape libraries
|
||||
- **Output**: List of discovered libraries
|
||||
|
||||
**FR-TAPE-002**: System shall list physical tape libraries
|
||||
- **Output**: Library list with vendor, model, serial number
|
||||
|
||||
**FR-TAPE-003**: System shall display physical library details
|
||||
- **Output**: Library properties, drives, slots, media
|
||||
|
||||
**FR-TAPE-004**: System shall allow users to load media
|
||||
- **Input**: Library ID, drive ID, slot ID
|
||||
- **Action**: Load tape from slot to drive
|
||||
|
||||
**FR-TAPE-005**: System shall allow users to unload media
|
||||
- **Input**: Library ID, drive ID, slot ID
|
||||
- **Action**: Unload tape from drive to slot
|
||||
|
||||
### 2.2 Virtual Tape Library (VTL)
**FR-TAPE-006**: System shall allow users to create VTL libraries
- **Input**: Library name, vendor, model, drive count, slot count
- **Output**: Created VTL with ID

**FR-TAPE-007**: System shall allow users to list VTL libraries
- **Output**: VTL list with status, drive count, slot count

**FR-TAPE-008**: System shall allow users to view VTL details
- **Output**: VTL configuration, drives, slots, media

**FR-TAPE-009**: System shall allow users to update VTL libraries
- **Input**: VTL ID, updated configuration

**FR-TAPE-010**: System shall allow users to delete VTL libraries
- **Input**: VTL ID
- **Validation**: VTL must not be in use

**FR-TAPE-011**: System shall allow users to start/stop VTL libraries
- **Input**: VTL ID, action (start/stop)

### 2.3 Drive Management
**FR-TAPE-012**: System shall display drive information
- **Output**: Drive status, media loaded, position

**FR-TAPE-013**: System shall allow users to control drives
- **Actions**: Load, unload, eject, rewind

### 2.4 Slot Management
**FR-TAPE-014**: System shall display slot information
- **Output**: Slot status, media present, media label

**FR-TAPE-015**: System shall allow users to manage slots
- **Actions**: View media, move media

### 2.5 Media Management
**FR-TAPE-016**: System shall display media inventory
- **Output**: Media list with label, type, status, location

**FR-TAPE-017**: System shall allow users to label media
- **Input**: Media ID, label
- **Validation**: Valid label format

## 3. User Interface Requirements

### 3.1 Library List View
- Physical and VTL library cards
- Status indicators
- Quick actions (discover, create VTL)

### 3.2 Library Detail View
- Overview tab (library info, status)
- Drives tab (drive list, controls)
- Slots tab (slot grid, media info)
- Media tab (media inventory)

### 3.3 VTL Creation Wizard
- Library name and configuration
- Drive and slot count
- Vendor and model selection

## 4. API Endpoints

```
GET    /api/v1/tape/physical/libraries
POST   /api/v1/tape/physical/libraries/discover
GET    /api/v1/tape/physical/libraries/:id

GET    /api/v1/tape/vtl/libraries
GET    /api/v1/tape/vtl/libraries/:id
POST   /api/v1/tape/vtl/libraries
PUT    /api/v1/tape/vtl/libraries/:id
DELETE /api/v1/tape/vtl/libraries/:id
POST   /api/v1/tape/vtl/libraries/:id/start
POST   /api/v1/tape/vtl/libraries/:id/stop
```

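As a client-side sketch of the VTL creation endpoint, the FR-TAPE-006 inputs can be assembled into a request like this (the JSON field names are assumptions inferred from the requirement, not a confirmed backend schema):

```python
import json

# Sketch: assemble the request for POST /api/v1/tape/vtl/libraries.
# The payload field names (name, vendor, model, drive_count, slot_count)
# are illustrative assumptions based on FR-TAPE-006.
def vtl_create_request(base_url: str, name: str, vendor: str, model: str,
                       drive_count: int, slot_count: int) -> tuple[str, str]:
    if drive_count < 1 or slot_count < 1:
        raise ValueError("a VTL needs at least one drive and one slot")
    payload = {
        "name": name,
        "vendor": vendor,
        "model": model,
        "drive_count": drive_count,
        "slot_count": slot_count,
    }
    return f"{base_url}/api/v1/tape/vtl/libraries", json.dumps(payload)

url, body = vtl_create_request("http://localhost:8080", "vtl01", "STK", "L700",
                               drive_count=4, slot_count=30)
```

Validating drive and slot counts before the request leaves the client gives faster feedback than waiting for a 4xx response.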
## 5. Permissions
- **tape:read**: Required for viewing libraries
- **tape:write**: Required for creating, updating, deleting, controlling

## 6. Error Handling
- Library not found
- Drive not available
- Slot already occupied
- Media not found
- MHVTL service errors
- Insufficient permissions

130 docs/alpha/srs/SRS-05-Backup-Management.md Normal file
@@ -0,0 +1,130 @@
# SRS-05: Backup Management

## 1. Overview
The Backup Management module provides integration with Bacula/Bareos for backup job management, scheduling, and monitoring.

## 2. Functional Requirements

### 2.1 Backup Jobs
**FR-BACKUP-001**: System shall allow users to create backup jobs
- **Input**: Job name, client, fileset, schedule, storage pool
- **Output**: Created job with ID
- **Validation**: Valid client, fileset, schedule

**FR-BACKUP-002**: System shall allow users to list backup jobs
- **Output**: Job list with status, last run, next run
- **Filtering**: By status, client, schedule

**FR-BACKUP-003**: System shall allow users to view job details
- **Output**: Job configuration, history, statistics

**FR-BACKUP-004**: System shall allow users to run jobs manually
- **Input**: Job ID
- **Action**: Trigger immediate job execution

**FR-BACKUP-005**: System shall display job history
- **Output**: Job run history with status, duration, data transferred

### 2.2 Clients
**FR-BACKUP-006**: System shall list backup clients
- **Output**: Client list with status, last backup

**FR-BACKUP-007**: System shall display client details
- **Output**: Client configuration, job history

### 2.3 Storage Pools
**FR-BACKUP-008**: System shall allow users to create storage pools
- **Input**: Pool name, pool type, volume count
- **Output**: Created pool with ID

**FR-BACKUP-009**: System shall allow users to list storage pools
- **Output**: Pool list with type, volume count, usage

**FR-BACKUP-010**: System shall allow users to delete storage pools
- **Input**: Pool ID
- **Validation**: Pool must not be in use

### 2.4 Storage Volumes
**FR-BACKUP-011**: System shall allow users to create storage volumes
- **Input**: Pool ID, volume name, size
- **Output**: Created volume with ID

**FR-BACKUP-012**: System shall allow users to list storage volumes
- **Output**: Volume list with status, usage, expiration

**FR-BACKUP-013**: System shall allow users to update storage volumes
- **Input**: Volume ID, updated properties

**FR-BACKUP-014**: System shall allow users to delete storage volumes
- **Input**: Volume ID

### 2.5 Media Management
**FR-BACKUP-015**: System shall list backup media
- **Output**: Media list with label, type, status, location

**FR-BACKUP-016**: System shall display media details
- **Output**: Media properties, job history, usage

### 2.6 Dashboard Statistics
**FR-BACKUP-017**: System shall display backup dashboard statistics
- **Output**: Total jobs, running jobs, success rate, data backed up

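A minimal sketch of how these dashboard figures could be derived from a job-run history. The record layout (dicts with `status` and `bytes` keys) is an assumption for illustration, not the actual Bacula/Bareos catalog schema:

```python
# Derive the FR-BACKUP-017 dashboard figures from a list of job-run records.
# Assumed record shape: {"status": "ok" | "error" | "running", "bytes": int}.
def dashboard_stats(job_runs: list[dict]) -> dict:
    total = len(job_runs)
    running = sum(1 for j in job_runs if j["status"] == "running")
    finished = [j for j in job_runs if j["status"] in ("ok", "error")]
    ok = sum(1 for j in finished if j["status"] == "ok")
    return {
        "total_jobs": total,
        "running_jobs": running,
        # Success rate only counts finished runs; None if nothing finished yet.
        "success_rate": round(100 * ok / len(finished), 1) if finished else None,
        "bytes_backed_up": sum(j.get("bytes", 0) for j in job_runs if j["status"] == "ok"),
    }

runs = [
    {"status": "ok", "bytes": 1_000},
    {"status": "error", "bytes": 0},
    {"status": "running"},
]
print(dashboard_stats(runs))
```

Excluding still-running jobs from the success rate avoids the rate dipping every time a new job starts.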
### 2.7 Bconsole Integration
**FR-BACKUP-018**: System shall allow users to execute bconsole commands
- **Input**: Command string
- **Output**: Command output
- **Validation**: Allowed commands only

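The "allowed commands only" check can be sketched as a simple allowlist on the first token. The allowlist contents below are an assumption; bconsole exposes many more commands, and the product would pick its own safe subset:

```python
import shlex

# Sketch of the FR-BACKUP-018 validation: only the first token of the
# command string is checked against a read-only allowlist. The specific
# set of allowed commands is an illustrative assumption.
ALLOWED_BCONSOLE_COMMANDS = {"status", "list", "show", "messages", "version"}

def validate_bconsole_command(command: str) -> list[str]:
    tokens = shlex.split(command)
    if not tokens:
        raise ValueError("empty command")
    if tokens[0].lower() not in ALLOWED_BCONSOLE_COMMANDS:
        raise ValueError(f"command not allowed: {tokens[0]}")
    return tokens

print(validate_bconsole_command("list jobs"))  # ['list', 'jobs']
```

An allowlist on the verb is the conservative choice here: destructive commands such as `delete` or `purge` are rejected by default rather than needing to be enumerated in a denylist.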
## 3. User Interface Requirements

### 3.1 Backup Dashboard
- Statistics cards (total jobs, running, success rate)
- Recent job activity
- Quick actions

### 3.2 Job Management
- Job list with filtering
- Job creation wizard
- Job detail view with history
- Job run controls

### 3.3 Storage Management
- Storage pool list and management
- Volume list and management
- Media inventory

## 4. API Endpoints

```
GET    /api/v1/backup/dashboard/stats
GET    /api/v1/backup/jobs
GET    /api/v1/backup/jobs/:id
POST   /api/v1/backup/jobs

GET    /api/v1/backup/clients
GET    /api/v1/backup/storage/pools
POST   /api/v1/backup/storage/pools
DELETE /api/v1/backup/storage/pools/:id

GET    /api/v1/backup/storage/volumes
POST   /api/v1/backup/storage/volumes
PUT    /api/v1/backup/storage/volumes/:id
DELETE /api/v1/backup/storage/volumes/:id

GET    /api/v1/backup/media
GET    /api/v1/backup/storage/daemons

POST   /api/v1/backup/console/execute
```

## 5. Permissions
- **backup:read**: Required for viewing jobs, clients, storage
- **backup:write**: Required for creating, updating, deleting, executing

## 6. Error Handling
- Bacula/Bareos connection errors
- Invalid job configuration
- Job execution failures
- Storage pool/volume errors
- Insufficient permissions

111 docs/alpha/srs/SRS-06-Object-Storage.md Normal file
@@ -0,0 +1,111 @@
# SRS-06: Object Storage

## 1. Overview
The Object Storage module provides S3-compatible object storage service management, including buckets, access policies, and user/key management.

## 2. Functional Requirements

### 2.1 Bucket Management
**FR-OBJ-001**: System shall allow users to create buckets
- **Input**: Bucket name, access policy (private/public-read)
- **Output**: Created bucket with ID
- **Validation**: Bucket name uniqueness, valid S3 naming

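The "valid S3 naming" check can be sketched with the common S3 bucket-name rules: 3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit, and not formatted like an IPv4 address. The backing store may enforce further restrictions:

```python
import re

# Sketch of the FR-OBJ-001 "valid S3 naming" validation, following the
# widely used S3 bucket-name rules. Uniqueness is a separate server-side check.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
_IPV4_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    if not _BUCKET_RE.fullmatch(name):
        return False          # wrong length, charset, or edge characters
    if _IPV4_RE.fullmatch(name):
        return False          # names must not look like IP addresses
    return ".." not in name   # no consecutive dots

print(is_valid_bucket_name("my-backups.2024"))  # True
print(is_valid_bucket_name("MyBucket"))         # False (uppercase)
```

Rejecting bad names in the UI before the POST keeps the "Invalid bucket name" error path (section 6) for genuinely unexpected cases.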
**FR-OBJ-002**: System shall allow users to list buckets
- **Output**: Bucket list with name, type, usage, object count
- **Filtering**: By name, type, access policy

**FR-OBJ-003**: System shall allow users to view bucket details
- **Output**: Bucket configuration, usage statistics, access policy

**FR-OBJ-004**: System shall allow users to delete buckets
- **Input**: Bucket ID
- **Validation**: Bucket must be empty or confirmation required

**FR-OBJ-005**: System shall display bucket usage
- **Output**: Storage used, object count, last modified

### 2.2 Access Policy Management
**FR-OBJ-006**: System shall allow users to configure bucket access policies
- **Input**: Bucket ID, access policy (private, public-read, public-read-write)
- **Output**: Updated access policy

**FR-OBJ-007**: System shall display current access policy
- **Output**: Policy type, policy document

### 2.3 User & Key Management
**FR-OBJ-008**: System shall allow users to create S3 users
- **Input**: Username, access level
- **Output**: Created user with access keys

**FR-OBJ-009**: System shall allow users to list S3 users
- **Output**: User list with access level, key count

**FR-OBJ-010**: System shall allow users to generate access keys
- **Input**: User ID
- **Output**: Access key ID and secret key

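Key generation per FR-OBJ-010 can be sketched with the standard library's CSPRNG. The 20-character uppercase key ID and 40-character secret mirror the familiar AWS-style shape, but the exact format here is an illustrative assumption:

```python
import secrets
import string

# Sketch of FR-OBJ-010: generate an access key ID / secret key pair using a
# cryptographically secure RNG. Key lengths and alphabets are assumptions
# modeled on the common AWS-style format.
def generate_access_key_pair() -> tuple[str, str]:
    key_id_alphabet = string.ascii_uppercase + string.digits
    secret_alphabet = string.ascii_letters + string.digits + "+/"
    access_key_id = "".join(secrets.choice(key_id_alphabet) for _ in range(20))
    secret_key = "".join(secrets.choice(secret_alphabet) for _ in range(40))
    return access_key_id, secret_key

key_id, secret = generate_access_key_pair()
```

The secret should be shown to the user exactly once at creation time and stored only as a hash, which is why revocation (FR-OBJ-011) deletes the key rather than redisplaying it.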
**FR-OBJ-011**: System shall allow users to revoke access keys
- **Input**: User ID, key ID

### 2.4 Service Management
**FR-OBJ-012**: System shall display service status
- **Output**: Service status (running/stopped), uptime

**FR-OBJ-013**: System shall display service statistics
- **Output**: Total usage, object count, endpoint URL

**FR-OBJ-014**: System shall display S3 endpoint URL
- **Output**: Endpoint URL with copy functionality

## 3. User Interface Requirements

### 3.1 Object Storage Dashboard
- Service status card
- Statistics cards (total usage, object count, uptime)
- S3 endpoint display with copy button

### 3.2 Bucket Management
- Bucket list with search and filter
- Bucket creation modal
- Bucket detail view with tabs (Overview, Settings, Access Policy)
- Bucket actions (delete, configure)

### 3.3 Tabs
- **Buckets**: Main bucket management
- **Users & Keys**: S3 user and access key management
- **Monitoring**: Usage statistics and monitoring
- **Settings**: Service configuration

## 4. API Endpoints

```
GET    /api/v1/object-storage/buckets
GET    /api/v1/object-storage/buckets/:id
POST   /api/v1/object-storage/buckets
DELETE /api/v1/object-storage/buckets/:id
PUT    /api/v1/object-storage/buckets/:id/policy

GET    /api/v1/object-storage/users
POST   /api/v1/object-storage/users
GET    /api/v1/object-storage/users/:id/keys
POST   /api/v1/object-storage/users/:id/keys
DELETE /api/v1/object-storage/users/:id/keys/:keyId

GET    /api/v1/object-storage/service/status
GET    /api/v1/object-storage/service/stats
GET    /api/v1/object-storage/service/endpoint
```

## 5. Permissions
- **object-storage:read**: Required for viewing buckets, users
- **object-storage:write**: Required for creating, updating, deleting

## 6. Error Handling
- Invalid bucket name
- Bucket already exists
- Bucket not empty
- Invalid access policy
- Service not available
- Insufficient permissions

145 docs/alpha/srs/SRS-07-Snapshot-Replication.md Normal file
@@ -0,0 +1,145 @@
# SRS-07: Snapshot & Replication

## 1. Overview
The Snapshot & Replication module provides ZFS snapshot management and remote replication task configuration.

## 2. Functional Requirements

### 2.1 Snapshot Management
**FR-SNAP-001**: System shall allow users to create snapshots
- **Input**: Dataset name, snapshot name
- **Output**: Created snapshot with timestamp
- **Validation**: Dataset exists, snapshot name uniqueness

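A snapshot is created by combining the two inputs into a `dataset@name` identifier and invoking `zfs snapshot`. A minimal sketch, where the exact allowed-character policy for the snapshot component is a simplifying assumption:

```python
import re

# Sketch of the FR-SNAP-001 validation and the resulting `zfs snapshot`
# argv. The snapshot component is restricted here to letters, digits, and
# a few punctuation characters; real ZFS allows a slightly different set.
_SNAP_COMPONENT_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_.:-]*$")

def zfs_snapshot_argv(dataset: str, snapshot_name: str) -> list[str]:
    if not dataset or "@" in dataset:
        raise ValueError("dataset must be non-empty and must not contain '@'")
    if not _SNAP_COMPONENT_RE.fullmatch(snapshot_name):
        raise ValueError(f"invalid snapshot name: {snapshot_name!r}")
    # ZFS snapshot identifiers are dataset@snapname
    return ["zfs", "snapshot", f"{dataset}@{snapshot_name}"]

print(zfs_snapshot_argv("tank/data", "daily-2024-01-15"))
```

Uniqueness is left to `zfs` itself, which fails if `tank/data@daily-2024-01-15` already exists; the module surfaces that as a validation error.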
**FR-SNAP-002**: System shall allow users to list snapshots
- **Output**: Snapshot list with name, dataset, created date, referenced size
- **Filtering**: By dataset, date range, name

**FR-SNAP-003**: System shall allow users to view snapshot details
- **Output**: Snapshot properties, dataset, size, creation date

**FR-SNAP-004**: System shall allow users to delete snapshots
- **Input**: Snapshot ID
- **Validation**: Snapshot not in use

**FR-SNAP-005**: System shall allow users to roll back to a snapshot
- **Input**: Snapshot ID
- **Warning**: Data loss warning required
- **Action**: Roll back dataset to snapshot state

**FR-SNAP-006**: System shall allow users to clone snapshots
- **Input**: Snapshot ID, clone name
- **Output**: Created clone dataset

**FR-SNAP-007**: System shall display snapshot retention information
- **Output**: Snapshots marked for expiration, retention policy

### 2.2 Replication Management
**FR-SNAP-008**: System shall allow users to create replication tasks
- **Input**: Task name, source dataset, target host, target dataset, schedule, compression
- **Output**: Created replication task with ID
- **Validation**: Valid source dataset, target host reachable

**FR-SNAP-009**: System shall allow users to list replication tasks
- **Output**: Task list with status, last run, next run

**FR-SNAP-010**: System shall allow users to view replication task details
- **Output**: Task configuration, history, status

**FR-SNAP-011**: System shall allow users to update replication tasks
- **Input**: Task ID, updated configuration

**FR-SNAP-012**: System shall allow users to delete replication tasks
- **Input**: Task ID

**FR-SNAP-013**: System shall display replication status
- **Output**: Task status (idle, running, error), progress percentage

**FR-SNAP-014**: System shall allow users to run replication manually
- **Input**: Task ID
- **Action**: Trigger immediate replication

### 2.3 Replication Configuration
**FR-SNAP-015**: System shall allow users to configure the replication schedule
- **Input**: Schedule type (hourly, daily, weekly, monthly, custom cron)
- **Input**: Schedule time

**FR-SNAP-016**: System shall allow users to configure target settings
- **Input**: Target host, SSH port, target user, target dataset

**FR-SNAP-017**: System shall allow users to configure compression
- **Input**: Compression type (off, lz4, gzip, zstd)

**FR-SNAP-018**: System shall allow users to configure replication options
- **Input**: Recursive flag, auto-snapshot flag, encryption flag

### 2.4 Restore Points
**FR-SNAP-019**: System shall display restore points
- **Output**: Available restore points from snapshots

**FR-SNAP-020**: System shall allow users to restore from snapshot
- **Input**: Snapshot ID, restore target

## 3. User Interface Requirements

### 3.1 Snapshot & Replication Dashboard
- Statistics cards (total snapshots, last replication, next scheduled)
- Quick actions (create snapshot, view logs)

### 3.2 Tabs
- **Snapshots**: Snapshot list and management
- **Replication Tasks**: Replication task management
- **Restore Points**: Restore point management

### 3.3 Snapshot List
- Table view with columns (name, dataset, created, referenced, actions)
- Search and filter functionality
- Pagination
- Bulk actions (select multiple)

### 3.4 Replication Task Management
- Task list with status indicators
- Task creation wizard
- Task detail view with progress

### 3.5 Create Replication Modal
- Task name input
- Source dataset selection
- Target configuration (host, port, user, dataset)
- Schedule configuration
- Compression and options

## 4. API Endpoints

```
GET    /api/v1/snapshots
GET    /api/v1/snapshots/:id
POST   /api/v1/snapshots
DELETE /api/v1/snapshots/:id
POST   /api/v1/snapshots/:id/rollback
POST   /api/v1/snapshots/:id/clone

GET    /api/v1/replication/tasks
GET    /api/v1/replication/tasks/:id
POST   /api/v1/replication/tasks
PUT    /api/v1/replication/tasks/:id
DELETE /api/v1/replication/tasks/:id
POST   /api/v1/replication/tasks/:id/run
GET    /api/v1/replication/tasks/:id/status

GET    /api/v1/restore-points
POST   /api/v1/restore-points/restore
```

## 5. Permissions
- **storage:read**: Required for viewing snapshots and replication tasks
- **storage:write**: Required for creating, updating, deleting, executing

## 6. Error Handling
- Invalid dataset
- Snapshot not found
- Replication target unreachable
- SSH authentication failure
- Replication task errors
- Insufficient permissions

Some files were not shown because too many files have changed in this diff.