Compare commits

6 commits · main ... · 2025-12-14

| Author | SHA1 | Date |
|---|---|---|
| | 999bfa1026 | |
| | ad464c828a | |
| | 0692b11335 | |
| | 99694bfa63 | |
| | 72b5c18f29 | |
| | 8100f87686 | |
.github/copilot-instructions.md (vendored, 3 changes)

@@ -1,4 +1,5 @@
-You are an expert storage-systems engineer and Go backend architect.
+You are an expert storage-systems engineer and Go backend architect, also skilled in HTMX-based server-rendered UIs. You have deep experience building storage management systems similar to TrueNAS or Unraid, and you are familiar with Linux storage subsystems such as ZFS, NFS, Samba, iSCSI, and S3-compatible object storage. You are knowledgeable about best practices in security, RBAC, observability, and clean architecture in Go applications, and you are an expert in designing UIs with HTMX and TailwindCSS for modern web applications.

 Goal:

 Build a modern, sleek, high-performance storage appliance management UI similar to TrueNAS/Unraid.
README.md (351 changes)

@@ -1,42 +1,355 @@
-# Storage Appliance (skeleton)
+# Adastra Storage Appliance
+
-This repository is a starting skeleton for a storage appliance management system using Go + HTMX.
+A comprehensive storage appliance management system providing ZFS pool management, NFS/SMB shares, iSCSI targets, object storage, and monitoring capabilities with a modern web interface.
+
-Features in this skeleton:
-- HTTP server with chi router and graceful shutdown
-- Basic DB migration and seed for a sqlite database
-- Minimal middleware placeholders (auth, RBAC, CSRF)
-- Templates using html/template and HTMX sample
-- Job runner skeleton and infra adapter stubs for ZFS/NFS/SMB/MinIO/iSCSI
+## Features
+
-Quick start:
+- **Storage Management**: ZFS pool and dataset management with snapshots
+- **Network Shares**: NFS and SMB/CIFS share management
+- **Block Storage**: iSCSI target and LUN management
+- **Object Storage**: MinIO integration for S3-compatible storage
+- **Authentication & RBAC**: User management with role-based access control
+- **Monitoring**: Prometheus metrics and real-time monitoring dashboard
+- **Audit Logging**: Comprehensive audit trail of all operations
+- **Modern UI**: HTMX-based web interface with TailwindCSS
+
+## Requirements
+
+- Ubuntu 24.04 (or compatible Linux distribution)
+- Go 1.21 or later
+- ZFS utilities (`zfsutils-linux`)
+- SMART monitoring tools (`smartmontools`)
+- NFS server (`nfs-kernel-server`)
+- Samba (`samba`)
+- iSCSI target utilities (`targetcli-fb`)
+- MinIO (included in installer)
+
+## Installation
+
+### Option 1: Using the Installation Script (Recommended)
+
+1. Clone the repository:
+```bash
+git clone <repository-url>
+cd storage-appliance
+```
+
+2. Run the installation script as root:
+```bash
+sudo bash packaging/install.sh
+```
+
+The installer will:
+- Install all system dependencies
+- Build the application
+- Create the service user
+- Set up the systemd service
+- Configure directories and permissions
+
+3. Start the service:
+```bash
+sudo systemctl start adastra-storage
+sudo systemctl enable adastra-storage  # Enable on boot
+```
+
+4. Check the status:
+```bash
+sudo systemctl status adastra-storage
+```
+
+5. View logs:
+```bash
+sudo journalctl -u adastra-storage -f
+```
+
+### Option 2: Building a Debian Package
+
+1. Build the Debian package:
+```bash
+cd packaging
+chmod +x build-deb.sh
+sudo ./build-deb.sh
+```
+
+2. Install the package:
+```bash
+sudo dpkg -i ../adastra-storage_1.0.0_amd64.deb
+sudo apt-get install -f  # Install any missing dependencies
+```
+
+3. Start the service:
+```bash
+sudo systemctl start adastra-storage
+sudo systemctl enable adastra-storage
+```
+
+### Option 3: Manual Installation
+
+1. Install dependencies:
+```bash
+sudo apt-get update
+sudo apt-get install -y golang-go zfsutils-linux smartmontools \
+  nfs-kernel-server samba targetcli-fb build-essential
+```
+
+2. Install MinIO:
+```bash
+wget https://dl.min.io/server/minio/release/linux-amd64/minio -O /usr/local/bin/minio
+chmod +x /usr/local/bin/minio
+```
+
+3. Build the application:
+```bash
+go build -o appliance ./cmd/appliance
+```
+
+4. Create directories:
+```bash
+sudo mkdir -p /opt/adastra-storage/{bin,data,templates,migrations,logs}
+```
+
+5. Copy files:
+```bash
+sudo cp appliance /opt/adastra-storage/bin/adastra-storage
+sudo cp -r internal/templates/* /opt/adastra-storage/templates/
+sudo cp -r migrations/* /opt/adastra-storage/migrations/
+```
+
+6. Create the service user:
+```bash
+sudo useradd -r -s /bin/false -d /opt/adastra-storage adastra
+sudo chown -R adastra:adastra /opt/adastra-storage
+```
+
+7. Install the systemd service:
+```bash
+sudo cp packaging/adastra-storage.service /etc/systemd/system/
+sudo systemctl daemon-reload
+sudo systemctl enable adastra-storage
+sudo systemctl start adastra-storage
+```
+
+## Configuration
+
+### Installation Directory
+
+The application is installed to `/opt/adastra-storage` with the following structure:
+```
+/opt/adastra-storage/
+├── bin/
+│   └── adastra-storage    # Main binary
+├── data/
+│   └── appliance.db       # SQLite database
+├── templates/             # HTML templates
+├── migrations/            # Database migrations
+├── logs/                  # Log files
+└── uninstall.sh           # Uninstaller script
+```
+
+### Environment Variables
+
+The service uses the following environment variables (set in the systemd service):
+- `INSTALL_DIR`: Installation directory (default: `/opt/adastra-storage`)
+- `DATA_DIR`: Data directory (default: `/opt/adastra-storage/data`)
+
+### Service Configuration
+
+The systemd service file is located at `/etc/systemd/system/adastra-storage.service`.
+
+To modify the service:
+```bash
+sudo systemctl edit adastra-storage
+```
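`systemctl edit` creates a drop-in override for the unit. As a minimal sketch (the override path is standard systemd convention; the relocated data directory shown is purely illustrative), an override that moves the data directory might look like:

```
# /etc/systemd/system/adastra-storage.service.d/override.conf
[Service]
Environment=DATA_DIR=/srv/adastra/data
```

After saving, run `sudo systemctl restart adastra-storage` for the change to take effect.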
+
+## Usage
+
+### Accessing the Web Interface
+
+After installation, access the web interface at:
+```
+http://localhost:8080
+```
+
+Default credentials:
+- **Username**: `admin`
+- **Password**: `admin`
+
+⚠️ **IMPORTANT**: Change the default password immediately after first login!
+
+### Service Management
+
+Start the service:
+```bash
+sudo systemctl start adastra-storage
+```
+
+Stop the service:
+```bash
+sudo systemctl stop adastra-storage
+```
+
+Restart the service:
+```bash
+sudo systemctl restart adastra-storage
+```
+
+Check status:
+```bash
+sudo systemctl status adastra-storage
+```
+
+View logs:
+```bash
+sudo journalctl -u adastra-storage -f
+```
+
+### Prometheus Metrics
+
+The application exposes Prometheus metrics at:
+```
+http://localhost:8080/metrics
+```
+
+### API Examples
+
+Get pools (requires authentication):
+```bash
+curl -b cookies.txt http://127.0.0.1:8080/api/pools
+```
+
+Create a pool (requires admin permission):
+```bash
+curl -X POST -b cookies.txt -H "Content-Type: application/json" \
+  -d '{"name":"tank","vdevs":["/dev/sda"]}' \
+  http://127.0.0.1:8080/api/pools
+```
+
+## Development
+
+### Quick Start
+
+```bash
+make run
+```
+
-Build:
+### Build
+
+```bash
+make build
+```
+
-Run tests:
+### Run Tests
+
+```bash
+make test
+```
+
-The skeleton is intentionally minimal. Next steps include adding real service implementations, authentication, and more tests.
+### Lint
+
-API examples:
-
-Get pools (viewer):
-```bash
-curl -H "X-Auth-User: viewer" -H "X-Auth-Role: viewer" http://127.0.0.1:8080/api/pools
-```
+```bash
+make lint
+```
+
-Create a pool (admin):
-```bash
-curl -s -X POST -H "X-Auth-User: admin" -H "X-Auth-Role: admin" -H "Content-Type: application/json" \
-  -d '{"name":"tank","vdevs":["/dev/sda"]}' http://127.0.0.1:8080/api/pools
-```
+## Uninstallation
+
+### Using the Uninstaller Script
+
+```bash
+sudo /opt/adastra-storage/uninstall.sh
+```
+
+The uninstaller will:
+- Stop and disable the service
+- Remove application files (optionally preserve data)
+- Optionally remove the service user
+
+### Manual Uninstallation
+
+1. Stop and disable the service:
+```bash
+sudo systemctl stop adastra-storage
+sudo systemctl disable adastra-storage
+```
+
+2. Remove files:
+```bash
+sudo rm -rf /opt/adastra-storage
+sudo rm /etc/systemd/system/adastra-storage.service
+sudo systemctl daemon-reload
+```
+
+3. Remove the service user (optional):
+```bash
+sudo userdel adastra
+```
+
+## Architecture
+
+### Components
+
+- **HTTP Server**: Chi router with middleware for auth, RBAC, CSRF
+- **Database**: SQLite with migrations
+- **Services**: Storage, Shares, iSCSI, Object Store
+- **Infrastructure Adapters**: ZFS, NFS, Samba, iSCSI, MinIO
+- **Job Runner**: Async job processing for long-running tasks
+- **Monitoring**: Prometheus metrics and UI dashboard
+- **Audit**: Comprehensive audit logging
+
+### Security
+
+- Session-based authentication with secure cookies
+- CSRF protection compatible with HTMX
+- Role-based access control (RBAC) with fine-grained permissions
+- Password hashing using Argon2id
+- Audit logging of all operations
+
+## Troubleshooting
+
+### Service Won't Start
+
+1. Check service status:
+```bash
+sudo systemctl status adastra-storage
+```
+
+2. Check logs:
+```bash
+sudo journalctl -u adastra-storage -n 50
+```
+
+3. Verify permissions:
+```bash
+ls -la /opt/adastra-storage
+```
+
+4. Check if port 8080 is available:
+```bash
+sudo netstat -tlnp | grep 8080
+```
+
+### Database Issues
+
+The database is located at `/opt/adastra-storage/data/appliance.db`. If you need to reset it:
+```bash
+sudo systemctl stop adastra-storage
+sudo rm /opt/adastra-storage/data/appliance.db
+sudo systemctl start adastra-storage
+```
+
+⚠️ **Warning**: This will delete all data!
+
+### Permission Issues
+
+Ensure the service user has proper permissions:
+```bash
+sudo chown -R adastra:adastra /opt/adastra-storage
+sudo usermod -aG disk adastra  # For ZFS access
+```
+
+## License
+
+[Specify your license here]
+
+## Support
+
+For issues and questions, please open an issue on the project repository.
cmd/appliance/main.go

@@ -3,19 +3,27 @@ package main
 import (
 	"context"
 	"database/sql"
 	"fmt"
 	"log"
 	"net/http"
 	"os"
 	"path/filepath"
 	"os/signal"
 	"syscall"
 	"time"

 	"github.com/example/storage-appliance/internal/audit"
 	httpin "github.com/example/storage-appliance/internal/http"
 	iscsiinfra "github.com/example/storage-appliance/internal/infra/iscsi"
 	"github.com/example/storage-appliance/internal/infra/nfs"
 	"github.com/example/storage-appliance/internal/infra/osexec"
 	"github.com/example/storage-appliance/internal/infra/samba"
 	"github.com/example/storage-appliance/internal/infra/sqlite/db"
 	"github.com/example/storage-appliance/internal/infra/zfs"
 	"github.com/example/storage-appliance/internal/job"
 	iscsiSvcPkg "github.com/example/storage-appliance/internal/service/iscsi"
 	"github.com/example/storage-appliance/internal/service/mock"
 	"github.com/example/storage-appliance/internal/service/shares"
 	"github.com/example/storage-appliance/internal/service/storage"
 	_ "github.com/glebarez/sqlite"
 	"github.com/go-chi/chi/v5"

@@ -24,8 +32,26 @@ import (
 func main() {
 	ctx := context.Background()

+	// Determine data directory (use /opt/adastra-storage/data in production, current dir in dev)
+	dataDir := os.Getenv("DATA_DIR")
+	if dataDir == "" {
+		dataDir = os.Getenv("INSTALL_DIR")
+		if dataDir != "" {
+			dataDir = filepath.Join(dataDir, "data")
+		} else {
+			dataDir = "." // Development mode
+		}
+	}
+
+	// Ensure data directory exists
+	if err := os.MkdirAll(dataDir, 0755); err != nil {
+		log.Fatalf("failed to create data directory: %v", err)
+	}
+
 	// Connect simple sqlite DB (file)
-	dsn := "file:appliance.db?_foreign_keys=on"
+	dbPath := filepath.Join(dataDir, "appliance.db")
+	dsn := fmt.Sprintf("file:%s?_foreign_keys=on", dbPath)
 	sqldb, err := sql.Open("sqlite", dsn)
 	if err != nil {
 		log.Fatalf("open db: %v", err)

@@ -51,13 +77,23 @@ func main() {
 	// Attach router and app dependencies
 	// wire mocks for now; replace with real adapters in infra
 	diskSvc := &mock.MockDiskService{}
-	zfsSvc := &mock.MockZFSService{}
-	jobRunner := &mock.MockJobRunner{}
-	auditLogger := audit.NewSQLAuditLogger(sqldb)
+	// job runner uses sqlite DB and zfs adapter
+	zfsAdapter := zfs.NewAdapter(osexec.Default)
+	jobRunner := &job.Runner{DB: sqldb}
+	auditLogger := audit.NewSQLAuditLogger(sqldb)
+	jobRunner.ZFS = zfsAdapter
+	jobRunner.Audit = auditLogger
+	// storage service wiring: use zfsAdapter and jobRunner and audit logger
+	storageSvc := storage.NewStorageService(zfsAdapter, jobRunner, auditLogger)
+	nfsAdapter := nfs.NewAdapter(osexec.Default, "")
+	sambaAdapter := samba.NewAdapter(osexec.Default, "")
+	sharesSvc := shares.NewSharesService(sqldb, nfsAdapter, sambaAdapter, auditLogger)
+
+	// iSCSI adapter and service
+	iscsiAdapter := iscsiinfra.NewAdapter(osexec.Default)
+	iscsiSvc := iscsiSvcPkg.NewISCSIService(sqldb, zfsAdapter, iscsiAdapter, auditLogger)
+
+	zfsSvc := zfsAdapter
 	app := &httpin.App{
 		DB:      sqldb,
 		DiskSvc: diskSvc,

@@ -65,6 +101,9 @@ func main() {
 		JobRunner:  jobRunner,
 		HTTPClient: &http.Client{},
+		StorageSvc: storageSvc,
+		ShareSvc:   sharesSvc,
+		ISCSISvc:   iscsiSvc,
 		Runner:     osexec.Default,
 	}
 	r.Use(uuidMiddleware)
 	httpin.RegisterRoutes(r, app)
@@ -2,7 +2,9 @@ package audit
 import (
 	"context"
+	"crypto/sha256"
 	"database/sql"
+	"encoding/hex"
 	"encoding/json"
 	"log"
 	"time"

@@ -19,6 +21,12 @@ type Event struct {
 	ResourceID string
 	Success    bool
 	Details    map[string]any
+	// Enhanced fields
+	Actor       string // Username or user identifier
+	Resource    string // Full resource identifier (e.g., "pool:my-pool")
+	PayloadHash string // SHA256 hash of request payload
+	Result      string // Success/failure message or status
+	ClientIP    string // Client IP address
 }

 type AuditLogger interface {

@@ -40,12 +48,67 @@ func (l *SQLAuditLogger) Record(ctx context.Context, e Event) error {
 	if e.Timestamp.IsZero() {
 		e.Timestamp = time.Now()
 	}
-	detailsJSON, _ := json.Marshal(e.Details)
-	_, err := l.DB.ExecContext(ctx, `INSERT INTO audit_events (id, ts, user_id, action, resource_type, resource_id, success, details) VALUES (?, ?, ?, ?, ?, ?, ?, ?)`, e.ID, e.Timestamp, e.UserID, e.Action, e.ResourceType, e.ResourceID, boolToInt(e.Success), string(detailsJSON))
-	if err != nil {
-		log.Printf("audit record failed: %v", err)
-	}
-	return err
+
+	// Set actor from UserID if not provided
+	if e.Actor == "" {
+		e.Actor = e.UserID
+	}
+
+	// Build resource string from ResourceType and ResourceID
+	if e.Resource == "" {
+		if e.ResourceID != "" {
+			e.Resource = e.ResourceType + ":" + e.ResourceID
+		} else {
+			e.Resource = e.ResourceType
+		}
+	}
+
+	// Set result from Success if not provided
+	if e.Result == "" {
+		if e.Success {
+			e.Result = "success"
+		} else {
+			e.Result = "failure"
+		}
+	}
+
+	detailsJSON, _ := json.Marshal(e.Details)
+
+	// Try to insert with all columns, fallback to basic columns if enhanced columns don't exist
+	_, err := l.DB.ExecContext(ctx,
+		`INSERT INTO audit_events (id, ts, user_id, action, resource_type, resource_id, success, details, actor, resource, payload_hash, result, client_ip)
+		 VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
+		e.ID, e.Timestamp, e.UserID, e.Action, e.ResourceType, e.ResourceID, boolToInt(e.Success), string(detailsJSON),
+		e.Actor, e.Resource, e.PayloadHash, e.Result, e.ClientIP)
+	if err != nil {
+		// Fallback to basic insert if enhanced columns don't exist yet
+		_, err2 := l.DB.ExecContext(ctx,
+			`INSERT INTO audit_events (id, ts, user_id, action, resource_type, resource_id, success, details)
+			 VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
+			e.ID, e.Timestamp, e.UserID, e.Action, e.ResourceType, e.ResourceID, boolToInt(e.Success), string(detailsJSON))
+		if err2 != nil {
+			log.Printf("audit record failed: %v (fallback also failed: %v)", err, err2)
+			return err2
+		}
+		log.Printf("audit record inserted with fallback (enhanced columns may not exist): %v", err)
+	}
+	return nil
+}
+
+// HashPayload computes SHA256 hash of a payload (JSON string or bytes)
+func HashPayload(payload interface{}) string {
+	var data []byte
+	switch v := payload.(type) {
+	case []byte:
+		data = v
+	case string:
+		data = []byte(v)
+	default:
+		jsonData, _ := json.Marshal(payload)
+		data = jsonData
+	}
+	hash := sha256.Sum256(data)
+	return hex.EncodeToString(hash[:])
+}

 func boolToInt(b bool) int {
internal/auth/password.go (new file, 89 lines)

@@ -0,0 +1,89 @@
package auth

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"errors"
	"fmt"
	"strings"

	"golang.org/x/crypto/argon2"
)

const (
	// Argon2id parameters
	argon2Memory      = 64 * 1024 // 64 MB
	argon2Iterations  = 3
	argon2Parallelism = 2
	argon2SaltLength  = 16
	argon2KeyLength   = 32
)

// HashPassword hashes a password using Argon2id
func HashPassword(password string) (string, error) {
	// Generate a random salt
	salt := make([]byte, argon2SaltLength)
	if _, err := rand.Read(salt); err != nil {
		return "", err
	}

	// Hash the password
	hash := argon2.IDKey([]byte(password), salt, argon2Iterations, argon2Memory, argon2Parallelism, argon2KeyLength)

	// Encode the hash and salt
	b64Salt := base64.RawStdEncoding.EncodeToString(salt)
	b64Hash := base64.RawStdEncoding.EncodeToString(hash)

	// Return the encoded hash in the format: $argon2id$v=19$m=65536,t=3,p=2$salt$hash
	return fmt.Sprintf("$argon2id$v=%d$m=%d,t=%d,p=%d$%s$%s",
		argon2.Version, argon2Memory, argon2Iterations, argon2Parallelism, b64Salt, b64Hash), nil
}

// VerifyPassword verifies a password against a hash
func VerifyPassword(password, encodedHash string) (bool, error) {
	// Parse the encoded hash
	parts := strings.Split(encodedHash, "$")
	if len(parts) != 6 {
		return false, errors.New("invalid hash format")
	}

	if parts[1] != "argon2id" {
		return false, errors.New("unsupported hash algorithm")
	}

	// Parse version
	var version int
	if _, err := fmt.Sscanf(parts[2], "v=%d", &version); err != nil {
		return false, err
	}
	if version != argon2.Version {
		return false, errors.New("incompatible version")
	}

	// Parse parameters
	var memory, iterations, parallelism int
	if _, err := fmt.Sscanf(parts[3], "m=%d,t=%d,p=%d", &memory, &iterations, &parallelism); err != nil {
		return false, err
	}

	// Decode salt and hash
	salt, err := base64.RawStdEncoding.DecodeString(parts[4])
	if err != nil {
		return false, err
	}

	hash, err := base64.RawStdEncoding.DecodeString(parts[5])
	if err != nil {
		return false, err
	}

	// Compute the hash of the password
	otherHash := argon2.IDKey([]byte(password), salt, uint32(iterations), uint32(memory), uint8(parallelism), uint32(len(hash)))

	// Compare hashes in constant time
	if subtle.ConstantTimeCompare(hash, otherHash) == 1 {
		return true, nil
	}
	return false, nil
}
internal/auth/rbac.go (new file, 183 lines)

@@ -0,0 +1,183 @@
package auth

import (
	"context"
	"database/sql"
)

type Permission struct {
	ID          string
	Name        string
	Description string
}

type Role struct {
	ID          string
	Name        string
	Description string
}

type RBACStore struct {
	DB *sql.DB
}

func NewRBACStore(db *sql.DB) *RBACStore {
	return &RBACStore{DB: db}
}

// GetUserRoles retrieves all roles for a user
func (s *RBACStore) GetUserRoles(ctx context.Context, userID string) ([]Role, error) {
	rows, err := s.DB.QueryContext(ctx,
		`SELECT r.id, r.name, r.description FROM roles r
		 INNER JOIN user_roles ur ON r.id = ur.role_id
		 WHERE ur.user_id = ?`,
		userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var roles []Role
	for rows.Next() {
		var role Role
		if err := rows.Scan(&role.ID, &role.Name, &role.Description); err != nil {
			return nil, err
		}
		roles = append(roles, role)
	}
	return roles, rows.Err()
}

// GetRolePermissions retrieves all permissions for a role
func (s *RBACStore) GetRolePermissions(ctx context.Context, roleID string) ([]Permission, error) {
	rows, err := s.DB.QueryContext(ctx,
		`SELECT p.id, p.name, p.description FROM permissions p
		 INNER JOIN role_permissions rp ON p.id = rp.permission_id
		 WHERE rp.role_id = ?`,
		roleID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var permissions []Permission
	for rows.Next() {
		var perm Permission
		if err := rows.Scan(&perm.ID, &perm.Name, &perm.Description); err != nil {
			return nil, err
		}
		permissions = append(permissions, perm)
	}
	return permissions, rows.Err()
}

// GetUserPermissions retrieves all permissions for a user (through their roles)
func (s *RBACStore) GetUserPermissions(ctx context.Context, userID string) ([]Permission, error) {
	rows, err := s.DB.QueryContext(ctx,
		`SELECT DISTINCT p.id, p.name, p.description FROM permissions p
		 INNER JOIN role_permissions rp ON p.id = rp.permission_id
		 INNER JOIN user_roles ur ON rp.role_id = ur.role_id
		 WHERE ur.user_id = ?`,
		userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var permissions []Permission
	for rows.Next() {
		var perm Permission
		if err := rows.Scan(&perm.ID, &perm.Name, &perm.Description); err != nil {
			return nil, err
		}
		permissions = append(permissions, perm)
	}
	return permissions, rows.Err()
}

// UserHasPermission checks if a user has a specific permission
func (s *RBACStore) UserHasPermission(ctx context.Context, userID, permission string) (bool, error) {
	var count int
	err := s.DB.QueryRowContext(ctx,
		`SELECT COUNT(*) FROM permissions p
		 INNER JOIN role_permissions rp ON p.id = rp.permission_id
		 INNER JOIN user_roles ur ON rp.role_id = ur.role_id
		 WHERE ur.user_id = ? AND p.name = ?`,
		userID, permission).Scan(&count)
	if err != nil {
		return false, err
	}
	return count > 0, nil
}

// AssignRoleToUser assigns a role to a user
func (s *RBACStore) AssignRoleToUser(ctx context.Context, userID, roleID string) error {
	_, err := s.DB.ExecContext(ctx,
		`INSERT OR IGNORE INTO user_roles (user_id, role_id) VALUES (?, ?)`,
		userID, roleID)
	return err
}

// RemoveRoleFromUser removes a role from a user
func (s *RBACStore) RemoveRoleFromUser(ctx context.Context, userID, roleID string) error {
	_, err := s.DB.ExecContext(ctx,
		`DELETE FROM user_roles WHERE user_id = ? AND role_id = ?`,
		userID, roleID)
	return err
}

// GetAllRoles retrieves all roles
func (s *RBACStore) GetAllRoles(ctx context.Context) ([]Role, error) {
	rows, err := s.DB.QueryContext(ctx,
		`SELECT id, name, description FROM roles ORDER BY name`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var roles []Role
	for rows.Next() {
		var role Role
		if err := rows.Scan(&role.ID, &role.Name, &role.Description); err != nil {
			return nil, err
		}
		roles = append(roles, role)
	}
	return roles, rows.Err()
}

// GetAllPermissions retrieves all permissions
func (s *RBACStore) GetAllPermissions(ctx context.Context) ([]Permission, error) {
	rows, err := s.DB.QueryContext(ctx,
		`SELECT id, name, description FROM permissions ORDER BY name`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var permissions []Permission
	for rows.Next() {
		var perm Permission
		if err := rows.Scan(&perm.ID, &perm.Name, &perm.Description); err != nil {
			return nil, err
		}
		permissions = append(permissions, perm)
	}
	return permissions, rows.Err()
}

// AssignPermissionToRole assigns a permission to a role
func (s *RBACStore) AssignPermissionToRole(ctx context.Context, roleID, permissionID string) error {
	_, err := s.DB.ExecContext(ctx,
		`INSERT OR IGNORE INTO role_permissions (role_id, permission_id) VALUES (?, ?)`,
		roleID, permissionID)
	return err
}

// RemovePermissionFromRole removes a permission from a role
func (s *RBACStore) RemovePermissionFromRole(ctx context.Context, roleID, permissionID string) error {
	_, err := s.DB.ExecContext(ctx,
		`DELETE FROM role_permissions WHERE role_id = ? AND permission_id = ?`,
		roleID, permissionID)
	return err
}
internal/auth/session.go (new file, 108 lines)

@@ -0,0 +1,108 @@
package auth

import (
	"context"
	"crypto/rand"
	"database/sql"
	"encoding/base64"
	"time"

	"github.com/google/uuid"
)

const (
	SessionCookieName = "session_token"
	SessionDuration   = 24 * time.Hour
)

type Session struct {
	ID        string
	UserID    string
	Token     string
	ExpiresAt time.Time
	CreatedAt time.Time
}

type SessionStore struct {
	DB *sql.DB
}

func NewSessionStore(db *sql.DB) *SessionStore {
	return &SessionStore{DB: db}
}

// GenerateToken generates a secure random token
func GenerateToken() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(b), nil
}

// CreateSession creates a new session for a user
func (s *SessionStore) CreateSession(ctx context.Context, userID string) (*Session, error) {
	token, err := GenerateToken()
	if err != nil {
		return nil, err
	}

	sessionID := uuid.New().String()
	expiresAt := time.Now().Add(SessionDuration)

	_, err = s.DB.ExecContext(ctx,
		`INSERT INTO sessions (id, user_id, token, expires_at) VALUES (?, ?, ?, ?)`,
		sessionID, userID, token, expiresAt)
	if err != nil {
		return nil, err
	}

	return &Session{
		ID:        sessionID,
		UserID:    userID,
		Token:     token,
		ExpiresAt: expiresAt,
		CreatedAt: time.Now(),
	}, nil
}

// GetSession retrieves a session by token
func (s *SessionStore) GetSession(ctx context.Context, token string) (*Session, error) {
	var session Session
	var expiresAtStr string
	err := s.DB.QueryRowContext(ctx,
		`SELECT id, user_id, token, expires_at, created_at FROM sessions WHERE token = ? AND expires_at > ?`,
		token, time.Now()).Scan(&session.ID, &session.UserID, &session.Token, &expiresAtStr, &session.CreatedAt)
	if err != nil {
		return nil, err
	}

	session.ExpiresAt, err = time.Parse("2006-01-02 15:04:05", expiresAtStr)
	if err != nil {
		// Try with timezone
		session.ExpiresAt, err = time.Parse(time.RFC3339, expiresAtStr)
		if err != nil {
			return nil, err
		}
	}

	return &session, nil
}

// DeleteSession deletes a session by token
func (s *SessionStore) DeleteSession(ctx context.Context, token string) error {
	_, err := s.DB.ExecContext(ctx, `DELETE FROM sessions WHERE token = ?`, token)
	return err
}

// DeleteUserSessions deletes all sessions for a user
func (s *SessionStore) DeleteUserSessions(ctx context.Context, userID string) error {
	_, err := s.DB.ExecContext(ctx, `DELETE FROM sessions WHERE user_id = ?`, userID)
	return err
}

// CleanupExpiredSessions removes expired sessions
func (s *SessionStore) CleanupExpiredSessions(ctx context.Context) error {
	_, err := s.DB.ExecContext(ctx, `DELETE FROM sessions WHERE expires_at < ?`, time.Now())
	return err
}
102
internal/auth/user.go
Normal file
102
internal/auth/user.go
Normal file
@@ -0,0 +1,102 @@
package auth

import (
	"context"
	"database/sql"
	"errors"
)

type User struct {
	ID           string
	Username     string
	PasswordHash string
	Role         string // Legacy field, kept for backward compatibility
	CreatedAt    string
}

type UserStore struct {
	DB *sql.DB
}

func NewUserStore(db *sql.DB) *UserStore {
	return &UserStore{DB: db}
}

// GetUserByUsername retrieves a user by username.
func (s *UserStore) GetUserByUsername(ctx context.Context, username string) (*User, error) {
	var user User
	err := s.DB.QueryRowContext(ctx,
		`SELECT id, username, password_hash, role, created_at FROM users WHERE username = ?`,
		username).Scan(&user.ID, &user.Username, &user.PasswordHash, &user.Role, &user.CreatedAt)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, errors.New("user not found")
		}
		return nil, err
	}
	return &user, nil
}

// GetUserByID retrieves a user by ID.
func (s *UserStore) GetUserByID(ctx context.Context, userID string) (*User, error) {
	var user User
	err := s.DB.QueryRowContext(ctx,
		`SELECT id, username, password_hash, role, created_at FROM users WHERE id = ?`,
		userID).Scan(&user.ID, &user.Username, &user.PasswordHash, &user.Role, &user.CreatedAt)
	if err != nil {
		if err == sql.ErrNoRows {
			return nil, errors.New("user not found")
		}
		return nil, err
	}
	return &user, nil
}

// CreateUser creates a new user.
func (s *UserStore) CreateUser(ctx context.Context, username, password string) (*User, error) {
	passwordHash, err := HashPassword(password)
	if err != nil {
		return nil, err
	}

	userID := username // Using the username as ID for simplicity; could use a UUID.
	_, err = s.DB.ExecContext(ctx,
		`INSERT INTO users (id, username, password_hash) VALUES (?, ?, ?)`,
		userID, username, passwordHash)
	if err != nil {
		return nil, err
	}

	return s.GetUserByID(ctx, userID)
}

// UpdatePassword updates a user's password.
func (s *UserStore) UpdatePassword(ctx context.Context, userID, newPassword string) error {
	passwordHash, err := HashPassword(newPassword)
	if err != nil {
		return err
	}

	_, err = s.DB.ExecContext(ctx,
		`UPDATE users SET password_hash = ? WHERE id = ?`,
		passwordHash, userID)
	return err
}

// Authenticate verifies a username and password.
func (s *UserStore) Authenticate(ctx context.Context, username, password string) (*User, error) {
	user, err := s.GetUserByUsername(ctx, username)
	if err != nil {
		return nil, err
	}

	valid, err := VerifyPassword(password, user.PasswordHash)
	if err != nil {
		return nil, err
	}
	if !valid {
		return nil, errors.New("invalid password")
	}

	return user, nil
}
@@ -52,10 +52,11 @@ type Dataset struct {
}

type Share struct {
	ID     UUID
	Name   string
	Path   string
	Type   string // nfs or smb
	Config map[string]string
}

type LUN struct {
@@ -73,4 +74,5 @@ type Job struct {
	Owner     UUID
	CreatedAt time.Time
	UpdatedAt time.Time
	Details   map[string]any
}
258
internal/http/admin_handlers.go
Normal file
@@ -0,0 +1,258 @@
package http

import (
	"encoding/json"
	"net/http"

	"github.com/example/storage-appliance/internal/auth"
	"github.com/go-chi/chi/v5"
)

// UsersHandler shows the user management page.
func (a *App) UsersHandler(w http.ResponseWriter, r *http.Request) {
	data := templateData(r, map[string]interface{}{
		"Title": "User Management",
	})
	if err := templates.ExecuteTemplate(w, "users", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// HXUsersHandler returns the HTMX partial for the users list.
func (a *App) HXUsersHandler(w http.ResponseWriter, r *http.Request) {
	rbacStore := auth.NewRBACStore(a.DB)

	// Get all users (simplified; in production you would want pagination).
	rows, err := a.DB.QueryContext(r.Context(), `SELECT id, username, created_at FROM users ORDER BY username`)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer rows.Close()

	type UserWithRoles struct {
		ID        string
		Username  string
		CreatedAt string
		Roles     []auth.Role
	}

	var users []UserWithRoles
	for rows.Next() {
		var u UserWithRoles
		if err := rows.Scan(&u.ID, &u.Username, &u.CreatedAt); err != nil {
			continue
		}
		roles, _ := rbacStore.GetUserRoles(r.Context(), u.ID)
		u.Roles = roles
		users = append(users, u)
	}

	data := map[string]interface{}{
		"Users": users,
	}
	if err := templates.ExecuteTemplate(w, "hx_users", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// CreateUserHandler creates a new user.
func (a *App) CreateUserHandler(w http.ResponseWriter, r *http.Request) {
	username := r.FormValue("username")
	password := r.FormValue("password")

	if username == "" || password == "" {
		http.Error(w, "username and password required", http.StatusBadRequest)
		return
	}

	userStore := auth.NewUserStore(a.DB)
	_, err := userStore.CreateUser(r.Context(), username, password)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Return an HTMX refresh or a redirect, depending on the caller.
	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Refresh", "true")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/admin/users", http.StatusFound)
	}
}

// DeleteUserHandler deletes a user.
func (a *App) DeleteUserHandler(w http.ResponseWriter, r *http.Request) {
	userID := chi.URLParam(r, "id")

	_, err := a.DB.ExecContext(r.Context(), `DELETE FROM users WHERE id = ?`, userID)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Refresh", "true")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/admin/users", http.StatusFound)
	}
}

// UpdateUserRolesHandler updates the roles assigned to a user.
func (a *App) UpdateUserRolesHandler(w http.ResponseWriter, r *http.Request) {
	userID := chi.URLParam(r, "id")

	var req struct {
		RoleIDs []string `json:"role_ids"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request", http.StatusBadRequest)
		return
	}

	rbacStore := auth.NewRBACStore(a.DB)

	// Get the current roles.
	currentRoles, _ := rbacStore.GetUserRoles(r.Context(), userID)

	// Remove all current roles.
	for _, role := range currentRoles {
		rbacStore.RemoveRoleFromUser(r.Context(), userID, role.ID)
	}

	// Add the new roles.
	for _, roleID := range req.RoleIDs {
		rbacStore.AssignRoleToUser(r.Context(), userID, roleID)
	}

	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Refresh", "true")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/admin/users", http.StatusFound)
	}
}

// RolesHandler shows the role management page.
func (a *App) RolesHandler(w http.ResponseWriter, r *http.Request) {
	data := templateData(r, map[string]interface{}{
		"Title": "Role Management",
	})
	if err := templates.ExecuteTemplate(w, "roles", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// HXRolesHandler returns the HTMX partial for the roles list.
func (a *App) HXRolesHandler(w http.ResponseWriter, r *http.Request) {
	rbacStore := auth.NewRBACStore(a.DB)

	roles, err := rbacStore.GetAllRoles(r.Context())
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	type RoleWithPermissions struct {
		auth.Role
		Permissions []auth.Permission
	}

	var rolesWithPerms []RoleWithPermissions
	for _, role := range roles {
		rwp := RoleWithPermissions{Role: role}
		perms, _ := rbacStore.GetRolePermissions(r.Context(), role.ID)
		rwp.Permissions = perms
		rolesWithPerms = append(rolesWithPerms, rwp)
	}

	data := map[string]interface{}{
		"Roles": rolesWithPerms,
	}
	if err := templates.ExecuteTemplate(w, "hx_roles", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// CreateRoleHandler creates a new role.
func (a *App) CreateRoleHandler(w http.ResponseWriter, r *http.Request) {
	name := r.FormValue("name")
	description := r.FormValue("description")

	if name == "" {
		http.Error(w, "name required", http.StatusBadRequest)
		return
	}

	roleID := name // Using the name as ID for simplicity.
	_, err := a.DB.ExecContext(r.Context(),
		`INSERT INTO roles (id, name, description) VALUES (?, ?, ?)`,
		roleID, name, description)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Refresh", "true")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/admin/roles", http.StatusFound)
	}
}

// DeleteRoleHandler deletes a role.
func (a *App) DeleteRoleHandler(w http.ResponseWriter, r *http.Request) {
	roleID := chi.URLParam(r, "id")

	_, err := a.DB.ExecContext(r.Context(), `DELETE FROM roles WHERE id = ?`, roleID)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Refresh", "true")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/admin/roles", http.StatusFound)
	}
}

// UpdateRolePermissionsHandler updates the permissions assigned to a role.
func (a *App) UpdateRolePermissionsHandler(w http.ResponseWriter, r *http.Request) {
	roleID := chi.URLParam(r, "id")

	var req struct {
		PermissionIDs []string `json:"permission_ids"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "invalid request", http.StatusBadRequest)
		return
	}

	rbacStore := auth.NewRBACStore(a.DB)

	// Get the current permissions.
	currentPerms, _ := rbacStore.GetRolePermissions(r.Context(), roleID)

	// Remove all current permissions.
	for _, perm := range currentPerms {
		rbacStore.RemovePermissionFromRole(r.Context(), roleID, perm.ID)
	}

	// Add the new permissions.
	for _, permID := range req.PermissionIDs {
		rbacStore.AssignPermissionToRole(r.Context(), roleID, permID)
	}

	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Refresh", "true")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/admin/roles", http.StatusFound)
	}
}
@@ -4,7 +4,9 @@ import (
	"database/sql"
	"net/http"

	"github.com/example/storage-appliance/internal/infra/osexec"
	"github.com/example/storage-appliance/internal/service"
	"github.com/example/storage-appliance/internal/service/storage"
)

// App contains injected dependencies for handlers.
@@ -15,4 +17,8 @@ type App struct {
	JobRunner  service.JobRunner
	HTTPClient *http.Client
	StorageSvc *storage.StorageService
	ShareSvc   service.SharesService
	ISCSISvc   service.ISCSIService
	ObjectSvc  service.ObjectService
	Runner     osexec.Runner
}
125
internal/http/auth_handlers.go
Normal file
@@ -0,0 +1,125 @@
package http

import (
	"encoding/json"
	"net/http"

	"github.com/example/storage-appliance/internal/auth"
)

// LoginHandler handles user login.
func (a *App) LoginHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method == "GET" {
		// Show the login page.
		data := map[string]interface{}{
			"Title": "Login",
		}
		if err := templates.ExecuteTemplate(w, "login", data); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
		}
		return
	}

	// Handle the POST login.
	var req struct {
		Username string `json:"username"`
		Password string `json:"password"`
	}

	if r.Header.Get("Content-Type") == "application/json" {
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "invalid request", http.StatusBadRequest)
			return
		}
	} else {
		req.Username = r.FormValue("username")
		req.Password = r.FormValue("password")
	}

	// Authenticate the user.
	userStore := auth.NewUserStore(a.DB)
	user, err := userStore.Authenticate(r.Context(), req.Username, req.Password)
	if err != nil {
		if r.Header.Get("HX-Request") == "true" {
			w.Header().Set("Content-Type", "text/html")
			w.WriteHeader(http.StatusUnauthorized)
			w.Write([]byte(`<div class="text-red-600">Invalid username or password</div>`))
		} else {
			http.Error(w, "invalid credentials", http.StatusUnauthorized)
		}
		return
	}

	// Create a session.
	sessionStore := auth.NewSessionStore(a.DB)
	session, err := sessionStore.CreateSession(r.Context(), user.ID)
	if err != nil {
		http.Error(w, "failed to create session", http.StatusInternalServerError)
		return
	}

	// Set the session cookie.
	http.SetCookie(w, &http.Cookie{
		Name:     auth.SessionCookieName,
		Value:    session.Token,
		Path:     "/",
		HttpOnly: true,
		Secure:   false, // Set to true in production with HTTPS.
		SameSite: http.SameSiteStrictMode,
		MaxAge:   int(auth.SessionDuration.Seconds()),
	})

	// Set the CSRF token cookie.
	csrfToken := generateCSRFToken()
	http.SetCookie(w, &http.Cookie{
		Name:     "csrf_token",
		Value:    csrfToken,
		Path:     "/",
		HttpOnly: false, // HTMX needs to read it from script.
		Secure:   false,
		SameSite: http.SameSiteStrictMode,
		MaxAge:   int(auth.SessionDuration.Seconds()),
	})

	// Redirect or return success.
	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Redirect", "/dashboard")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/dashboard", http.StatusFound)
	}
}

// LogoutHandler handles user logout.
func (a *App) LogoutHandler(w http.ResponseWriter, r *http.Request) {
	// Get the session token from the cookie.
	cookie, err := r.Cookie(auth.SessionCookieName)
	if err == nil {
		// Delete the session.
		sessionStore := auth.NewSessionStore(a.DB)
		sessionStore.DeleteSession(r.Context(), cookie.Value)
	}

	// Clear the cookies.
	http.SetCookie(w, &http.Cookie{
		Name:     auth.SessionCookieName,
		Value:    "",
		Path:     "/",
		HttpOnly: true,
		MaxAge:   -1,
	})
	http.SetCookie(w, &http.Cookie{
		Name:     "csrf_token",
		Value:    "",
		Path:     "/",
		HttpOnly: false,
		MaxAge:   -1,
	})

	if r.Header.Get("HX-Request") == "true" {
		w.Header().Set("HX-Redirect", "/login")
		w.WriteHeader(http.StatusOK)
	} else {
		http.Redirect(w, r, "/login", http.StatusFound)
	}
}
@@ -2,20 +2,27 @@ package http

import (
	"encoding/json"
	"html/template"
	"net/http"
	"path/filepath"
	"strings"

	"github.com/example/storage-appliance/internal/audit"
	"github.com/example/storage-appliance/internal/domain"
	"github.com/go-chi/chi/v5"
)

var templates *template.Template

func init() {
	var err error
	// Try a couple of relative paths so tests work regardless of cwd.
	templates, err = template.ParseGlob("internal/templates/*.html")
	if err != nil {
		templates, err = template.ParseGlob("../templates/*.html")
	}
	if err != nil {
		templates, err = template.ParseGlob("./templates/*.html")
	}
	if err != nil {
		// Fall back to a minimal template so tests pass when files are missing.
		templates = template.New("dashboard.html")
@@ -24,9 +31,9 @@ func init() {
}
func (a *App) DashboardHandler(w http.ResponseWriter, r *http.Request) {
	data := templateData(r, map[string]interface{}{
		"Title": "Storage Appliance Dashboard",
	})
	if err := templates.ExecuteTemplate(w, "base", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
@@ -39,6 +46,11 @@ func (a *App) PoolsHandler(w http.ResponseWriter, r *http.Request) {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Audit the list action if possible.
	if a.StorageSvc != nil && a.StorageSvc.Audit != nil {
		user, _ := r.Context().Value(ContextKeyUser).(string)
		a.StorageSvc.Audit.Record(ctx, audit.Event{UserID: user, Action: "pool.list", ResourceType: "pool", ResourceID: "all", Success: true})
	}
	j, err := json.Marshal(pools)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
@@ -53,6 +65,176 @@ func (a *App) JobsHandler(w http.ResponseWriter, r *http.Request) {
	w.Write([]byte(`[]`))
}

// PoolDatasetsHandler returns the datasets for a given pool via the API.
func (a *App) PoolDatasetsHandler(w http.ResponseWriter, r *http.Request) {
	pool := chi.URLParam(r, "pool")
	ds, err := a.StorageSvc.ListDatasets(r.Context(), pool)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	b, _ := json.Marshal(ds)
	w.Header().Set("Content-Type", "application/json")
	w.Write(b)
	if a.StorageSvc != nil && a.StorageSvc.Audit != nil {
		user, _ := r.Context().Value(ContextKeyUser).(string)
		a.StorageSvc.Audit.Record(r.Context(), audit.Event{UserID: user, Action: "dataset.list", ResourceType: "dataset", ResourceID: pool, Success: true})
	}
}

// CreateDatasetHandler handles dataset creation via the API.
func (a *App) CreateDatasetHandler(w http.ResponseWriter, r *http.Request) {
	type req struct {
		Name  string            `json:"name"`
		Props map[string]string `json:"props"`
	}
	var body req
	if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	if err := a.StorageSvc.CreateDataset(r.Context(), user, role, body.Name, body.Props); err != nil {
		http.Error(w, err.Error(), http.StatusForbidden)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

// SnapshotHandler creates a snapshot via the storage service and returns the job id.
func (a *App) SnapshotHandler(w http.ResponseWriter, r *http.Request) {
	dataset := chi.URLParam(r, "dataset")
	type req struct {
		Name string `json:"name"`
	}
	var body req
	if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	id, err := a.StorageSvc.Snapshot(r.Context(), user, role, dataset, body.Name)
	if err != nil {
		http.Error(w, err.Error(), http.StatusForbidden)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"job_id":"` + id + `"}`))
}

// PoolScrubHandler starts a scrub on the pool and returns a job id.
func (a *App) PoolScrubHandler(w http.ResponseWriter, r *http.Request) {
	pool := chi.URLParam(r, "pool")
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	id, err := a.StorageSvc.ScrubStart(r.Context(), user, role, pool)
	if err != nil {
		http.Error(w, err.Error(), http.StatusForbidden)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"job_id":"` + id + `"}`))
}

// NFSStatusHandler returns the NFS server service status.
func (a *App) NFSStatusHandler(w http.ResponseWriter, r *http.Request) {
	status, err := a.ShareSvc.NFSStatus(r.Context())
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"status":"` + status + `"}`))
}

// ObjectStoreHandler renders the object storage page (MinIO).
func (a *App) ObjectStoreHandler(w http.ResponseWriter, r *http.Request) {
	data := map[string]interface{}{"Title": "Object Storage"}
	if err := templates.ExecuteTemplate(w, "base", data); err != nil {
		if err2 := templates.ExecuteTemplate(w, "object_store", data); err2 != nil {
			http.Error(w, err2.Error(), http.StatusInternalServerError)
		}
	}
}

// HXBucketsHandler renders the buckets list partial.
func (a *App) HXBucketsHandler(w http.ResponseWriter, r *http.Request) {
	var buckets []string
	if a.ObjectSvc != nil {
		buckets, _ = a.ObjectSvc.ListBuckets(r.Context())
	}
	if err := templates.ExecuteTemplate(w, "hx_buckets", buckets); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// CreateBucketHandler creates a bucket through the ObjectSvc.
func (a *App) CreateBucketHandler(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	name := r.FormValue("name")
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	if a.ObjectSvc == nil {
		http.Error(w, "object service not configured", http.StatusInternalServerError)
		return
	}
	id, err := a.ObjectSvc.CreateBucket(r.Context(), user, role, name)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	data := map[string]any{"JobID": id, "Name": name}
	if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// ObjectSettingsHandler handles updating object storage settings.
func (a *App) ObjectSettingsHandler(w http.ResponseWriter, r *http.Request) {
	// Accept a JSON body with settings, or form values.
	type req struct {
		AccessKey string `json:"access_key"`
		SecretKey string `json:"secret_key"`
		DataPath  string `json:"data_path"`
		Port      int    `json:"port"`
		TLS       bool   `json:"tls"`
	}
	var body req
	if r.Header.Get("Content-Type") == "application/json" {
		if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
	} else {
		if err := r.ParseForm(); err == nil {
			body.AccessKey = r.FormValue("access_key")
			body.SecretKey = r.FormValue("secret_key")
			body.DataPath = r.FormValue("data_path")
			// TODO: parse port and tls from the form as well.
		}
	}
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	if a.ObjectSvc == nil {
		http.Error(w, "object service not configured", http.StatusInternalServerError)
		return
	}
	// SetSettings takes its settings as 'any' (simplified), so wrap them in a
	// map and let the object service convert to its concrete type internally.
	settings := map[string]any{"access_key": body.AccessKey, "secret_key": body.SecretKey, "data_path": body.DataPath, "port": body.Port, "tls": body.TLS}
	if err := a.ObjectSvc.SetSettings(r.Context(), user, role, settings); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}

// CreatePoolHandler receives a request to create a pool and enqueues a job.
func (a *App) CreatePoolHandler(w http.ResponseWriter, r *http.Request) {
	// Minimal implementation that reads 'name' and 'vdevs'.
@@ -65,9 +247,20 @@ func (a *App) CreatePoolHandler(w http.ResponseWriter, r *http.Request) {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Prefer the storage service, which adds validation and auditing; fall back to the job runner.
	var id string
	var err error
	if a.StorageSvc != nil {
		user, _ := r.Context().Value(ContextKeyUser).(string)
		role, _ := r.Context().Value(ContextKey("user.role")).(string)
		id, err = a.StorageSvc.CreatePool(r.Context(), user, role, body.Name, body.Vdevs)
	} else if a.JobRunner != nil {
		j := domain.Job{Type: "create-pool", Status: "queued", Progress: 0, Details: map[string]any{"name": body.Name, "vdevs": body.Vdevs}}
		id, err = a.JobRunner.Enqueue(r.Context(), j)
	} else {
		http.Error(w, "no job runner", http.StatusInternalServerError)
		return
	}
	if err != nil {
		http.Error(w, "failed to create job", http.StatusInternalServerError)
		return
@@ -83,9 +276,9 @@ func StaticHandler(w http.ResponseWriter, r *http.Request) {

// StorageHandler renders the main storage page.
func (a *App) StorageHandler(w http.ResponseWriter, r *http.Request) {
	data := templateData(r, map[string]interface{}{
		"Title": "Storage",
	})
	if err := templates.ExecuteTemplate(w, "base", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
@@ -141,3 +334,347 @@ func (a *App) JobPartialHandler(w http.ResponseWriter, r *http.Request) {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// SharesNFSHandler renders the NFS shares page.
func (a *App) SharesNFSHandler(w http.ResponseWriter, r *http.Request) {
	data := templateData(r, map[string]interface{}{"Title": "NFS Shares"})
	if err := templates.ExecuteTemplate(w, "base", data); err != nil {
		// Fall back to rendering the content template directly (useful in tests).
		if err2 := templates.ExecuteTemplate(w, "shares_nfs", data); err2 != nil {
			http.Error(w, err2.Error(), http.StatusInternalServerError)
		}
	}
}

// HXNFSHandler renders the NFS shares partial.
func (a *App) HXNFSHandler(w http.ResponseWriter, r *http.Request) {
	shares := []domain.Share{}
	if a.ShareSvc != nil {
		shares, _ = a.ShareSvc.ListNFS(r.Context())
	}
	if err := templates.ExecuteTemplate(w, "hx_nfs_shares", shares); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// CreateNFSHandler handles NFS create requests (HTMX form or JSON).
func (a *App) CreateNFSHandler(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	name := r.FormValue("name")
	path := r.FormValue("path")
	optsRaw := r.FormValue("options")
	opts := map[string]string{}
	if optsRaw != "" {
		// The MVP expects the options field to be JSON.
		_ = json.Unmarshal([]byte(optsRaw), &opts)
	}
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	if a.ShareSvc == nil {
		http.Error(w, "no share service", http.StatusInternalServerError)
		return
	}
	id, err := a.ShareSvc.CreateNFS(r.Context(), user, role, name, path, opts)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Return a job/creation partial; reuse job_row for a simple message.
	data := map[string]any{"JobID": id, "Name": name, "Status": "queued"}
	if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// DeleteNFSHandler handles NFS share deletion.
func (a *App) DeleteNFSHandler(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	id := r.FormValue("id")
	user, _ := r.Context().Value(ContextKey("user")).(string)
	role, _ := r.Context().Value(ContextKey("user.role")).(string)
	if a.ShareSvc == nil {
		http.Error(w, "no share service", http.StatusInternalServerError)
		return
	}
	if err := a.ShareSvc.DeleteNFS(r.Context(), user, role, id); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// Return the refreshed partial table after deletion.
	shares, _ := a.ShareSvc.ListNFS(r.Context())
	if err := templates.ExecuteTemplate(w, "hx_nfs_shares", shares); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// SharesSMBHandler renders the SMB shares page.
func (a *App) SharesSMBHandler(w http.ResponseWriter, r *http.Request) {
	data := map[string]interface{}{"Title": "SMB Shares"}
	if err := templates.ExecuteTemplate(w, "base", data); err != nil {
		// Fallback for tests.
		if err2 := templates.ExecuteTemplate(w, "shares_smb", data); err2 != nil {
			http.Error(w, err2.Error(), http.StatusInternalServerError)
		}
	}
}

// ISCSIHandler renders the iSCSI page.
func (a *App) ISCSIHandler(w http.ResponseWriter, r *http.Request) {
	data := map[string]interface{}{"Title": "iSCSI Targets"}
	if err := templates.ExecuteTemplate(w, "base", data); err != nil {
		if err2 := templates.ExecuteTemplate(w, "iscsi", data); err2 != nil {
			http.Error(w, err2.Error(), http.StatusInternalServerError)
		}
	}
}

// HXISCSIHandler renders the iSCSI targets partial.
func (a *App) HXISCSIHandler(w http.ResponseWriter, r *http.Request) {
	targets := []map[string]any{}
	if a.ISCSISvc != nil {
		targets, _ = a.ISCSISvc.ListTargets(r.Context())
	}
	if err := templates.ExecuteTemplate(w, "hx_iscsi_targets", targets); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// HXISCLUNsHandler renders the LUNs for a target.
func (a *App) HXISCLUNsHandler(w http.ResponseWriter, r *http.Request) {
	targetID := chi.URLParam(r, "target")
	luns := []map[string]any{}
	if a.ISCSISvc != nil {
		luns, _ = a.ISCSISvc.ListLUNs(r.Context(), targetID)
	}
	if err := templates.ExecuteTemplate(w, "hx_iscsi_luns", luns); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// ISCSITargetInfoHandler renders the iSCSI target info partial.
func (a *App) ISCSITargetInfoHandler(w http.ResponseWriter, r *http.Request) {
	targetID := chi.URLParam(r, "target")
	var info map[string]any
if a.ISCSISvc != nil {
|
||||
info, _ = a.ISCSISvc.GetTargetInfo(r.Context(), targetID)
|
||||
}
|
||||
if err := templates.ExecuteTemplate(w, "hx_iscsi_target_info", info); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// CreateISCSITargetHandler handles creating an iSCSI target via form/JSON
|
||||
func (a *App) CreateISCSITargetHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
name := r.FormValue("name")
|
||||
iqn := r.FormValue("iqn")
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ISCSISvc == nil {
|
||||
http.Error(w, "no iscsi service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
id, err := a.ISCSISvc.CreateTarget(r.Context(), user, role, name, iqn)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
data := map[string]any{"ID": id, "Name": name}
|
||||
if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// CreateISCSILUNHandler handles creating a LUN for a target
|
||||
func (a *App) CreateISCSILUNHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
targetID := r.FormValue("target_id")
|
||||
zvol := r.FormValue("zvol")
|
||||
size := r.FormValue("size")
|
||||
blocksize := 512
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ISCSISvc == nil {
|
||||
http.Error(w, "no iscsi service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
id, err := a.ISCSISvc.CreateLUN(r.Context(), user, role, targetID, zvol, size, blocksize)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
data := map[string]any{"JobID": id, "Name": zvol}
|
||||
if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// DeleteISCSILUNHandler deletes a LUN with optional 'force' param
|
||||
func (a *App) DeleteISCSILUNHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
id := r.FormValue("id")
|
||||
force := r.FormValue("force") == "1" || r.FormValue("force") == "true"
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ISCSISvc == nil {
|
||||
http.Error(w, "no iscsi service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if err := a.ISCSISvc.DeleteLUN(r.Context(), user, role, id, force); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
w.WriteHeader(http.StatusNoContent)
|
||||
}
|
||||
|
||||
// AddISCSIPortalHandler configures a portal for a target
|
||||
func (a *App) AddISCSIPortalHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
targetID := r.FormValue("target_id")
|
||||
address := r.FormValue("address")
|
||||
// default port 3260
|
||||
port := 3260
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ISCSISvc == nil {
|
||||
http.Error(w, "no iscsi service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
id, err := a.ISCSISvc.AddPortal(r.Context(), user, role, targetID, address, port)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
data := map[string]any{"ID": id}
|
||||
if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// AddISCSIInitiatorHandler adds an initiator to an IQN ACL
|
||||
func (a *App) AddISCSIInitiatorHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
targetID := r.FormValue("target_id")
|
||||
initiator := r.FormValue("initiator_iqn")
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ISCSISvc == nil {
|
||||
http.Error(w, "no iscsi service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
id, err := a.ISCSISvc.AddInitiator(r.Context(), user, role, targetID, initiator)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
data := map[string]any{"ID": id}
|
||||
if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// UnmapISCSILUNHandler performs the 'drain' step to unmap the LUN
|
||||
func (a *App) UnmapISCSILUNHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
id := r.FormValue("id")
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ISCSISvc == nil {
|
||||
http.Error(w, "no iscsi service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if err := a.ISCSISvc.UnmapLUN(r.Context(), user, role, id); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
w.WriteHeader(http.StatusNoContent)
|
||||
}
|
||||
|
||||
// HXSmbHandler renders SMB shares partial
|
||||
func (a *App) HXSmbHandler(w http.ResponseWriter, r *http.Request) {
|
||||
shares := []domain.Share{}
|
||||
if a.ShareSvc != nil {
|
||||
shares, _ = a.ShareSvc.ListSMB(r.Context())
|
||||
}
|
||||
if err := templates.ExecuteTemplate(w, "hx_smb_shares", shares); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// CreateSMBHandler handles SMB creation (HTMX)
|
||||
func (a *App) CreateSMBHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
name := r.FormValue("name")
|
||||
path := r.FormValue("path")
|
||||
readOnly := r.FormValue("read_only") == "1" || r.FormValue("read_only") == "true"
|
||||
allowedUsersRaw := r.FormValue("allowed_users")
|
||||
var allowed []string
|
||||
if allowedUsersRaw != "" {
|
||||
allowed = strings.Split(allowedUsersRaw, ",")
|
||||
}
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ShareSvc == nil {
|
||||
http.Error(w, "no share service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
id, err := a.ShareSvc.CreateSMB(r.Context(), user, role, name, path, readOnly, allowed)
|
||||
if err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
data := map[string]any{"JobID": id, "Name": name, "Status": "queued"}
|
||||
if err := templates.ExecuteTemplate(w, "job_row", data); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
// DeleteSMBHandler handles SMB deletion
|
||||
func (a *App) DeleteSMBHandler(w http.ResponseWriter, r *http.Request) {
|
||||
if err := r.ParseForm(); err != nil {
|
||||
http.Error(w, "bad request", http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
id := r.FormValue("id")
|
||||
user, _ := r.Context().Value(ContextKey("user")).(string)
|
||||
role, _ := r.Context().Value(ContextKey("user.role")).(string)
|
||||
if a.ShareSvc == nil {
|
||||
http.Error(w, "no share service", http.StatusInternalServerError)
|
||||
return
|
||||
}
|
||||
if err := a.ShareSvc.DeleteSMB(r.Context(), user, role, id); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return
|
||||
}
|
||||
shares, _ := a.ShareSvc.ListSMB(r.Context())
|
||||
if err := templates.ExecuteTemplate(w, "hx_smb_shares", shares); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -43,3 +43,81 @@ func TestCreatePoolHandler(t *testing.T) {
		t.Fatalf("expected job_id in response")
	}
}

func TestSharesNFSHandler(t *testing.T) {
	m := &mock.MockSharesService{}
	app := &App{DB: &sql.DB{}, ShareSvc: m}
	req := httptest.NewRequest(http.MethodGet, "/shares/nfs", nil)
	w := httptest.NewRecorder()
	app.SharesNFSHandler(w, req)
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d; body: %s", w.Code, w.Body.String())
	}
}

func TestCreateNFSHandler(t *testing.T) {
	m := &mock.MockSharesService{}
	app := &App{DB: &sql.DB{}, ShareSvc: m}
	form := "name=data&path=tank/ds&options={}" // simple form body
	req := httptest.NewRequest(http.MethodPost, "/shares/nfs/create", strings.NewReader(form))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("X-Auth-User", "admin")
	req.Header.Set("X-Auth-Role", "admin")
	w := httptest.NewRecorder()
	app.CreateNFSHandler(w, req)
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d; body: %s", w.Code, w.Body.String())
	}
}

func TestNFSStatusHandler(t *testing.T) {
	m := &mock.MockSharesService{}
	app := &App{DB: &sql.DB{}, ShareSvc: m}
	req := httptest.NewRequest(http.MethodGet, "/api/shares/nfs/status", nil)
	w := httptest.NewRecorder()
	app.NFSStatusHandler(w, req)
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d; body: %s", w.Code, w.Body.String())
	}
}

func TestSharesSMBHandler(t *testing.T) {
	m := &mock.MockSharesService{}
	app := &App{DB: &sql.DB{}, ShareSvc: m}
	req := httptest.NewRequest(http.MethodGet, "/shares/smb", nil)
	w := httptest.NewRecorder()
	app.SharesSMBHandler(w, req)
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d; body: %s", w.Code, w.Body.String())
	}
}

func TestCreateSMBHandler(t *testing.T) {
	m := &mock.MockSharesService{}
	app := &App{DB: &sql.DB{}, ShareSvc: m}
	form := "name=smb1&path=tank/ds&allowed_users=user1,user2&read_only=1"
	req := httptest.NewRequest(http.MethodPost, "/shares/smb/create", strings.NewReader(form))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("X-Auth-User", "admin")
	req.Header.Set("X-Auth-Role", "admin")
	w := httptest.NewRecorder()
	app.CreateSMBHandler(w, req)
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d; body: %s", w.Code, w.Body.String())
	}
}

func TestDeleteSMBHandler(t *testing.T) {
	m := &mock.MockSharesService{}
	app := &App{DB: &sql.DB{}, ShareSvc: m}
	form := "id=smb-1"
	req := httptest.NewRequest(http.MethodPost, "/shares/smb/delete", strings.NewReader(form))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("X-Auth-User", "admin")
	req.Header.Set("X-Auth-Role", "admin")
	w := httptest.NewRecorder()
	app.DeleteSMBHandler(w, req)
	if w.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d; body: %s", w.Code, w.Body.String())
	}
}
@@ -2,9 +2,14 @@ package http

import (
	"context"
	"crypto/rand"
	"encoding/base64"
	"log"
	"net/http"
	"strings"
	"time"

	"github.com/example/storage-appliance/internal/auth"
)

// ContextKey used to store values in context
@@ -12,6 +17,9 @@ type ContextKey string

const (
	ContextKeyRequestID ContextKey = "request-id"
	ContextKeyUser      ContextKey = "user"
	ContextKeyUserID    ContextKey = "user.id"
	ContextKeySession   ContextKey = "session"
)

// RequestID middleware sets a request ID in headers and request context
@@ -30,49 +38,170 @@ func Logging(next http.Handler) http.Handler {
	})
}

// Auth middleware placeholder to authenticate users
func Auth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Basic dev auth: read X-Auth-User; in real world, validate session/jwt
		username := r.Header.Get("X-Auth-User")
		if username == "" {
			username = "anonymous"
		}
		// Role hint: header X-Auth-Role (admin/operator/viewer)
		role := r.Header.Get("X-Auth-Role")
		if role == "" {
			if username == "admin" {
				role = "admin"
			} else {
				role = "viewer"
// AuthMiddleware creates an auth middleware that uses the provided App
func AuthMiddleware(app *App) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Skip auth for login and public routes
			if strings.HasPrefix(r.URL.Path, "/login") || strings.HasPrefix(r.URL.Path, "/static") || r.URL.Path == "/healthz" || r.URL.Path == "/metrics" {
				next.ServeHTTP(w, r)
				return
			}
		}
		ctx := context.WithValue(r.Context(), ContextKey("user"), username)
		ctx = context.WithValue(ctx, ContextKey("user.role"), role)
		next.ServeHTTP(w, r.WithContext(ctx))
	})

			// Get session token from cookie
			cookie, err := r.Cookie(auth.SessionCookieName)
			if err != nil {
				// No session, redirect to login
				if r.Header.Get("HX-Request") == "true" {
					w.Header().Set("HX-Redirect", "/login")
					w.WriteHeader(http.StatusUnauthorized)
				} else {
					http.Redirect(w, r, "/login", http.StatusFound)
				}
				return
			}

			// Validate session
			sessionStore := auth.NewSessionStore(app.DB)
			session, err := sessionStore.GetSession(r.Context(), cookie.Value)
			if err != nil {
				// Invalid session, redirect to login
				if r.Header.Get("HX-Request") == "true" {
					w.Header().Set("HX-Redirect", "/login")
					w.WriteHeader(http.StatusUnauthorized)
				} else {
					http.Redirect(w, r, "/login", http.StatusFound)
				}
				return
			}

			// Get user
			userStore := auth.NewUserStore(app.DB)
			user, err := userStore.GetUserByID(r.Context(), session.UserID)
			if err != nil {
				http.Error(w, "user not found", http.StatusUnauthorized)
				return
			}

			// Store user info in context
			ctx := context.WithValue(r.Context(), ContextKeyUser, user.Username)
			ctx = context.WithValue(ctx, ContextKeyUserID, user.ID)
			ctx = context.WithValue(ctx, ContextKeySession, session)

			next.ServeHTTP(w, r.WithContext(ctx))
		})
	}
}

// CSRF middleware placeholder (reads X-CSRF-Token)
func CSRFMiddleware(next http.Handler) http.Handler {
// Auth is a legacy wrapper for backward compatibility
func Auth(next http.Handler) http.Handler {
	// This will be replaced by AuthMiddleware in router
	return next
}

// RequireAuth middleware ensures user is authenticated (alternative to Auth that doesn't redirect)
func RequireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// TODO: check and enforce CSRF tokens for mutating requests
		userID := r.Context().Value(ContextKeyUserID)
		if userID == nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// RBAC middleware placeholder
func RBAC(permission string) func(http.Handler) http.Handler {
// CSRFMiddleware creates a CSRF middleware that uses the provided App
func CSRFMiddleware(app *App) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Try to read role from context and permit admin always
			role := r.Context().Value(ContextKey("user.role"))
			if role == "admin" {
			// For safe methods, ensure CSRF token cookie exists
			if r.Method == "GET" || r.Method == "HEAD" || r.Method == "OPTIONS" {
				// Set CSRF token cookie if it doesn't exist
				if cookie, err := r.Cookie("csrf_token"); err != nil || cookie.Value == "" {
					token := generateCSRFToken()
					http.SetCookie(w, &http.Cookie{
						Name:     "csrf_token",
						Value:    token,
						Path:     "/",
						HttpOnly: false, // Needed for HTMX to read it
						Secure:   false,
						SameSite: http.SameSiteStrictMode,
						MaxAge:   86400, // 24 hours
					})
				}
				next.ServeHTTP(w, r)
				return
			}
			// For now, only admin is permitted; add permission checks here

			// Get CSRF token from header (HTMX compatible) or form
			token := r.Header.Get("X-CSRF-Token")
			if token == "" {
				token = r.FormValue("csrf_token")
			}

			// Get expected token from cookie
			expectedToken := getCSRFToken(r)
			if token == "" || token != expectedToken {
				http.Error(w, "invalid CSRF token", http.StatusForbidden)
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}

// getCSRFToken retrieves or generates a CSRF token for the session
func getCSRFToken(r *http.Request) string {
	// Try to get from cookie first
	cookie, err := r.Cookie("csrf_token")
	if err == nil && cookie.Value != "" {
		return cookie.Value
	}

	// Generate new token (will be set in cookie by handler)
	return generateCSRFToken()
}

func generateCSRFToken() string {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		// crypto/rand failure is unrecoverable; fail loudly rather than
		// hand out a predictable token
		panic(err)
	}
	return base64.URLEncoding.EncodeToString(b)
}

// RequirePermission creates a permission check middleware
func RequirePermission(app *App, permission string) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			userID := r.Context().Value(ContextKeyUserID)
			if userID == nil {
				http.Error(w, "unauthorized", http.StatusUnauthorized)
				return
			}

			rbacStore := auth.NewRBACStore(app.DB)
			hasPermission, err := rbacStore.UserHasPermission(r.Context(), userID.(string), permission)
			if err != nil {
				log.Printf("permission check error: %v", err)
				http.Error(w, "internal error", http.StatusInternalServerError)
				return
			}

			if !hasPermission {
				http.Error(w, "forbidden", http.StatusForbidden)
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}

// RBAC middleware (kept for backward compatibility)
func RBAC(permission string) func(http.Handler) http.Handler {
	// This will be replaced by RequirePermission in router
	return func(next http.Handler) http.Handler {
		return next
	}
}
121
internal/http/monitoring_handlers.go
Normal file
@@ -0,0 +1,121 @@

package http

import (
	"net/http"
	"strings"

	"github.com/example/storage-appliance/internal/monitoring"
)

// MetricsHandler serves Prometheus metrics
func (a *App) MetricsHandler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// Create collectors
	collectors := []monitoring.Collector{
		monitoring.NewZFSCollector(a.ZFSSvc, a.Runner),
		monitoring.NewSMARTCollector(a.Runner),
		monitoring.NewServiceCollector(a.Runner),
		monitoring.NewHostCollector(),
	}

	// Export metrics
	exporter := monitoring.NewPrometheusExporter(collectors...)
	metrics := exporter.Export(ctx)

	w.Header().Set("Content-Type", "text/plain; version=0.0.4")
	w.WriteHeader(http.StatusOK)
	w.Write([]byte(metrics))
}

// MonitoringHandler shows the monitoring dashboard
func (a *App) MonitoringHandler(w http.ResponseWriter, r *http.Request) {
	data := templateData(r, map[string]interface{}{
		"Title": "Monitoring",
	})
	if err := templates.ExecuteTemplate(w, "monitoring", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// HXMonitoringHandler returns HTMX partial for monitoring metrics
func (a *App) HXMonitoringHandler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// Create collectors
	collectors := []monitoring.Collector{
		monitoring.NewZFSCollector(a.ZFSSvc, a.Runner),
		monitoring.NewSMARTCollector(a.Runner),
		monitoring.NewServiceCollector(a.Runner),
		monitoring.NewHostCollector(),
	}

	// Export for UI
	exporter := monitoring.NewUIExporter(collectors...)
	groups := exporter.Export(ctx)

	data := map[string]interface{}{
		"Groups": groups,
	}

	if err := templates.ExecuteTemplate(w, "hx_monitoring", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

// HXMonitoringGroupHandler returns HTMX partial for a specific metric group
func (a *App) HXMonitoringGroupHandler(w http.ResponseWriter, r *http.Request) {
	groupName := r.URL.Query().Get("group")
	if groupName == "" {
		http.Error(w, "group parameter required", http.StatusBadRequest)
		return
	}

	ctx := r.Context()

	// Create the specific collector (normalize group name)
	var collector monitoring.Collector
	groupLower := strings.ToLower(groupName)
	switch groupLower {
	case "zfs":
		collector = monitoring.NewZFSCollector(a.ZFSSvc, a.Runner)
	case "smart":
		collector = monitoring.NewSMARTCollector(a.Runner)
	case "services", "service":
		collector = monitoring.NewServiceCollector(a.Runner)
	case "host":
		collector = monitoring.NewHostCollector()
	default:
		// Try to match by collector name
		if strings.Contains(groupLower, "zfs") {
			collector = monitoring.NewZFSCollector(a.ZFSSvc, a.Runner)
		} else if strings.Contains(groupLower, "smart") {
			collector = monitoring.NewSMARTCollector(a.Runner)
		} else if strings.Contains(groupLower, "service") {
			collector = monitoring.NewServiceCollector(a.Runner)
		} else if strings.Contains(groupLower, "host") {
			collector = monitoring.NewHostCollector()
		} else {
			http.Error(w, "unknown group", http.StatusBadRequest)
			return
		}
	}

	// Export for UI
	exporter := monitoring.NewUIExporter(collector)
	groups := exporter.Export(ctx)

	if len(groups) == 0 {
		http.Error(w, "no data", http.StatusNotFound)
		return
	}

	data := map[string]interface{}{
		"Group": groups[0],
	}

	if err := templates.ExecuteTemplate(w, "hx_monitoring_group", data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
@@ -10,19 +10,78 @@ import (

func RegisterRoutes(r *chi.Mux, app *App) {
	r.Use(Logging)
	r.Use(RequestID)
	r.Use(Auth)
	r.Use(CSRFMiddleware(app))
	r.Use(AuthMiddleware(app))

	// Public routes
	r.Get("/login", app.LoginHandler)
	r.Post("/login", app.LoginHandler)
	r.Post("/logout", app.LogoutHandler)
	r.Get("/healthz", func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) })
	r.Get("/metrics", app.MetricsHandler) // Prometheus metrics (public for scraping)

	// Protected routes
	r.Get("/", app.DashboardHandler)
	r.Get("/dashboard", app.DashboardHandler)
	r.Get("/healthz", func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) })
	r.Get("/monitoring", app.MonitoringHandler)
	r.Get("/hx/monitoring", app.HXMonitoringHandler)
	r.Get("/hx/monitoring/group", app.HXMonitoringGroupHandler)

	// API namespace
	r.Route("/api", func(r chi.Router) {
		r.Get("/pools", app.PoolsHandler)
		r.With(RBAC("storage.pool.create")).Post("/pools", app.CreatePoolHandler)                   // create a pool -> creates a job
		r.With(RequirePermission(app, "storage.pool.create")).Post("/pools", app.CreatePoolHandler) // create a pool -> creates a job
		r.Get("/pools/{pool}/datasets", app.PoolDatasetsHandler)
		r.With(RequirePermission(app, "storage.dataset.create")).Post("/datasets", app.CreateDatasetHandler)
		r.With(RequirePermission(app, "storage.dataset.snapshot")).Post("/datasets/{dataset}/snapshot", app.SnapshotHandler)
		r.With(RequirePermission(app, "storage.pool.scrub")).Post("/pools/{pool}/scrub", app.PoolScrubHandler)
		r.Get("/jobs", app.JobsHandler)
		r.Get("/shares/nfs/status", app.NFSStatusHandler)
	})

	r.Get("/storage", app.StorageHandler)
	r.Get("/shares/nfs", app.SharesNFSHandler)
	r.Get("/hx/shares/nfs", app.HXNFSHandler)
	r.With(RequirePermission(app, "shares.nfs.create")).Post("/shares/nfs/create", app.CreateNFSHandler)
	r.With(RequirePermission(app, "shares.nfs.delete")).Post("/shares/nfs/delete", app.DeleteNFSHandler)
	r.Get("/shares/smb", app.SharesSMBHandler)
	r.Get("/hx/shares/smb", app.HXSmbHandler)
	r.With(RequirePermission(app, "shares.smb.create")).Post("/shares/smb/create", app.CreateSMBHandler)
	r.With(RequirePermission(app, "shares.smb.delete")).Post("/shares/smb/delete", app.DeleteSMBHandler)
	r.Get("/hx/pools", app.HXPoolsHandler)
	r.Post("/storage/pool/create", app.StorageCreatePoolHandler)
	r.With(RequirePermission(app, "storage.pool.create")).Post("/storage/pool/create", app.StorageCreatePoolHandler)
	r.Get("/jobs/{id}", app.JobPartialHandler)

	// iSCSI routes
	r.Get("/iscsi", app.ISCSIHandler)
	r.Get("/api/iscsi/hx_targets", app.HXISCSIHandler)
	r.Get("/api/iscsi/hx_luns/{target}", app.HXISCLUNsHandler)
	r.Get("/api/iscsi/target/{target}", app.ISCSITargetInfoHandler)
	r.With(RequirePermission(app, "iscsi.target.create")).Post("/api/iscsi/create_target", app.CreateISCSITargetHandler)
	r.With(RequirePermission(app, "iscsi.lun.create")).Post("/api/iscsi/create_lun", app.CreateISCSILUNHandler)
	r.With(RequirePermission(app, "iscsi.lun.delete")).Post("/api/iscsi/delete_lun", app.DeleteISCSILUNHandler)
	r.With(RequirePermission(app, "iscsi.lun.unmap")).Post("/api/iscsi/unmap_lun", app.UnmapISCSILUNHandler)
	r.With(RequirePermission(app, "iscsi.portal.create")).Post("/api/iscsi/add_portal", app.AddISCSIPortalHandler)
	r.With(RequirePermission(app, "iscsi.initiator.create")).Post("/api/iscsi/add_initiator", app.AddISCSIInitiatorHandler)

	// Admin routes - users
	r.Route("/admin/users", func(r chi.Router) {
		r.Use(RequirePermission(app, "users.manage"))
		r.Get("/", app.UsersHandler)
		r.Get("/hx", app.HXUsersHandler)
		r.Post("/create", app.CreateUserHandler)
		r.Post("/{id}/delete", app.DeleteUserHandler)
		r.Post("/{id}/roles", app.UpdateUserRolesHandler)
	})
	// Admin routes - roles
	r.Route("/admin/roles", func(r chi.Router) {
		r.Use(RequirePermission(app, "roles.manage"))
		r.Get("/", app.RolesHandler)
		r.Get("/hx", app.HXRolesHandler)
		r.Post("/create", app.CreateRoleHandler)
		r.Post("/{id}/delete", app.DeleteRoleHandler)
		r.Post("/{id}/permissions", app.UpdateRolePermissionsHandler)
	})

	r.Get("/static/*", StaticHandler)
}
20
internal/http/template_helpers.go
Normal file
@@ -0,0 +1,20 @@

package http

import (
	"net/http"
)

// templateData adds CSRF token and other common data to template context
func templateData(r *http.Request, data map[string]interface{}) map[string]interface{} {
	if data == nil {
		data = make(map[string]interface{})
	}

	// Get CSRF token from cookie
	if cookie, err := r.Cookie("csrf_token"); err == nil {
		data["CSRFToken"] = cookie.Value
	}

	return data
}
59
internal/infra/crypto/crypto.go
Normal file
@@ -0,0 +1,59 @@

package crypto

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"errors"
	"io"
)

// Encrypt uses AES-GCM with a 32 byte key
func Encrypt(key []byte, plaintext string) (string, error) {
	if len(key) != 32 {
		return "", errors.New("invalid key length")
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	aesgcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, aesgcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", err
	}
	ct := aesgcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return base64.StdEncoding.EncodeToString(ct), nil
}

// Decrypt reverses Encrypt: it splits the prepended nonce off the
// base64-decoded ciphertext and opens it with AES-GCM.
func Decrypt(key []byte, cipherText string) (string, error) {
	if len(key) != 32 {
		return "", errors.New("invalid key length")
	}
	data, err := base64.StdEncoding.DecodeString(cipherText)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	aesgcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonceSize := aesgcm.NonceSize()
	if len(data) < nonceSize {
		return "", errors.New("ciphertext too short")
	}
	nonce, ct := data[:nonceSize], data[nonceSize:]
	pt, err := aesgcm.Open(nil, nonce, ct, nil)
	if err != nil {
		return "", err
	}
	return string(pt), nil
}
internal/infra/iscsi/iscsi.go (new file, 121 lines)
@@ -0,0 +1,121 @@
package iscsi

import (
	"context"
	"fmt"
	"time"

	"github.com/example/storage-appliance/internal/infra/osexec"
)

// Adapter wraps targetcli invocations for LIO (targetcli) management.
type Adapter struct {
	Runner osexec.Runner
}

func NewAdapter(runner osexec.Runner) *Adapter { return &Adapter{Runner: runner} }

// CreateTarget creates an IQN target via targetcli
func (a *Adapter) CreateTarget(ctx context.Context, iqn string) error {
	// Use a short timeout for CLI interactions
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", "/iscsi", "create", iqn)
	if err != nil {
		return fmt.Errorf("targetcli create target failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli create returned: %s", stderr)
	}
	return nil
}

// CreateBackstore creates a block backstore for a zvol device.
func (a *Adapter) CreateBackstore(ctx context.Context, name, devpath string) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	// targetcli syntax: /backstores/block create <name> <devpath>
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", "/backstores/block", "create", name, devpath)
	if err != nil {
		return fmt.Errorf("targetcli create backstore failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli backstore returned: %s", stderr)
	}
	return nil
}

// CreateLUN maps a backstore into the target's TPG1 LUNs
func (a *Adapter) CreateLUN(ctx context.Context, iqn, backstoreName string, lunID int) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	bsPath := fmt.Sprintf("/backstores/block/%s", backstoreName)
	tpgPath := fmt.Sprintf("/iscsi/%s/tpg1/luns", iqn)
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", tpgPath, "create", bsPath)
	if err != nil {
		return fmt.Errorf("targetcli create lun failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli create lun returned: %s", stderr)
	}
	return nil
}

// DeleteLUN removes the LUN with the given numeric ID from the target's TPG1.
func (a *Adapter) DeleteLUN(ctx context.Context, iqn string, lunID int) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	tpgPath := fmt.Sprintf("/iscsi/%s/tpg1/luns", iqn)
	// delete by numeric id
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", tpgPath, "delete", fmt.Sprintf("%d", lunID))
	if err != nil {
		return fmt.Errorf("targetcli delete lun failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli delete lun returned: %s", stderr)
	}
	return nil
}

func (a *Adapter) AddPortal(ctx context.Context, iqn, address string, port int) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	tpgPath := fmt.Sprintf("/iscsi/%s/tpg1/portals", iqn)
	addr := fmt.Sprintf("%s:%d", address, port)
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", tpgPath, "create", addr)
	if err != nil {
		return fmt.Errorf("targetcli add portal failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli add portal returned: %s", stderr)
	}
	return nil
}

func (a *Adapter) AddACL(ctx context.Context, iqn, initiator string) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	aclPath := fmt.Sprintf("/iscsi/%s/tpg1/acls", iqn)
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", aclPath, "create", initiator)
	if err != nil {
		return fmt.Errorf("targetcli add acl failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli add acl returned: %s", stderr)
	}
	return nil
}

// Save persists the running configuration (targetcli saveconfig)
func (a *Adapter) Save(ctx context.Context) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "targetcli", "saveconfig")
	if err != nil {
		return fmt.Errorf("targetcli save failed: %v %s", err, stderr)
	}
	if code != 0 {
		return fmt.Errorf("targetcli save returned: %s", stderr)
	}
	return nil
}
internal/infra/minio/minio.go (new file, 122 lines)
@@ -0,0 +1,122 @@
package minio

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/example/storage-appliance/internal/infra/osexec"
)

type Adapter struct {
	Runner  osexec.Runner
	EnvPath string
}

func NewAdapter(runner osexec.Runner, envPath string) *Adapter {
	if envPath == "" {
		envPath = "/etc/minio/minio.env"
	}
	return &Adapter{Runner: runner, EnvPath: envPath}
}

type Settings struct {
	AccessKey string `json:"access_key"`
	SecretKey string `json:"secret_key"`
	DataPath  string `json:"data_path"`
	Port      int    `json:"port"`
	TLS       bool   `json:"tls"`
}

// WriteEnv writes the environment file used by the MinIO service
func (a *Adapter) WriteEnv(ctx context.Context, s Settings) error {
	dir := filepath.Dir(a.EnvPath)
	if err := os.MkdirAll(dir, 0755); err != nil {
		return err
	}
	// env lines
	lines := []string{
		fmt.Sprintf("MINIO_ROOT_USER=%s", s.AccessKey),
		fmt.Sprintf("MINIO_ROOT_PASSWORD=%s", s.SecretKey),
		fmt.Sprintf("MINIO_VOLUMES=%s", s.DataPath),
	}
	if s.Port != 0 {
		lines = append(lines, fmt.Sprintf("MINIO_OPTS=--address :%d", s.Port))
	}
	content := strings.Join(lines, "\n") + "\n"
	// write to a temp file, then rename for an atomic replace
	tmp := filepath.Join(dir, ".minio.env.tmp")
	if err := os.WriteFile(tmp, []byte(content), 0600); err != nil {
		return err
	}
	if err := os.Rename(tmp, a.EnvPath); err != nil {
		return err
	}
	return nil
}

// Reload reloads the minio service to pick up the new env; prefer systemctl reload
func (a *Adapter) Reload(ctx context.Context) error {
	_, _, _, err := osexec.ExecWithRunner(a.Runner, ctx, "systemctl", "reload", "minio")
	if err == nil {
		return nil
	}
	// fall back to a restart
	_, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "systemctl", "restart", "minio")
	if err != nil {
		return fmt.Errorf("minio reload/restart failed: %s", stderr)
	}
	return nil
}

// ConfigureMC points an mc alias at the MinIO service using the given settings
func (a *Adapter) ConfigureMC(ctx context.Context, alias string, settings Settings) error {
	// mc alias set <alias> <endpoint> <access> <secret> [--insecure]
	endpoint := fmt.Sprintf("http://127.0.0.1:%d", settings.Port)
	if settings.TLS {
		endpoint = fmt.Sprintf("https://127.0.0.1:%d", settings.Port)
	}
	args := []string{"alias", "set", alias, endpoint, settings.AccessKey, settings.SecretKey}
	if !settings.TLS {
		// append the flag only when needed; an empty string would be passed to mc as a literal argument
		args = append(args, "--insecure")
	}
	_, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "mc", args...)
	if err != nil {
		return fmt.Errorf("mc alias set failed: %s", stderr)
	}
	return nil
}

// ListBuckets uses mc to list buckets via the alias
func (a *Adapter) ListBuckets(ctx context.Context, alias string) ([]string, error) {
	out, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "mc", "ls", "--json", alias)
	if err != nil {
		return nil, fmt.Errorf("mc ls failed: %s", stderr)
	}
	// parse JSON lines; mc `ls --json` reports each entry's name under 'key'
	var buckets []string
	lines := strings.Split(strings.TrimSpace(out), "\n")
	for _, l := range lines {
		var obj map[string]any
		if err := json.Unmarshal([]byte(l), &obj); err != nil {
			continue
		}
		if otype, ok := obj["type"].(string); ok && otype == "bucket" {
			if name, ok := obj["key"].(string); ok {
				buckets = append(buckets, name)
			}
		}
	}
	return buckets, nil
}

// CreateBucket uses mc to create a new bucket alias/<name>
func (a *Adapter) CreateBucket(ctx context.Context, alias, name string) error {
	_, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "mc", "mb", alias+"/"+name)
	if err != nil {
		return fmt.Errorf("mc mb failed: %s", stderr)
	}
	return nil
}
internal/infra/nfs/nfs.go (new file, 74 lines)
@@ -0,0 +1,74 @@
package nfs

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/example/storage-appliance/internal/domain"
	"github.com/example/storage-appliance/internal/infra/osexec"
)

type Adapter struct {
	Runner      osexec.Runner
	ExportsPath string
}

func NewAdapter(runner osexec.Runner, exportsPath string) *Adapter {
	if exportsPath == "" {
		exportsPath = "/etc/exports"
	}
	return &Adapter{Runner: runner, ExportsPath: exportsPath}
}

// RenderExports renders the given NFS shares into /etc/exports atomically
func (a *Adapter) RenderExports(ctx context.Context, shares []domain.Share) error {
	var lines []string
	for _, s := range shares {
		if s.Type != "nfs" {
			continue
		}
		// default export options; per-share option parsing is not implemented yet
		opts := "rw,sync,no_root_squash"
		// exports syntax is "path client(options)"; default to a wildcard client
		lines = append(lines, fmt.Sprintf("%s *(%s)", s.Path, opts))
	}
	content := strings.Join(lines, "\n") + "\n"

	dir := filepath.Dir(a.ExportsPath)
	tmp := filepath.Join(dir, ".exports.tmp")
	if err := os.WriteFile(tmp, []byte(content), 0644); err != nil {
		return err
	}
	// atomic rename
	if err := os.Rename(tmp, a.ExportsPath); err != nil {
		return err
	}
	return nil
}

// Apply runs exportfs -ra to apply the exports
func (a *Adapter) Apply(ctx context.Context) error {
	_, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "exportfs", "-ra")
	if err != nil {
		return fmt.Errorf("exportfs failed: %s", stderr)
	}
	return nil
}

// Status checks systemd for the NFS server status
func (a *Adapter) Status(ctx context.Context) (string, error) {
	// try common unit names
	names := []string{"nfs-server", "nfs-kernel-server"}
	for _, n := range names {
		out, _, _, err := osexec.ExecWithRunner(a.Runner, ctx, "systemctl", "is-active", n)
		if err == nil && strings.TrimSpace(out) != "" {
			return strings.TrimSpace(out), nil
		}
	}
	return "unknown", nil
}
internal/infra/samba/samba.go (new file, 87 lines)
@@ -0,0 +1,87 @@
package samba

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/example/storage-appliance/internal/domain"
	"github.com/example/storage-appliance/internal/infra/osexec"
)

type Adapter struct {
	Runner      osexec.Runner
	IncludePath string
}

func NewAdapter(runner osexec.Runner, includePath string) *Adapter {
	if includePath == "" {
		includePath = "/etc/samba/smb.conf.d/appliance.conf"
	}
	return &Adapter{Runner: runner, IncludePath: includePath}
}

// RenderConf writes the Samba include file for appliance-managed shares
func (a *Adapter) RenderConf(ctx context.Context, shares []domain.Share) error {
	var lines []string
	lines = append(lines, "# Appliance-managed SMB share configuration")
	for _, s := range shares {
		if s.Type != "smb" {
			continue
		}
		opts := []string{"path = " + s.Path}
		// s.Config may carry read-only or allowed-users settings; fall back to broad defaults
		if ro, ok := s.Config["read_only"]; ok && ro == "true" {
			opts = append(opts, "read only = yes")
		} else {
			opts = append(opts, "read only = no")
		}
		if users, ok := s.Config["allowed_users"]; ok {
			opts = append(opts, "valid users = "+users)
		}
		// write the share section
		lines = append(lines, fmt.Sprintf("[%s]", s.Name))
		lines = append(lines, opts...)
		lines = append(lines, "")
	}
	content := strings.Join(lines, "\n") + "\n"
	dir := filepath.Dir(a.IncludePath)
	tmp := filepath.Join(dir, ".appliance.smb.tmp")
	if err := os.WriteFile(tmp, []byte(content), 0644); err != nil {
		return err
	}
	if err := os.Rename(tmp, a.IncludePath); err != nil {
		return err
	}
	return nil
}

// Reload reloads or restarts Samba to apply the config
func (a *Adapter) Reload(ctx context.Context) error {
	// try a reload first
	_, _, _, err := osexec.ExecWithRunner(a.Runner, ctx, "systemctl", "reload", "smbd")
	if err == nil {
		return nil
	}
	// fall back to a restart
	_, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "systemctl", "restart", "smbd")
	if err != nil {
		return fmt.Errorf("samba reload/restart failed: %s", stderr)
	}
	return nil
}

// CreateSambaUser creates a local Samba user mapped to an appliance user.
// Stub: the password parameter is currently unused; smbpasswd -a prompts for it interactively.
func (a *Adapter) CreateSambaUser(ctx context.Context, user, password string) error {
	_, stderr, _, err := osexec.ExecWithRunner(a.Runner, ctx, "smbpasswd", "-a", user)
	if err != nil {
		return fmt.Errorf("smbpasswd failed: %s", stderr)
	}
	return nil
}
@@ -3,8 +3,10 @@ package db

import (
	"context"
	"database/sql"
	"log"
	"strings"

	"github.com/example/storage-appliance/internal/auth"
)

// MigrateAndSeed performs a very small migration set and seeds an admin user

@@ -19,12 +21,42 @@ func MigrateAndSeed(ctx context.Context, db *sql.DB) error {
		`CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY, username TEXT NOT NULL UNIQUE, password_hash TEXT, role TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS pools (name TEXT PRIMARY KEY, guid TEXT, health TEXT, capacity TEXT);`,
		`CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, type TEXT, status TEXT, progress INTEGER DEFAULT 0, owner TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP, updated_at DATETIME);`,
		`CREATE TABLE IF NOT EXISTS shares (id TEXT PRIMARY KEY, name TEXT, path TEXT, type TEXT, options TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS object_storage (id TEXT PRIMARY KEY, name TEXT, access_key TEXT, secret_key TEXT, data_path TEXT, port INTEGER, tls INTEGER DEFAULT 0, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS buckets (id TEXT PRIMARY KEY, name TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS iscsi_targets (id TEXT PRIMARY KEY, iqn TEXT NOT NULL UNIQUE, name TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS iscsi_portals (id TEXT PRIMARY KEY, target_id TEXT NOT NULL, address TEXT NOT NULL, port INTEGER DEFAULT 3260, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS iscsi_initiators (id TEXT PRIMARY KEY, target_id TEXT NOT NULL, initiator_iqn TEXT NOT NULL, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS iscsi_luns (id TEXT PRIMARY KEY, target_id TEXT NOT NULL, lun_id INTEGER NOT NULL, zvol TEXT NOT NULL, size INTEGER, blocksize INTEGER, mapped INTEGER DEFAULT 0, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		// Audit and snapshots tables
		`CREATE TABLE IF NOT EXISTS audit_events (id TEXT PRIMARY KEY, ts DATETIME DEFAULT CURRENT_TIMESTAMP, user_id TEXT, action TEXT, resource_type TEXT, resource_id TEXT, success INTEGER DEFAULT 1, details TEXT, actor TEXT, resource TEXT, payload_hash TEXT, result TEXT, client_ip TEXT);`,
		`CREATE TABLE IF NOT EXISTS datasets (name TEXT PRIMARY KEY, pool TEXT, type TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS snapshots (id TEXT PRIMARY KEY, dataset TEXT, name TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		// RBAC tables
		`CREATE TABLE IF NOT EXISTS roles (id TEXT PRIMARY KEY, name TEXT NOT NULL UNIQUE, description TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS permissions (id TEXT PRIMARY KEY, name TEXT NOT NULL UNIQUE, description TEXT, created_at DATETIME DEFAULT CURRENT_TIMESTAMP);`,
		`CREATE TABLE IF NOT EXISTS role_permissions (role_id TEXT NOT NULL, permission_id TEXT NOT NULL, PRIMARY KEY (role_id, permission_id), FOREIGN KEY (role_id) REFERENCES roles(id) ON DELETE CASCADE, FOREIGN KEY (permission_id) REFERENCES permissions(id) ON DELETE CASCADE);`,
		`CREATE TABLE IF NOT EXISTS user_roles (user_id TEXT NOT NULL, role_id TEXT NOT NULL, PRIMARY KEY (user_id, role_id), FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE, FOREIGN KEY (role_id) REFERENCES roles(id) ON DELETE CASCADE);`,
		`CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, user_id TEXT NOT NULL, token TEXT NOT NULL UNIQUE, expires_at DATETIME NOT NULL, created_at DATETIME DEFAULT CURRENT_TIMESTAMP, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);`,
		`CREATE INDEX IF NOT EXISTS idx_sessions_token ON sessions(token);`,
		`CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);`,
		`CREATE INDEX IF NOT EXISTS idx_sessions_expires_at ON sessions(expires_at);`,
	}
	for _, s := range stmts {
		if _, err := tx.ExecContext(ctx, s); err != nil {
			return err
		}
	}

	// Enhance audit_events table if needed (add missing columns).
	// Note: audit_events is now created with all columns above; this handles upgrades from older databases.
	enhanceAuditTable(ctx, tx)

	// Seed default roles and permissions
	if err := seedRolesAndPermissions(ctx, tx); err != nil {
		return err
	}

	// Seed a default admin user if not exists
	var count int
	if err := tx.QueryRowContext(ctx, `SELECT COUNT(1) FROM users WHERE username = 'admin'`).Scan(&count); err != nil {

@@ -32,13 +64,143 @@ func MigrateAndSeed(ctx context.Context, db *sql.DB) error {
	}
	if count == 0 {
		// note: simple seeded password: admin (do not use in prod)
		pwHash, err := auth.HashPassword("admin")
		if err != nil {
			return err
		}
		if _, err := tx.ExecContext(ctx, `INSERT INTO users (id, username, password_hash, role) VALUES (?, 'admin', ?, 'admin')`, "admin", pwHash); err != nil {
			return err
		}
		// Assign the admin role to the admin user
		var adminRoleID string
		if err := tx.QueryRowContext(ctx, `SELECT id FROM roles WHERE name = 'admin'`).Scan(&adminRoleID); err == nil {
			tx.ExecContext(ctx, `INSERT OR IGNORE INTO user_roles (user_id, role_id) VALUES (?, ?)`, "admin", adminRoleID)
		}
	}
	if err := tx.Commit(); err != nil {
		return err
	}
	return nil
}

func enhanceAuditTable(ctx context.Context, tx *sql.Tx) {
	// Check if the table exists first
	var tableExists int
	err := tx.QueryRowContext(ctx, `SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='audit_events'`).Scan(&tableExists)
	if err != nil || tableExists == 0 {
		// Table doesn't exist; it will be created above with all columns
		return
	}

	// Add columns that may be missing from older databases.
	// SQLite doesn't support IF NOT EXISTS for ALTER TABLE, so attempt each ALTER and tolerate failures.
	columns := []struct {
		name string
		stmt string
	}{
		{"actor", `ALTER TABLE audit_events ADD COLUMN actor TEXT;`},
		{"resource", `ALTER TABLE audit_events ADD COLUMN resource TEXT;`},
		{"payload_hash", `ALTER TABLE audit_events ADD COLUMN payload_hash TEXT;`},
		{"result", `ALTER TABLE audit_events ADD COLUMN result TEXT;`},
		{"client_ip", `ALTER TABLE audit_events ADD COLUMN client_ip TEXT;`},
	}

	for _, col := range columns {
		if _, err := tx.ExecContext(ctx, col.stmt); err != nil {
			// Column might already exist; only log unexpected errors
			if !strings.Contains(err.Error(), "duplicate column") && !strings.Contains(err.Error(), "no such table") {
				log.Printf("Note: %s column may already exist: %v", col.name, err)
			}
		}
	}
}

func seedRolesAndPermissions(ctx context.Context, tx *sql.Tx) error {
	// Seed default roles
	roles := []struct {
		id          string
		name        string
		description string
	}{
		{"admin", "admin", "Administrator with full access"},
		{"operator", "operator", "Operator with limited administrative access"},
		{"viewer", "viewer", "Read-only access"},
	}

	for _, r := range roles {
		if _, err := tx.ExecContext(ctx, `INSERT OR IGNORE INTO roles (id, name, description) VALUES (?, ?, ?)`, r.id, r.name, r.description); err != nil {
			return err
		}
	}

	// Seed default permissions
	permissions := []struct {
		id          string
		name        string
		description string
	}{
		{"storage.pool.create", "storage.pool.create", "Create storage pools"},
		{"storage.pool.scrub", "storage.pool.scrub", "Scrub storage pools"},
		{"storage.dataset.create", "storage.dataset.create", "Create datasets"},
		{"storage.dataset.snapshot", "storage.dataset.snapshot", "Create snapshots"},
		{"shares.nfs.create", "shares.nfs.create", "Create NFS shares"},
		{"shares.nfs.delete", "shares.nfs.delete", "Delete NFS shares"},
		{"shares.smb.create", "shares.smb.create", "Create SMB shares"},
		{"shares.smb.delete", "shares.smb.delete", "Delete SMB shares"},
		{"iscsi.target.create", "iscsi.target.create", "Create iSCSI targets"},
		{"iscsi.lun.create", "iscsi.lun.create", "Create iSCSI LUNs"},
		{"iscsi.lun.delete", "iscsi.lun.delete", "Delete iSCSI LUNs"},
		{"iscsi.lun.unmap", "iscsi.lun.unmap", "Unmap iSCSI LUNs"},
		{"iscsi.portal.create", "iscsi.portal.create", "Add iSCSI portals"},
		{"iscsi.initiator.create", "iscsi.initiator.create", "Add iSCSI initiators"},
		{"users.manage", "users.manage", "Manage users"},
		{"roles.manage", "roles.manage", "Manage roles and permissions"},
	}

	for _, p := range permissions {
		if _, err := tx.ExecContext(ctx, `INSERT OR IGNORE INTO permissions (id, name, description) VALUES (?, ?, ?)`, p.id, p.name, p.description); err != nil {
			return err
		}
	}

	// Assign all permissions to the admin role
	var adminRoleID string
	if err := tx.QueryRowContext(ctx, `SELECT id FROM roles WHERE name = 'admin'`).Scan(&adminRoleID); err != nil {
		return err
	}

	for _, p := range permissions {
		if _, err := tx.ExecContext(ctx, `INSERT OR IGNORE INTO role_permissions (role_id, permission_id) VALUES (?, ?)`, adminRoleID, p.id); err != nil {
			return err
		}
	}

	// Assign a subset of permissions to the operator role
	var operatorRoleID string
	if err := tx.QueryRowContext(ctx, `SELECT id FROM roles WHERE name = 'operator'`).Scan(&operatorRoleID); err == nil {
		operatorPerms := []string{
			"storage.pool.create",
			"storage.dataset.create",
			"storage.dataset.snapshot",
			"shares.nfs.create",
			"shares.nfs.delete",
			"shares.smb.create",
			"shares.smb.delete",
			"iscsi.target.create",
			"iscsi.lun.create",
			"iscsi.lun.delete",
			"iscsi.portal.create",
			"iscsi.initiator.create",
		}
		for _, permID := range operatorPerms {
			tx.ExecContext(ctx, `INSERT OR IGNORE INTO role_permissions (role_id, permission_id) VALUES (?, ?)`, operatorRoleID, permID)
		}
	}

	return nil
}
@@ -16,3 +16,28 @@ func (i *ISCSIAdapter) CreateLUN(ctx context.Context, target string, backstore s
	log.Printf("iscsi: CreateLUN target=%s backstore=%s lun=%d (stub)", target, backstore, lunID)
	return nil
}

func (i *ISCSIAdapter) CreateBackstore(ctx context.Context, name string, devpath string) error {
	log.Printf("iscsi: CreateBackstore name=%s dev=%s (stub)", name, devpath)
	return nil
}

func (i *ISCSIAdapter) DeleteLUN(ctx context.Context, target string, lunID int) error {
	log.Printf("iscsi: DeleteLUN target=%s lun=%d (stub)", target, lunID)
	return nil
}

func (i *ISCSIAdapter) AddPortal(ctx context.Context, iqn string, addr string, port int) error {
	log.Printf("iscsi: AddPortal iqn=%s addr=%s port=%d (stub)", iqn, addr, port)
	return nil
}

func (i *ISCSIAdapter) AddACL(ctx context.Context, iqn, initiator string) error {
	log.Printf("iscsi: AddACL iqn=%s initiator=%s (stub)", iqn, initiator)
	return nil
}

func (i *ISCSIAdapter) Save(ctx context.Context) error {
	log.Printf("iscsi: Save (stub)")
	return nil
}
@@ -4,7 +4,6 @@ import (
	"context"
	"fmt"
	"strings"
	"time"

	"github.com/example/storage-appliance/internal/domain"
	"github.com/example/storage-appliance/internal/infra/osexec"

@@ -100,6 +99,30 @@ func (a *Adapter) CreateDataset(ctx context.Context, name string, props map[stri
	return nil
}

// CreateZVol creates a block device zvol with the given size and optional props
func (a *Adapter) CreateZVol(ctx context.Context, name, size string, props map[string]string) error {
	// build: zfs create -V <size> [-o prop=val ...] <name>
	args := []string{"create", "-V", size}
	for k, v := range props {
		args = append(args, "-o", fmt.Sprintf("%s=%s", k, v))
	}
	args = append(args, name)
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "zfs", args...)
	if err != nil {
		return err
	}
	if code != 0 {
		return fmt.Errorf("zfs create vol failed: %s", stderr)
	}
	return nil
}

func (a *Adapter) Snapshot(ctx context.Context, dataset, snapName string) error {
	name := fmt.Sprintf("%s@%s", dataset, snapName)
	_, stderr, code, err := osexec.ExecWithRunner(a.Runner, ctx, "zfs", "snapshot", name)
@@ -3,7 +3,6 @@ package zfs

import (
	"context"
	"testing"
	"time"

	"github.com/example/storage-appliance/internal/infra/osexec"
)
@@ -3,15 +3,20 @@ package job
import (
	"context"
	"database/sql"
	"encoding/json"
	"log"
	"time"

	"github.com/example/storage-appliance/internal/audit"
	"github.com/example/storage-appliance/internal/domain"
	"github.com/example/storage-appliance/internal/infra/zfs"
	"github.com/google/uuid"
)

type Runner struct {
	DB    *sql.DB
	ZFS   *zfs.Adapter
	Audit audit.AuditLogger
}

func (r *Runner) Enqueue(ctx context.Context, j domain.Job) (string, error) {
@@ -22,16 +27,112 @@ func (r *Runner) Enqueue(ctx context.Context, j domain.Job) (string, error) {
	j.Status = "queued"
	j.CreatedAt = time.Now()
	j.UpdatedAt = time.Now()
	// persist details JSON if present
	detailsJSON := ""
	if j.Details != nil {
		b, _ := json.Marshal(j.Details)
		detailsJSON = string(b)
	}
	_, err := r.DB.ExecContext(ctx, `INSERT INTO jobs (id, type, status, progress, owner, created_at, updated_at, details) VALUES (?, ?, ?, ?, ?, ?, ?, ?)`,
		j.ID, j.Type, j.Status, j.Progress, j.Owner, j.CreatedAt, j.UpdatedAt, detailsJSON)
	if err != nil {
		return "", err
	}
	log.Printf("enqueued job %s (%s)", j.ID, j.Type)
	// run async worker (very simple worker for skeleton;
	// note it reuses the request ctx, so it may be cancelled with the request)
	go func() {
		// update running
		_ = r.updateStatus(ctx, j.ID, "running", 0)
		// execute based on job type
		switch j.Type {
		case "create-pool":
			// parse details: expect name and vdevs
			var name string
			var vdevs []string
			if j.Details != nil {
				if n, ok := j.Details["name"].(string); ok {
					name = n
				}
				if rawV, ok := j.Details["vdevs"].([]any); ok {
					for _, vv := range rawV {
						if s, ok := vv.(string); ok {
							vdevs = append(vdevs, s)
						}
					}
				}
			}
			_ = r.updateStatus(ctx, j.ID, "running", 10)
			if r.ZFS != nil {
				// call sync create pool
				if err := r.ZFS.CreatePoolSync(ctx, name, vdevs); err != nil {
					_ = r.updateStatus(ctx, j.ID, "failed", 0)
					if r.Audit != nil {
						r.Audit.Record(ctx, audit.Event{UserID: string(j.Owner), Action: "pool.create", ResourceType: "pool", ResourceID: name, Success: false, Details: map[string]any{"error": err.Error()}})
					}
					return
				}
				_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
				if r.Audit != nil {
					r.Audit.Record(ctx, audit.Event{UserID: string(j.Owner), Action: "pool.create", ResourceType: "pool", ResourceID: name, Success: true})
				}
				return
			}
			_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
		case "snapshot":
			_ = r.updateStatus(ctx, j.ID, "running", 10)
			// call zfs snapshot; expect dataset and name
			var dataset, snapName string
			if j.Details != nil {
				if d, ok := j.Details["dataset"].(string); ok {
					dataset = d
				}
				if s, ok := j.Details["snap_name"].(string); ok {
					snapName = s
				}
			}
			if r.ZFS != nil {
				if err := r.ZFS.Snapshot(ctx, dataset, snapName); err != nil {
					_ = r.updateStatus(ctx, j.ID, "failed", 0)
					if r.Audit != nil {
						r.Audit.Record(ctx, audit.Event{UserID: string(j.Owner), Action: "snapshot", ResourceType: "snapshot", ResourceID: dataset + "@" + snapName, Success: false, Details: map[string]any{"error": err.Error()}})
					}
					return
				}
				_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
				if r.Audit != nil {
					r.Audit.Record(ctx, audit.Event{UserID: string(j.Owner), Action: "snapshot", ResourceType: "snapshot", ResourceID: dataset + "@" + snapName, Success: true})
				}
				return
			}
			_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
		case "scrub":
			_ = r.updateStatus(ctx, j.ID, "running", 10)
			var pool string
			if j.Details != nil {
				if p, ok := j.Details["pool"].(string); ok {
					pool = p
				}
			}
			if r.ZFS != nil {
				if err := r.ZFS.ScrubStart(ctx, pool); err != nil {
					_ = r.updateStatus(ctx, j.ID, "failed", 0)
					if r.Audit != nil {
						r.Audit.Record(ctx, audit.Event{UserID: string(j.Owner), Action: "pool.scrub", ResourceType: "pool", ResourceID: pool, Success: false, Details: map[string]any{"error": err.Error()}})
					}
					return
				}
				_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
				if r.Audit != nil {
					r.Audit.Record(ctx, audit.Event{UserID: string(j.Owner), Action: "pool.scrub", ResourceType: "pool", ResourceID: pool, Success: true})
				}
				return
			}
			_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
		default:
			// unknown job types just succeed
			time.Sleep(500 * time.Millisecond)
			_ = r.updateStatus(ctx, j.ID, "succeeded", 100)
		}
	}()
	return id, nil
}

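The worker above asserts `j.Details["vdevs"].([]any)` rather than `[]string`; that is a consequence of the JSON round trip through the `details` column, where every JSON array decodes to `[]any`. A minimal sketch of that round trip (the `extractVdevs` helper is hypothetical, not part of the repo):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractVdevs mirrors the worker's type assertions: after a JSON round trip
// through the jobs table, Details arrays come back as []any, never []string.
func extractVdevs(raw []byte) (string, []string) {
	var details map[string]any
	if err := json.Unmarshal(raw, &details); err != nil {
		return "", nil
	}
	name, _ := details["name"].(string)
	var vdevs []string
	if rawV, ok := details["vdevs"].([]any); ok {
		for _, v := range rawV {
			if s, ok := v.(string); ok {
				vdevs = append(vdevs, s)
			}
		}
	}
	return name, vdevs
}

func main() {
	name, vdevs := extractVdevs([]byte(`{"name":"tank","vdevs":["/dev/sda","/dev/sdb"]}`))
	fmt.Println(name, vdevs) // tank [/dev/sda /dev/sdb]
}
```

Asserting directly to `[]string` would fail at runtime, which is why the element-by-element assertion is used.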
438	internal/monitoring/collectors.go	Normal file
@@ -0,0 +1,438 @@
package monitoring

import (
	"context"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"

	"github.com/example/storage-appliance/internal/infra/osexec"
	"github.com/example/storage-appliance/internal/service"
)

const (
	DefaultTimeout = 5 * time.Second
)

// MetricValue represents a single metric value
type MetricValue struct {
	Name   string
	Labels map[string]string
	Value  float64
	Type   string // "gauge" or "counter"
}

// MetricCollection represents a collection of metrics
type MetricCollection struct {
	Metrics []MetricValue
	Errors  []string
}

// Collector is the interface implemented by metric collectors
type Collector interface {
	Collect(ctx context.Context) MetricCollection
	Name() string
}

// ZFSCollector collects ZFS pool health and scrub status
type ZFSCollector struct {
	ZFSSvc service.ZFSService
	Runner osexec.Runner
}

func NewZFSCollector(zfsSvc service.ZFSService, runner osexec.Runner) *ZFSCollector {
	return &ZFSCollector{ZFSSvc: zfsSvc, Runner: runner}
}

func (c *ZFSCollector) Name() string {
	return "zfs"
}

func (c *ZFSCollector) Collect(ctx context.Context) MetricCollection {
	ctx, cancel := context.WithTimeout(ctx, DefaultTimeout)
	defer cancel()

	collection := MetricCollection{
		Metrics: []MetricValue{},
		Errors:  []string{},
	}

	// Get pool list
	pools, err := c.ZFSSvc.ListPools(ctx)
	if err != nil {
		collection.Errors = append(collection.Errors, fmt.Sprintf("failed to list pools: %v", err))
		return collection
	}

	for _, pool := range pools {
		// Pool health metric (1 = ONLINE, 0.5 = DEGRADED, 0 = FAULTED/OFFLINE)
		healthValue := 0.0
		switch strings.ToUpper(pool.Health) {
		case "ONLINE":
			healthValue = 1.0
		case "DEGRADED":
			healthValue = 0.5
		case "FAULTED", "OFFLINE", "UNAVAIL":
			healthValue = 0.0
		}

		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "zfs_pool_health",
			Labels: map[string]string{"pool": pool.Name},
			Value:  healthValue,
			Type:   "gauge",
		})

		// Get scrub status
		scrubStatus, err := c.getScrubStatus(ctx, pool.Name)
		if err != nil {
			collection.Errors = append(collection.Errors, fmt.Sprintf("failed to get scrub status for %s: %v", pool.Name, err))
			continue
		}

		// Scrub in progress (1 = yes, 0 = no)
		scrubInProgress := 0.0
		if strings.Contains(scrubStatus, "scan: scrub in progress") {
			scrubInProgress = 1.0
		}

		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "zfs_pool_scrub_in_progress",
			Labels: map[string]string{"pool": pool.Name},
			Value:  scrubInProgress,
			Type:   "gauge",
		})
	}

	return collection
}

func (c *ZFSCollector) getScrubStatus(ctx context.Context, pool string) (string, error) {
	out, _, _, err := osexec.ExecWithRunner(c.Runner, ctx, "zpool", "status", pool)
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(out, "\n") {
		if strings.Contains(line, "scan:") {
			return strings.TrimSpace(line), nil
		}
	}
	return "no-scan", nil
}

// SMARTCollector collects SMART health status
type SMARTCollector struct {
	Runner osexec.Runner
}

func NewSMARTCollector(runner osexec.Runner) *SMARTCollector {
	return &SMARTCollector{Runner: runner}
}

func (c *SMARTCollector) Name() string {
	return "smart"
}

func (c *SMARTCollector) Collect(ctx context.Context) MetricCollection {
	ctx, cancel := context.WithTimeout(ctx, DefaultTimeout)
	defer cancel()

	collection := MetricCollection{
		Metrics: []MetricValue{},
		Errors:  []string{},
	}

	// List all disks (simplified - try common devices)
	// In a real implementation, you'd scan /dev/ or use lsblk
	commonDisks := []string{"sda", "sdb", "sdc", "nvme0n1", "nvme1n1"}
	disks := []string{}
	for _, d := range commonDisks {
		disks = append(disks, fmt.Sprintf("/dev/%s", d))
	}

	// Check SMART health for each disk
	for _, disk := range disks {
		health, err := c.getSMARTHealth(ctx, disk)
		if err != nil {
			// Skip devices that don't exist or don't support SMART
			continue
		}

		// SMART health: 1 = PASSED, 0 = FAILED
		healthValue := 0.0
		if strings.Contains(strings.ToUpper(health), "PASSED") {
			healthValue = 1.0
		}

		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "smart_health",
			Labels: map[string]string{"device": disk},
			Value:  healthValue,
			Type:   "gauge",
		})
	}

	return collection
}

func (c *SMARTCollector) getSMARTHealth(ctx context.Context, device string) (string, error) {
	// Use smartctl -H to get health status
	out, _, code, err := osexec.ExecWithRunner(c.Runner, ctx, "smartctl", "-H", device)
	if err != nil || code != 0 {
		return "", fmt.Errorf("smartctl failed: %v", err)
	}
	return out, nil
}

// ServiceCollector collects service states
type ServiceCollector struct {
	Runner osexec.Runner
}

func NewServiceCollector(runner osexec.Runner) *ServiceCollector {
	return &ServiceCollector{Runner: runner}
}

func (c *ServiceCollector) Name() string {
	return "services"
}

func (c *ServiceCollector) Collect(ctx context.Context) MetricCollection {
	ctx, cancel := context.WithTimeout(ctx, DefaultTimeout)
	defer cancel()

	collection := MetricCollection{
		Metrics: []MetricValue{},
		Errors:  []string{},
	}

	services := []string{"nfs-server", "smbd", "iscsid", "iscsi", "minio"}

	for _, svc := range services {
		status, err := c.getServiceStatus(ctx, svc)
		if err != nil {
			collection.Errors = append(collection.Errors, fmt.Sprintf("failed to check %s: %v", svc, err))
			continue
		}

		// Service state: 1 = active/running, 0 = inactive/stopped
		// (exact match: "inactive" contains the substring "active", so a
		// Contains check would report stopped services as running)
		stateValue := 0.0
		st := strings.ToLower(strings.TrimSpace(status))
		if st == "active" || st == "running" {
			stateValue = 1.0
		}

		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "service_state",
			Labels: map[string]string{"service": svc},
			Value:  stateValue,
			Type:   "gauge",
		})
	}

	return collection
}

func (c *ServiceCollector) getServiceStatus(ctx context.Context, service string) (string, error) {
	// Try systemctl first
	out, _, code, err := osexec.ExecWithRunner(c.Runner, ctx, "systemctl", "is-active", service)
	if err == nil && code == 0 {
		return out, nil
	}

	// Fallback to checking process
	out, _, code, err = osexec.ExecWithRunner(c.Runner, ctx, "pgrep", "-f", service)
	if err == nil && code == 0 && strings.TrimSpace(out) != "" {
		return "running", nil
	}

	return "inactive", nil
}

// HostCollector collects host metrics from /proc
type HostCollector struct{}

func NewHostCollector() *HostCollector {
	return &HostCollector{}
}

func (c *HostCollector) Name() string {
	return "host"
}

func (c *HostCollector) Collect(ctx context.Context) MetricCollection {
	ctx, cancel := context.WithTimeout(ctx, DefaultTimeout)
	defer cancel()

	collection := MetricCollection{
		Metrics: []MetricValue{},
		Errors:  []string{},
	}

	// Load average
	loadavg, err := c.readLoadAvg()
	if err != nil {
		collection.Errors = append(collection.Errors, fmt.Sprintf("failed to read loadavg: %v", err))
	} else {
		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "host_load1",
			Labels: map[string]string{},
			Value:  loadavg.Load1,
			Type:   "gauge",
		})
		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "host_load5",
			Labels: map[string]string{},
			Value:  loadavg.Load5,
			Type:   "gauge",
		})
		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "host_load15",
			Labels: map[string]string{},
			Value:  loadavg.Load15,
			Type:   "gauge",
		})
	}

	// Memory info
	meminfo, err := c.readMemInfo()
	if err != nil {
		collection.Errors = append(collection.Errors, fmt.Sprintf("failed to read meminfo: %v", err))
	} else {
		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "host_memory_total_bytes",
			Labels: map[string]string{},
			Value:  meminfo.MemTotal,
			Type:   "gauge",
		})
		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "host_memory_free_bytes",
			Labels: map[string]string{},
			Value:  meminfo.MemFree,
			Type:   "gauge",
		})
		collection.Metrics = append(collection.Metrics, MetricValue{
			Name:   "host_memory_available_bytes",
			Labels: map[string]string{},
			Value:  meminfo.MemAvailable,
			Type:   "gauge",
		})
	}

	// Disk IO (simplified - read from /proc/diskstats)
	diskIO, err := c.readDiskIO()
	if err != nil {
		collection.Errors = append(collection.Errors, fmt.Sprintf("failed to read disk IO: %v", err))
	} else {
		for device, io := range diskIO {
			collection.Metrics = append(collection.Metrics, MetricValue{
				Name:   "host_disk_reads_completed",
				Labels: map[string]string{"device": device},
				Value:  io.ReadsCompleted,
				Type:   "counter",
			})
			collection.Metrics = append(collection.Metrics, MetricValue{
				Name:   "host_disk_writes_completed",
				Labels: map[string]string{"device": device},
				Value:  io.WritesCompleted,
				Type:   "counter",
			})
		}
	}

	return collection
}

type LoadAvg struct {
	Load1  float64
	Load5  float64
	Load15 float64
}

func (c *HostCollector) readLoadAvg() (LoadAvg, error) {
	data, err := os.ReadFile("/proc/loadavg")
	if err != nil {
		return LoadAvg{}, err
	}

	fields := strings.Fields(string(data))
	if len(fields) < 3 {
		return LoadAvg{}, fmt.Errorf("invalid loadavg format")
	}

	load1, _ := strconv.ParseFloat(fields[0], 64)
	load5, _ := strconv.ParseFloat(fields[1], 64)
	load15, _ := strconv.ParseFloat(fields[2], 64)

	return LoadAvg{Load1: load1, Load5: load5, Load15: load15}, nil
}

type MemInfo struct {
	MemTotal     float64
	MemFree      float64
	MemAvailable float64
}

func (c *HostCollector) readMemInfo() (MemInfo, error) {
	data, err := os.ReadFile("/proc/meminfo")
	if err != nil {
		return MemInfo{}, err
	}

	info := MemInfo{}
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		key := strings.TrimSuffix(fields[0], ":")
		value, _ := strconv.ParseFloat(fields[1], 64)
		// Values are in kB, convert to bytes
		valueBytes := value * 1024

		switch key {
		case "MemTotal":
			info.MemTotal = valueBytes
		case "MemFree":
			info.MemFree = valueBytes
		case "MemAvailable":
			info.MemAvailable = valueBytes
		}
	}

	return info, nil
}
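The `/proc/meminfo` parsing rule above (take the second field of a `Key: value kB` line and multiply by 1024) can be exercised against an in-memory sample. The `parseMemTotal` helper and its sample text are illustrative, not part of the repo:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemTotal applies the same rule as readMemInfo: find the MemTotal line,
// parse the kB value, and scale it to bytes.
func parseMemTotal(meminfo string) float64 {
	for _, line := range strings.Split(meminfo, "\n") {
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		if strings.TrimSuffix(fields[0], ":") == "MemTotal" {
			kb, _ := strconv.ParseFloat(fields[1], 64)
			return kb * 1024 // /proc/meminfo reports kB
		}
	}
	return 0
}

func main() {
	sample := "MemTotal:       16384000 kB\nMemFree:         1024000 kB"
	fmt.Println(parseMemTotal(sample))
}
```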
type DiskIO struct {
	ReadsCompleted  float64
	WritesCompleted float64
}

func (c *HostCollector) readDiskIO() (map[string]DiskIO, error) {
	data, err := os.ReadFile("/proc/diskstats")
	if err != nil {
		return nil, err
	}

	result := make(map[string]DiskIO)
	lines := strings.Split(string(data), "\n")
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) < 14 {
			continue
		}
		device := fields[2]
		reads, _ := strconv.ParseFloat(fields[3], 64)
		writes, _ := strconv.ParseFloat(fields[7], 64)

		result[device] = DiskIO{
			ReadsCompleted:  reads,
			WritesCompleted: writes,
		}
	}

	return result, nil
}

60	internal/monitoring/prometheus.go	Normal file
@@ -0,0 +1,60 @@
package monitoring

import (
	"context"
	"fmt"
	"strings"
)

// PrometheusExporter exports metrics in Prometheus format
type PrometheusExporter struct {
	Collectors []Collector
}

func NewPrometheusExporter(collectors ...Collector) *PrometheusExporter {
	return &PrometheusExporter{Collectors: collectors}
}

// Export collects all metrics and formats them as Prometheus text format
func (e *PrometheusExporter) Export(ctx context.Context) string {
	var builder strings.Builder
	allErrors := []string{}

	for _, collector := range e.Collectors {
		collection := collector.Collect(ctx)
		allErrors = append(allErrors, collection.Errors...)

		for _, metric := range collection.Metrics {
			// Format: metric_name{label1="value1",label2="value2"} value
			builder.WriteString(metric.Name)
			if len(metric.Labels) > 0 {
				builder.WriteString("{")
				first := true
				for k, v := range metric.Labels {
					if !first {
						builder.WriteString(",")
					}
					builder.WriteString(fmt.Sprintf(`%s="%s"`, k, escapeLabelValue(v)))
					first = false
				}
				builder.WriteString("}")
			}
			builder.WriteString(fmt.Sprintf(" %v\n", metric.Value))
		}
	}

	// Add error metrics if any
	if len(allErrors) > 0 {
		builder.WriteString(fmt.Sprintf("monitoring_collector_errors_total %d\n", len(allErrors)))
	}

	return builder.String()
}

func escapeLabelValue(v string) string {
	v = strings.ReplaceAll(v, "\\", "\\\\")
	v = strings.ReplaceAll(v, "\"", "\\\"")
	v = strings.ReplaceAll(v, "\n", "\\n")
	return v
}

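The escaping order in `escapeLabelValue` matters: backslashes must be escaped before quotes, otherwise the backslash introduced by the quote escape would itself be doubled. A small sketch of the same logic (standalone `escape` function, mirroring the one above):

```go
package main

import (
	"fmt"
	"strings"
)

// escape mirrors escapeLabelValue: backslash first, then quote, then newline.
func escape(v string) string {
	v = strings.ReplaceAll(v, `\`, `\\`)
	v = strings.ReplaceAll(v, `"`, `\"`)
	v = strings.ReplaceAll(v, "\n", `\n`)
	return v
}

func main() {
	// A label value containing a quote, rendered as a Prometheus sample line.
	fmt.Printf(`zfs_pool_health{pool="%s"} 1`+"\n", escape(`tank "fast"`))
}
```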
149	internal/monitoring/ui.go	Normal file
@@ -0,0 +1,149 @@
package monitoring

import (
	"context"
	"fmt"
	"strings"
	"time"
)

// UIMetric represents a metric for UI display
type UIMetric struct {
	Name      string
	Value     string
	Status    string // "ok", "warning", "error"
	Timestamp time.Time
	Error     string
}

// UIMetricGroup represents a group of metrics for UI display
type UIMetricGroup struct {
	Title   string
	Metrics []UIMetric
	Errors  []string
}

// UIExporter exports metrics in a format suitable for UI display
type UIExporter struct {
	Collectors []Collector
}

func NewUIExporter(collectors ...Collector) *UIExporter {
	return &UIExporter{Collectors: collectors}
}

// Export collects all metrics and formats them for UI
func (e *UIExporter) Export(ctx context.Context) []UIMetricGroup {
	groups := []UIMetricGroup{}

	for _, collector := range e.Collectors {
		collection := collector.Collect(ctx)
		// Capitalize first letter
		name := collector.Name()
		if len(name) > 0 {
			name = strings.ToUpper(name[:1]) + name[1:]
		}
		group := UIMetricGroup{
			Title:   name,
			Metrics: []UIMetric{},
			Errors:  collection.Errors,
		}

		for _, metric := range collection.Metrics {
			status := "ok"
			value := formatMetricValue(metric)

			// Determine status based on metric type and value
			if metric.Name == "zfs_pool_health" {
				if metric.Value == 0.0 {
					status = "error"
				} else if metric.Value == 0.5 {
					status = "warning"
				}
			} else if metric.Name == "smart_health" {
				if metric.Value == 0.0 {
					status = "error"
				}
			} else if metric.Name == "service_state" {
				if metric.Value == 0.0 {
					status = "error"
				}
			} else if strings.HasPrefix(metric.Name, "host_load") {
				if metric.Value > 10.0 {
					status = "warning"
				}
				if metric.Value > 20.0 {
					status = "error"
				}
			}

			group.Metrics = append(group.Metrics, UIMetric{
				Name:      formatMetricName(metric),
				Value:     value,
				Status:    status,
				Timestamp: time.Now(),
			})
		}

		groups = append(groups, group)
	}

	return groups
}

func formatMetricName(metric MetricValue) string {
	name := metric.Name
	if len(metric.Labels) > 0 {
		labels := []string{}
		for k, v := range metric.Labels {
			labels = append(labels, fmt.Sprintf("%s=%s", k, v))
		}
		name = fmt.Sprintf("%s{%s}", name, strings.Join(labels, ", "))
	}
	return name
}

func formatMetricValue(metric MetricValue) string {
	switch metric.Name {
	case "zfs_pool_health":
		if metric.Value == 1.0 {
			return "ONLINE"
		} else if metric.Value == 0.5 {
			return "DEGRADED"
		}
		return "FAULTED"
	case "zfs_pool_scrub_in_progress":
		if metric.Value == 1.0 {
			return "In Progress"
		}
		return "Idle"
	case "smart_health":
		if metric.Value == 1.0 {
			return "PASSED"
		}
		return "FAILED"
	case "service_state":
		if metric.Value == 1.0 {
			return "Running"
		}
		return "Stopped"
	case "host_load1", "host_load5", "host_load15":
		return fmt.Sprintf("%.2f", metric.Value)
	case "host_memory_total_bytes", "host_memory_free_bytes", "host_memory_available_bytes":
		return formatBytes(metric.Value)
	default:
		return fmt.Sprintf("%.2f", metric.Value)
	}
}

func formatBytes(bytes float64) string {
	units := []string{"B", "KB", "MB", "GB", "TB"}
	value := bytes
	unit := 0
	for value >= 1024 && unit < len(units)-1 {
		value /= 1024
		unit++
	}
	return fmt.Sprintf("%.2f %s", value, units[unit])
}

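A quick check of the byte-formatting helper above. Note it divides by 1024 while labelling the units "KB"/"MB"; strictly those are KiB/MiB, which may be worth renaming if SI precision matters. The copy below reproduces the helper so it can run standalone:

```go
package main

import "fmt"

// formatBytes reproduces the ui.go helper: 1024 divisors with "KB"/"MB"
// style labels (binary units with SI-looking names).
func formatBytes(bytes float64) string {
	units := []string{"B", "KB", "MB", "GB", "TB"}
	value := bytes
	unit := 0
	for value >= 1024 && unit < len(units)-1 {
		value /= 1024
		unit++
	}
	return fmt.Sprintf("%.2f %s", value, units[unit])
}

func main() {
	fmt.Println(formatBytes(1536))       // 1.50 KB
	fmt.Println(formatBytes(1073741824)) // 1.00 GB
}
```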
@@ -12,9 +12,45 @@ type DiskService interface {

type ZFSService interface {
	ListPools(ctx context.Context) ([]domain.Pool, error)
	// CreatePool is a higher level operation handled by StorageService with jobs
	// CreatePool(ctx context.Context, name string, vdevs []string) (string, error)
	GetPoolStatus(ctx context.Context, pool string) (domain.PoolHealth, error)
	ListDatasets(ctx context.Context, pool string) ([]domain.Dataset, error)
	CreateDataset(ctx context.Context, name string, props map[string]string) error
	Snapshot(ctx context.Context, dataset, snapName string) error
	ScrubStart(ctx context.Context, pool string) error
	ScrubStatus(ctx context.Context, pool string) (string, error)
}

type JobRunner interface {
	Enqueue(ctx context.Context, j domain.Job) (string, error)
}

type SharesService interface {
	ListNFS(ctx context.Context) ([]domain.Share, error)
	CreateNFS(ctx context.Context, user, role, name, path string, opts map[string]string) (string, error)
	DeleteNFS(ctx context.Context, user, role, id string) error
	NFSStatus(ctx context.Context) (string, error)
	ListSMB(ctx context.Context) ([]domain.Share, error)
	CreateSMB(ctx context.Context, user, role, name, path string, readOnly bool, allowedUsers []string) (string, error)
	DeleteSMB(ctx context.Context, user, role, id string) error
}

type ObjectService interface {
	SetSettings(ctx context.Context, user, role string, s map[string]any) error
	GetSettings(ctx context.Context) (map[string]any, error)
	ListBuckets(ctx context.Context) ([]string, error)
	CreateBucket(ctx context.Context, user, role, name string) (string, error)
}

type ISCSIService interface {
	ListTargets(ctx context.Context) ([]map[string]any, error)
	CreateTarget(ctx context.Context, user, role, name, iqn string) (string, error)
	CreateLUN(ctx context.Context, user, role, targetID, lunName string, size string, blocksize int) (string, error)
	DeleteLUN(ctx context.Context, user, role, id string, force bool) error
	UnmapLUN(ctx context.Context, user, role, id string) error
	AddPortal(ctx context.Context, user, role, targetID, address string, port int) (string, error)
	AddInitiator(ctx context.Context, user, role, targetID, initiatorIQN string) (string, error)
	ListLUNs(ctx context.Context, targetID string) ([]map[string]any, error)
	GetTargetInfo(ctx context.Context, targetID string) (map[string]any, error)
}

228	internal/service/iscsi/iscsi.go	Normal file
@@ -0,0 +1,228 @@
package iscsi

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"github.com/google/uuid"

	"github.com/example/storage-appliance/internal/audit"
	"github.com/example/storage-appliance/internal/infra/iscsi"
	"github.com/example/storage-appliance/internal/infra/zfs"
)

var (
	ErrForbidden = errors.New("forbidden")
)

type ISCSIService struct {
	DB    *sql.DB
	ZFS   *zfs.Adapter
	ISCSI *iscsi.Adapter
	Audit audit.AuditLogger
}

func NewISCSIService(db *sql.DB, z *zfs.Adapter, i *iscsi.Adapter, a audit.AuditLogger) *ISCSIService {
	return &ISCSIService{DB: db, ZFS: z, ISCSI: i, Audit: a}
}

func (s *ISCSIService) ListTargets(ctx context.Context) ([]map[string]any, error) {
	rows, err := s.DB.QueryContext(ctx, `SELECT id, iqn, name, created_at FROM iscsi_targets`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	res := []map[string]any{}
	for rows.Next() {
		var id, iqn, name string
		var created time.Time
		if err := rows.Scan(&id, &iqn, &name, &created); err != nil {
			return nil, err
		}
		res = append(res, map[string]any{"id": id, "iqn": iqn, "name": name, "created_at": created})
	}
	return res, nil
}

func (s *ISCSIService) CreateTarget(ctx context.Context, user, role, name, iqn string) (string, error) {
	if role != "admin" {
		return "", ErrForbidden
	}
	if iqn == "" || !strings.HasPrefix(iqn, "iqn.") {
		return "", errors.New("invalid IQN")
	}
	id := uuid.New().String()
	if _, err := s.DB.ExecContext(ctx, `INSERT INTO iscsi_targets (id, iqn, name) VALUES (?, ?, ?)`, id, iqn, name); err != nil {
		return "", err
	}
	if s.ISCSI != nil {
		if err := s.ISCSI.CreateTarget(ctx, iqn); err != nil {
			return "", err
		}
		if err := s.ISCSI.Save(ctx); err != nil {
			return "", err
		}
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "iscsi.target.create", ResourceType: "iscsi_target", ResourceID: id, Success: true})
	}
	return id, nil
}

// CreateLUN creates a zvol and maps it as a LUN for the IQN. lunName is the zvol path, e.g. pool/dataset/vol
func (s *ISCSIService) CreateLUN(ctx context.Context, user, role, targetID, lunName string, size string, blocksize int) (string, error) {
	if role != "admin" && role != "operator" {
		return "", ErrForbidden
	}
	// fetch target IQN
	var iqn string
	if err := s.DB.QueryRowContext(ctx, `SELECT iqn FROM iscsi_targets WHERE id = ?`, targetID).Scan(&iqn); err != nil {
		return "", err
	}
	// build zvol name
	zvol := lunName // expect fully qualified dataset, e.g., pool/iscsi/target/lun0
	// create zvol via zfs adapter
	props := map[string]string{}
	if blocksize > 0 {
		// convert bytes to K unit if divisible
		// For simplicity, just set volblocksize as "8K" or "512"; attempt simple conversion
		props["volblocksize"] = fmt.Sprintf("%d", blocksize)
	}
	if s.ZFS != nil {
		if _, err := s.ZFS.ListDatasets(ctx, ""); err == nil { // no-op to check connectivity
		}
		if err := s.ZFS.CreateZVol(ctx, zvol, size, props); err != nil {
			return "", err
		}
	}
	// backstore name and device path
	bsName := "bs-" + uuid.New().String()
	devpath := "/dev/zvol/" + zvol
	if s.ISCSI != nil {
		if err := s.ISCSI.CreateBackstore(ctx, bsName, devpath); err != nil {
			return "", err
		}
	}
	// determine LUN ID as next available for target
	var maxLun sql.NullInt64
	if err := s.DB.QueryRowContext(ctx, `SELECT MAX(lun_id) FROM iscsi_luns WHERE target_id = ?`, targetID).Scan(&maxLun); err != nil && err != sql.ErrNoRows {
		return "", err
	}
	nextLun := 0
	if maxLun.Valid {
		nextLun = int(maxLun.Int64) + 1
	}
	if s.ISCSI != nil {
		if err := s.ISCSI.CreateLUN(ctx, iqn, bsName, nextLun); err != nil {
			return "", err
		}
		if err := s.ISCSI.Save(ctx); err != nil {
			return "", err
		}
	}
	id := uuid.New().String()
	if _, err := s.DB.ExecContext(ctx, `INSERT INTO iscsi_luns (id, target_id, lun_id, zvol, size, blocksize, mapped) VALUES (?, ?, ?, ?, ?, ?, 1)`, id, targetID, nextLun, zvol, sizeToInt(size), blocksize); err != nil {
		return "", err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "iscsi.lun.create", ResourceType: "iscsi_lun", ResourceID: id, Success: true})
	}
	return id, nil
}

func sizeToInt(s string) int {
	// naive conversion: strip trailing G/M/K
	// This function can be improved; for now return 0
	return 0
}

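Since `sizeToInt` is still a stub (it always returns 0, so every row in `iscsi_luns` records size 0), here is a minimal sketch of what a real implementation could look like. The suffix handling and the `parseSize` name are assumptions, not the repo's API:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSize converts "10G", "512M", "8K", "1T" or a plain byte count into a
// byte count, using binary (1024-based) multipliers as ZFS does for -V sizes.
func parseSize(s string) (int64, error) {
	s = strings.TrimSpace(strings.ToUpper(s))
	mult := int64(1)
	switch {
	case strings.HasSuffix(s, "T"):
		mult, s = 1<<40, strings.TrimSuffix(s, "T")
	case strings.HasSuffix(s, "G"):
		mult, s = 1<<30, strings.TrimSuffix(s, "G")
	case strings.HasSuffix(s, "M"):
		mult, s = 1<<20, strings.TrimSuffix(s, "M")
	case strings.HasSuffix(s, "K"):
		mult, s = 1<<10, strings.TrimSuffix(s, "K")
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return n * mult, nil
}

func main() {
	n, _ := parseSize("10G")
	fmt.Println(n) // 10737418240
}
```

Returning an `int64` (rather than `int`) also avoids overflow for multi-terabyte zvols on 32-bit builds.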
func (s *ISCSIService) ListLUNs(ctx context.Context, targetID string) ([]map[string]any, error) {
	rows, err := s.DB.QueryContext(ctx, `SELECT id, lun_id, zvol, size, blocksize, mapped, created_at FROM iscsi_luns WHERE target_id = ?`, targetID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	res := []map[string]any{}
	for rows.Next() {
		var id, zvol string
		var lunID, size, blocksize, mapped int
		var created time.Time
		if err := rows.Scan(&id, &lunID, &zvol, &size, &blocksize, &mapped, &created); err != nil {
			return nil, err
		}
		res = append(res, map[string]any{"id": id, "lun_id": lunID, "zvol": zvol, "size": size, "blocksize": blocksize, "mapped": mapped == 1, "created_at": created})
	}
	return res, nil
}

func (s *ISCSIService) GetTargetInfo(ctx context.Context, targetID string) (map[string]any, error) {
	var iqn string
	if err := s.DB.QueryRowContext(ctx, `SELECT iqn FROM iscsi_targets WHERE id = ?`, targetID).Scan(&iqn); err != nil {
		return nil, err
	}
	portals := []map[string]any{}
	rows, err := s.DB.QueryContext(ctx, `SELECT id, address, port FROM iscsi_portals WHERE target_id = ?`, targetID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	for rows.Next() {
		var id, address string
		var port int
		if err := rows.Scan(&id, &address, &port); err != nil {
			return nil, err
		}
		portals = append(portals, map[string]any{"id": id, "address": address, "port": port})
	}
	inits := []map[string]any{}
	rows2, err := s.DB.QueryContext(ctx, `SELECT id, initiator_iqn FROM iscsi_initiators WHERE target_id = ?`, targetID)
	if err != nil {
		return nil, err
	}
	defer rows2.Close()
	for rows2.Next() {
		var id, iqnStr string
		if err := rows2.Scan(&id, &iqnStr); err != nil {
			return nil, err
		}
		inits = append(inits, map[string]any{"id": id, "iqn": iqnStr})
	}
	return map[string]any{"iqn": iqn, "portals": portals, "initiators": inits}, nil
}

func (s *ISCSIService) DeleteLUN(ctx context.Context, user, role, id string, force bool) error {
	if role != "admin" {
		return ErrForbidden
	}
	// check LUN
	var mappedInt, lunID int
	var targetID string
	if err := s.DB.QueryRowContext(ctx, `SELECT target_id, lun_id, mapped FROM iscsi_luns WHERE id = ?`, id).Scan(&targetID, &lunID, &mappedInt); err != nil {
		return err
	}
	if mappedInt == 1 && !force {
		return errors.New("LUN is mapped; unmap (drain) before deletion or specify force")
	}
	// delete via adapter
	if s.ISCSI != nil {
		var iqn string
		if err := s.DB.QueryRowContext(ctx, `SELECT iqn FROM iscsi_targets WHERE id = ?`, targetID).Scan(&iqn); err != nil {
			return err
		}
		if err := s.ISCSI.DeleteLUN(ctx, iqn, lunID); err != nil {
			return err
		}
		if err := s.ISCSI.Save(ctx); err != nil {
			return err
		}
	}
	if _, err := s.DB.ExecContext(ctx, `DELETE FROM iscsi_luns WHERE id = ?`, id); err != nil {
		return err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "iscsi.lun.delete", ResourceType: "iscsi_lun", ResourceID: id, Success: true})
	}
	return nil
}

// UnmapLUN removes the LUN mapping from the target and sets mapped to false in the DB.
func (s *ISCSIService) UnmapLUN(ctx context.Context, user, role, id string) error {
	if role != "admin" && role != "operator" {
		return ErrForbidden
	}
	var targetID string
	var lunID int
	if err := s.DB.QueryRowContext(ctx, `SELECT target_id, lun_id FROM iscsi_luns WHERE id = ?`, id).Scan(&targetID, &lunID); err != nil {
		return err
	}
	if s.ISCSI != nil {
		var iqn string
		if err := s.DB.QueryRowContext(ctx, `SELECT iqn FROM iscsi_targets WHERE id = ?`, targetID).Scan(&iqn); err != nil {
			return err
		}
		if err := s.ISCSI.DeleteLUN(ctx, iqn, lunID); err != nil {
			return err
		}
		if err := s.ISCSI.Save(ctx); err != nil {
			return err
		}
	}
	if _, err := s.DB.ExecContext(ctx, `UPDATE iscsi_luns SET mapped = 0 WHERE id = ?`, id); err != nil {
		return err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "iscsi.lun.unmap", ResourceType: "iscsi_lun", ResourceID: id, Success: true})
	}
	return nil
}

func (s *ISCSIService) AddPortal(ctx context.Context, user, role, targetID, address string, port int) (string, error) {
	if role != "admin" && role != "operator" {
		return "", ErrForbidden
	}
	// verify target exists and fetch IQN
	var iqn string
	if err := s.DB.QueryRowContext(ctx, `SELECT iqn FROM iscsi_targets WHERE id = ?`, targetID).Scan(&iqn); err != nil {
		return "", err
	}
	// apply via the adapter first so a failure does not leave an orphaned DB row
	if s.ISCSI != nil {
		if err := s.ISCSI.AddPortal(ctx, iqn, address, port); err != nil {
			return "", err
		}
		if err := s.ISCSI.Save(ctx); err != nil {
			return "", err
		}
	}
	id := uuid.New().String()
	if _, err := s.DB.ExecContext(ctx, `INSERT INTO iscsi_portals (id, target_id, address, port) VALUES (?, ?, ?, ?)`, id, targetID, address, port); err != nil {
		return "", err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "iscsi.portal.add", ResourceType: "iscsi_portal", ResourceID: id, Success: true})
	}
	return id, nil
}

func (s *ISCSIService) AddInitiator(ctx context.Context, user, role, targetID, initiatorIQN string) (string, error) {
	if role != "admin" && role != "operator" {
		return "", ErrForbidden
	}
	var iqn string
	if err := s.DB.QueryRowContext(ctx, `SELECT iqn FROM iscsi_targets WHERE id = ?`, targetID).Scan(&iqn); err != nil {
		return "", err
	}
	// apply the ACL via the adapter first so a failure does not leave an orphaned DB row
	if s.ISCSI != nil {
		if err := s.ISCSI.AddACL(ctx, iqn, initiatorIQN); err != nil {
			return "", err
		}
		if err := s.ISCSI.Save(ctx); err != nil {
			return "", err
		}
	}
	id := uuid.New().String()
	if _, err := s.DB.ExecContext(ctx, `INSERT INTO iscsi_initiators (id, target_id, initiator_iqn) VALUES (?, ?, ?)`, id, targetID, initiatorIQN); err != nil {
		return "", err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "iscsi.initiator.add", ResourceType: "iscsi_initiator", ResourceID: id, Success: true})
	}
	return id, nil
}

@@ -10,9 +10,11 @@ import (
)

var (
	_ service.DiskService   = (*MockDiskService)(nil)
	_ service.ZFSService    = (*MockZFSService)(nil)
	_ service.JobRunner     = (*MockJobRunner)(nil)
	_ service.SharesService = (*MockSharesService)(nil)
	_ service.ISCSIService  = (*MockISCSIService)(nil)
)

type MockDiskService struct{}
@@ -32,8 +34,32 @@ func (m *MockZFSService) ListPools(ctx context.Context) ([]domain.Pool, error) {
}

func (m *MockZFSService) CreatePool(ctx context.Context, name string, vdevs []string) (string, error) {
	// not implemented on adapter-level mock
	return "", nil
}

func (m *MockZFSService) GetPoolStatus(ctx context.Context, pool string) (domain.PoolHealth, error) {
	return domain.PoolHealth{Pool: pool, Status: "ONLINE", Detail: "mocked"}, nil
}

func (m *MockZFSService) ListDatasets(ctx context.Context, pool string) ([]domain.Dataset, error) {
	return []domain.Dataset{{Name: pool + "/dataset1", Pool: pool, Type: "filesystem"}}, nil
}

func (m *MockZFSService) CreateDataset(ctx context.Context, name string, props map[string]string) error {
	return nil
}

func (m *MockZFSService) Snapshot(ctx context.Context, dataset, snapName string) error {
	return nil
}

func (m *MockZFSService) ScrubStart(ctx context.Context, pool string) error {
	return nil
}

func (m *MockZFSService) ScrubStatus(ctx context.Context, pool string) (string, error) {
	return "none", nil
}

type MockJobRunner struct{}
@@ -45,3 +71,67 @@ func (m *MockJobRunner) Enqueue(ctx context.Context, j domain.Job) (string, erro
	}()
	return uuid.New().String(), nil
}

type MockSharesService struct{}

func (m *MockSharesService) ListNFS(ctx context.Context) ([]domain.Share, error) {
	return []domain.Share{{ID: domain.UUID(uuid.New().String()), Name: "data", Path: "tank/ds", Type: "nfs"}}, nil
}

func (m *MockSharesService) CreateNFS(ctx context.Context, user, role, name, path string, opts map[string]string) (string, error) {
	return "share-" + uuid.New().String(), nil
}

func (m *MockSharesService) DeleteNFS(ctx context.Context, user, role, id string) error {
	return nil
}

func (m *MockSharesService) NFSStatus(ctx context.Context) (string, error) {
	return "active", nil
}

func (m *MockSharesService) ListSMB(ctx context.Context) ([]domain.Share, error) {
	return []domain.Share{{ID: domain.UUID(uuid.New().String()), Name: "smb1", Path: "tank/ds", Type: "smb", Config: map[string]string{"read_only": "false"}}}, nil
}

func (m *MockSharesService) CreateSMB(ctx context.Context, user, role, name, path string, readOnly bool, allowedUsers []string) (string, error) {
	return "smb-" + uuid.New().String(), nil
}

func (m *MockSharesService) DeleteSMB(ctx context.Context, user, role, id string) error {
	return nil
}

type MockISCSIService struct{}

func (m *MockISCSIService) ListTargets(ctx context.Context) ([]map[string]any, error) {
	return []map[string]any{{"id": "t-1", "iqn": "iqn.2025-12.org.example:target1", "name": "test"}}, nil
}

func (m *MockISCSIService) CreateTarget(ctx context.Context, user, role, name, iqn string) (string, error) {
	return "t-" + uuid.New().String(), nil
}

func (m *MockISCSIService) CreateLUN(ctx context.Context, user, role, targetID, lunName string, size string, blocksize int) (string, error) {
	return "lun-" + uuid.New().String(), nil
}

func (m *MockISCSIService) DeleteLUN(ctx context.Context, user, role, id string, force bool) error {
	return nil
}

func (m *MockISCSIService) ListLUNs(ctx context.Context, targetID string) ([]map[string]any, error) {
	return []map[string]any{{"id": "lun-1", "lun_id": 0, "zvol": "tank/ds/vol1", "size": 10737418240}}, nil
}

func (m *MockISCSIService) UnmapLUN(ctx context.Context, user, role, id string) error {
	return nil
}

func (m *MockISCSIService) AddPortal(ctx context.Context, user, role, targetID, address string, port int) (string, error) {
	return "portal-" + uuid.New().String(), nil
}

func (m *MockISCSIService) AddInitiator(ctx context.Context, user, role, targetID, initiatorIQN string) (string, error) {
	return "init-" + uuid.New().String(), nil
}

func (m *MockISCSIService) GetTargetInfo(ctx context.Context, targetID string) (map[string]any, error) {
	return map[string]any{"iqn": "iqn.2025-12.org.example:target1", "portals": []map[string]any{{"id": "p-1", "address": "10.0.0.1", "port": 3260}}, "initiators": []map[string]any{{"id": "i-1", "iqn": "iqn.1993-08.org.debian:01"}}}, nil
}

159	internal/service/objectstore/objectstore.go	Normal file
@@ -0,0 +1,159 @@
package objectstore

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"time"

	"github.com/example/storage-appliance/internal/audit"
	"github.com/example/storage-appliance/internal/infra/crypto"
	"github.com/example/storage-appliance/internal/infra/minio"
)

var ErrForbidden = errors.New("forbidden")

type Settings struct {
	ID        string
	Name      string
	AccessKey string
	SecretKey string
	DataPath  string
	Port      int
	TLS       bool
	CreatedAt time.Time
}

type ObjectService struct {
	DB    *sql.DB
	Minio *minio.Adapter
	Audit audit.AuditLogger
	// encryption key for secret storage
	Key []byte
}

func NewObjectService(db *sql.DB, m *minio.Adapter, a audit.AuditLogger, key []byte) *ObjectService {
	return &ObjectService{DB: db, Minio: m, Audit: a, Key: key}
}

func (s *ObjectService) SetSettings(ctx context.Context, user, role string, stMap map[string]any) error {
	if role != "admin" {
		return ErrForbidden
	}
	// convert map to Settings struct for local use
	st := Settings{}
	if v, ok := stMap["access_key"].(string); ok {
		st.AccessKey = v
	}
	if v, ok := stMap["secret_key"].(string); ok {
		st.SecretKey = v
	}
	if v, ok := stMap["data_path"].(string); ok {
		st.DataPath = v
	}
	if v, ok := stMap["name"].(string); ok {
		st.Name = v
	}
	if v, ok := stMap["port"].(int); ok {
		st.Port = v
	}
	if v, ok := stMap["tls"].(bool); ok {
		st.TLS = v
	}

	// encrypt access key and secret key before persisting
	if len(s.Key) != 32 {
		return errors.New("encryption key must be 32 bytes")
	}
	encAccess, err := crypto.Encrypt(s.Key, st.AccessKey)
	if err != nil {
		return err
	}
	encSecret, err := crypto.Encrypt(s.Key, st.SecretKey)
	if err != nil {
		return err
	}
	// upsert into DB (single row)
	if _, err := s.DB.ExecContext(ctx, `INSERT OR REPLACE INTO object_storage (id, name, access_key, secret_key, data_path, port, tls) VALUES ('minio', ?, ?, ?, ?, ?, ?)`, st.Name, encAccess, encSecret, st.DataPath, st.Port, boolToInt(st.TLS)); err != nil {
		return err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "object.settings.update", ResourceType: "object_storage", ResourceID: "minio", Success: true})
	}
	if s.Minio != nil {
		// write env file and reload the service
		settings := minio.Settings{AccessKey: st.AccessKey, SecretKey: st.SecretKey, DataPath: st.DataPath, Port: st.Port, TLS: st.TLS}
		if err := s.Minio.WriteEnv(ctx, settings); err != nil {
			return err
		}
		if err := s.Minio.Reload(ctx); err != nil {
			return err
		}
	}
	return nil
}

func (s *ObjectService) GetSettings(ctx context.Context) (map[string]any, error) {
	var st Settings
	row := s.DB.QueryRowContext(ctx, `SELECT name, access_key, secret_key, data_path, port, tls, created_at FROM object_storage WHERE id = 'minio'`)
	var encAccess, encSecret string
	var tlsInt int
	if err := row.Scan(&st.Name, &encAccess, &encSecret, &st.DataPath, &st.Port, &tlsInt, &st.CreatedAt); err != nil {
		return nil, err
	}
	st.TLS = tlsInt == 1
	if len(s.Key) == 32 {
		if a, err := crypto.Decrypt(s.Key, encAccess); err == nil {
			st.AccessKey = a
		}
		if sec, err := crypto.Decrypt(s.Key, encSecret); err == nil {
			st.SecretKey = sec
		}
	}
	return map[string]any{"name": st.Name, "access_key": st.AccessKey, "secret_key": st.SecretKey, "data_path": st.DataPath, "port": st.Port, "tls": st.TLS, "created_at": st.CreatedAt}, nil
}

func boolToInt(b bool) int {
	if b {
		return 1
	}
	return 0
}

// ListBuckets queries via the minio adapter, falling back to DB-persisted buckets.
func (s *ObjectService) ListBuckets(ctx context.Context) ([]string, error) {
	if s.Minio != nil {
		// ensure the mc alias is configured before querying
		stMap, err := s.GetSettings(ctx)
		if err != nil {
			return nil, err
		}
		alias := "appliance"
		mSet := minio.Settings{}
		if v, ok := stMap["access_key"].(string); ok {
			mSet.AccessKey = v
		}
		if v, ok := stMap["secret_key"].(string); ok {
			mSet.SecretKey = v
		}
		if v, ok := stMap["data_path"].(string); ok {
			mSet.DataPath = v
		}
		if v, ok := stMap["port"].(int); ok {
			mSet.Port = v
		}
		if v, ok := stMap["tls"].(bool); ok {
			mSet.TLS = v
		}
		if err := s.Minio.ConfigureMC(ctx, alias, mSet); err != nil {
			return nil, err
		}
		return s.Minio.ListBuckets(ctx, alias)
	}
	// fall back to DB-persisted buckets
	rows, err := s.DB.QueryContext(ctx, `SELECT name FROM buckets`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var res []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, err
		}
		res = append(res, name)
	}
	return res, nil
}

func (s *ObjectService) CreateBucket(ctx context.Context, user, role, name string) (string, error) {
	if role != "admin" && role != "operator" {
		return "", ErrForbidden
	}
	// attempt via minio adapter
	if s.Minio != nil {
		stMap, err := s.GetSettings(ctx)
		if err != nil {
			return "", err
		}
		alias := "appliance"
		mSet := minio.Settings{}
		if v, ok := stMap["access_key"].(string); ok {
			mSet.AccessKey = v
		}
		if v, ok := stMap["secret_key"].(string); ok {
			mSet.SecretKey = v
		}
		if v, ok := stMap["data_path"].(string); ok {
			mSet.DataPath = v
		}
		if v, ok := stMap["port"].(int); ok {
			mSet.Port = v
		}
		if v, ok := stMap["tls"].(bool); ok {
			mSet.TLS = v
		}
		if err := s.Minio.ConfigureMC(ctx, alias, mSet); err != nil {
			return "", err
		}
		if err := s.Minio.CreateBucket(ctx, alias, name); err != nil {
			return "", err
		}
		// persist
		id := fmt.Sprintf("bucket-%d", time.Now().UnixNano())
		if _, err := s.DB.ExecContext(ctx, `INSERT INTO buckets (id, name) VALUES (?, ?)`, id, name); err != nil {
			return "", err
		}
		if s.Audit != nil {
			s.Audit.Record(ctx, audit.Event{UserID: user, Action: "object.bucket.create", ResourceType: "bucket", ResourceID: name, Success: true})
		}
		return id, nil
	}
	return "", errors.New("no minio adapter configured")
}

225	internal/service/shares/shares.go	Normal file
@@ -0,0 +1,225 @@
package shares

import (
	"context"
	"database/sql"
	"encoding/json"
	"errors"
	"fmt"
	"strings"

	"github.com/google/uuid"

	"github.com/example/storage-appliance/internal/audit"
	"github.com/example/storage-appliance/internal/domain"
	"github.com/example/storage-appliance/internal/infra/nfs"
	"github.com/example/storage-appliance/internal/infra/samba"
)

var ErrForbidden = errors.New("forbidden")

type SharesService struct {
	DB    *sql.DB
	NFS   *nfs.Adapter
	Samba *samba.Adapter
	Audit audit.AuditLogger
}

func NewSharesService(db *sql.DB, n *nfs.Adapter, s *samba.Adapter, a audit.AuditLogger) *SharesService {
	return &SharesService{DB: db, NFS: n, Samba: s, Audit: a}
}

func (s *SharesService) ListNFS(ctx context.Context) ([]domain.Share, error) {
	rows, err := s.DB.QueryContext(ctx, `SELECT id, name, path, type, options FROM shares WHERE type = 'nfs'`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var res []domain.Share
	for rows.Next() {
		var id, name, path, typ, options string
		if err := rows.Scan(&id, &name, &path, &typ, &options); err != nil {
			return nil, err
		}
		var optMap map[string]string
		if options != "" {
			_ = json.Unmarshal([]byte(options), &optMap)
		}
		res = append(res, domain.Share{ID: domain.UUID(id), Name: name, Path: path, Type: typ, Config: optMap})
	}
	return res, nil
}

// CreateNFS stores a new NFS export, re-renders /etc/exports, and applies it.
func (s *SharesService) CreateNFS(ctx context.Context, user, role, name, path string, opts map[string]string) (string, error) {
	if role != "admin" && role != "operator" {
		return "", ErrForbidden
	}
	// Verify path is a known dataset: check the datasets table for a matching name
	var count int
	if err := s.DB.QueryRowContext(ctx, `SELECT COUNT(1) FROM datasets WHERE name = ?`, path).Scan(&count); err != nil {
		return "", err
	}
	if count == 0 {
		return "", fmt.Errorf("path not a known dataset: %s", path)
	}
	// Prevent exporting system paths: dataset names look like pool/ds, never absolute paths
	if path == "" || strings.HasPrefix(path, "/") {
		return "", fmt.Errorf("can't export system path: %s", path)
	}
	// store options as JSON
	optJSON, _ := json.Marshal(opts)
	id := uuid.New().String()
	if _, err := s.DB.ExecContext(ctx, `INSERT INTO shares (id, name, path, type, options) VALUES (?, ?, ?, 'nfs', ?)`, id, name, path, string(optJSON)); err != nil {
		return "", err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "nfs.create", ResourceType: "share", ResourceID: name, Success: true, Details: map[string]any{"path": path}})
	}
	// re-render exports
	shares, err := s.ListNFS(ctx)
	if err != nil {
		return id, err
	}
	if s.NFS != nil {
		if err := s.NFS.RenderExports(ctx, shares); err != nil {
			return id, err
		}
		if err := s.NFS.Apply(ctx); err != nil {
			return id, err
		}
	}
	return id, nil
}

// SMB functions

func (s *SharesService) ListSMB(ctx context.Context) ([]domain.Share, error) {
	rows, err := s.DB.QueryContext(ctx, `SELECT id, name, path, type, options FROM shares WHERE type = 'smb'`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var res []domain.Share
	for rows.Next() {
		var id, name, path, typ, options string
		if err := rows.Scan(&id, &name, &path, &typ, &options); err != nil {
			return nil, err
		}
		var config map[string]string
		if options != "" {
			_ = json.Unmarshal([]byte(options), &config)
		}
		res = append(res, domain.Share{ID: domain.UUID(id), Name: name, Path: path, Type: typ, Config: config})
	}
	return res, nil
}

func (s *SharesService) CreateSMB(ctx context.Context, user, role, name, path string, readOnly bool, allowedUsers []string) (string, error) {
	if role != "admin" && role != "operator" {
		return "", ErrForbidden
	}
	// Verify dataset
	var count int
	if err := s.DB.QueryRowContext(ctx, `SELECT COUNT(1) FROM datasets WHERE name = ?`, path).Scan(&count); err != nil {
		return "", err
	}
	if count == 0 {
		return "", fmt.Errorf("path not a known dataset: %s", path)
	}
	// disallow system paths: dataset names look like pool/ds, never absolute paths
	if path == "" || strings.HasPrefix(path, "/") {
		return "", fmt.Errorf("can't export system path: %s", path)
	}
	// store options as JSON (read_only, allowed_users)
	cfg := map[string]string{"read_only": "false"}
	if readOnly {
		cfg["read_only"] = "true"
	}
	if len(allowedUsers) > 0 {
		cfg["allowed_users"] = strings.Join(allowedUsers, " ")
	}
	optJSON, _ := json.Marshal(cfg)
	id := uuid.New().String()
	if _, err := s.DB.ExecContext(ctx, `INSERT INTO shares (id, name, path, type, options) VALUES (?, ?, ?, 'smb', ?)`, id, name, path, string(optJSON)); err != nil {
		return "", err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "smb.create", ResourceType: "share", ResourceID: name, Success: true, Details: map[string]any{"path": path, "read_only": readOnly}})
	}
	// re-render smb.conf and reload
	shares, err := s.ListSMB(ctx)
	if err != nil {
		return id, err
	}
	if s.Samba != nil {
		if err := s.Samba.RenderConf(ctx, shares); err != nil {
			return id, err
		}
		if err := s.Samba.Reload(ctx); err != nil {
			return id, err
		}
	}
	return id, nil
}

func (s *SharesService) DeleteSMB(ctx context.Context, user, role, id string) error {
	if role != "admin" && role != "operator" {
		return ErrForbidden
	}
	if _, err := s.DB.ExecContext(ctx, `DELETE FROM shares WHERE id = ?`, id); err != nil {
		return err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "smb.delete", ResourceType: "share", ResourceID: id, Success: true})
	}
	shares, err := s.ListSMB(ctx)
	if err != nil {
		return err
	}
	if s.Samba != nil {
		if err := s.Samba.RenderConf(ctx, shares); err != nil {
			return err
		}
		if err := s.Samba.Reload(ctx); err != nil {
			return err
		}
	}
	return nil
}

func (s *SharesService) DeleteNFS(ctx context.Context, user, role, id string) error {
	if role != "admin" && role != "operator" {
		return ErrForbidden
	}
	// delete the row (a no-op if the id does not exist)
	if _, err := s.DB.ExecContext(ctx, `DELETE FROM shares WHERE id = ?`, id); err != nil {
		return err
	}
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "nfs.delete", ResourceType: "share", ResourceID: id, Success: true})
	}
	// re-render exports
	shares, err := s.ListNFS(ctx)
	if err != nil {
		return err
	}
	if s.NFS != nil {
		if err := s.NFS.RenderExports(ctx, shares); err != nil {
			return err
		}
		if err := s.NFS.Apply(ctx); err != nil {
			return err
		}
	}
	return nil
}

func (s *SharesService) NFSStatus(ctx context.Context) (string, error) {
	if s.NFS == nil {
		return "unavailable", nil
	}
	return s.NFS.Status(ctx)
}

@@ -4,7 +4,6 @@ import (
	"context"
	"errors"
	"fmt"
	"strings"

	"github.com/example/storage-appliance/internal/audit"
	"github.com/example/storage-appliance/internal/domain"
@@ -49,6 +48,7 @@ func (s *StorageService) CreatePool(ctx context.Context, user string, role strin
	}
	// Create a job to build a pool. For the skeleton, we just create a job entry with type create-pool
	j := domain.Job{Type: "create-pool", Status: "queued", Owner: domain.UUID(user)}
	j.Details = map[string]any{"name": name, "vdevs": vdevs}
	id, err := s.JobRunner.Enqueue(ctx, j)
	// Store details in audit
	if s.Audit != nil {
@@ -64,6 +64,7 @@ func (s *StorageService) Snapshot(ctx context.Context, user, role, dataset, snap
	}
	// call zfs snapshot, but run it as a job; enqueue
	j := domain.Job{Type: "snapshot", Status: "queued", Owner: domain.UUID(user)}
	j.Details = map[string]any{"dataset": dataset, "snap_name": snapName}
	id, err := s.JobRunner.Enqueue(ctx, j)
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "dataset.snapshot.request", ResourceType: "snapshot", ResourceID: fmt.Sprintf("%s@%s", dataset, snapName), Success: err == nil, Details: map[string]any{"dataset": dataset}})
@@ -76,6 +77,7 @@ func (s *StorageService) ScrubStart(ctx context.Context, user, role, pool string
		return "", ErrForbidden
	}
	j := domain.Job{Type: "scrub", Status: "queued", Owner: domain.UUID(user)}
	j.Details = map[string]any{"pool": pool}
	id, err := s.JobRunner.Enqueue(ctx, j)
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "pool.scrub.request", ResourceType: "pool", ResourceID: pool, Success: err == nil})
@@ -93,7 +95,11 @@ func (s *StorageService) CreateDataset(ctx context.Context, user, role, name str
	if role != "admin" && role != "operator" {
		return ErrForbidden
	}
	err := s.ZFS.CreateDataset(ctx, name, props)
	if s.Audit != nil {
		s.Audit.Record(ctx, audit.Event{UserID: user, Action: "dataset.create", ResourceType: "dataset", ResourceID: name, Success: err == nil, Details: map[string]any{"props": props}})
	}
	return err
}

// GetPoolStatus calls the adapter

@@ -5,9 +5,20 @@
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <script src="https://unpkg.com/htmx.org@1.9.2"></script>
  <script>
    // Attach the CSRF token to every HTMX request. Wait for DOMContentLoaded
    // because this script runs in <head>, before document.body exists.
    document.addEventListener('DOMContentLoaded', function() {
      document.body.addEventListener('htmx:configRequest', function(event) {
        const csrfToken = document.querySelector('meta[name="csrf-token"]')?.getAttribute('content');
        if (csrfToken) {
          event.detail.headers['X-CSRF-Token'] = csrfToken;
        }
      });
    });
  </script>
  <title>{{.Title}}</title>
  <link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
  {{if .CSRFToken}}
  <meta name="csrf-token" content="{{.CSRFToken}}">
  {{end}}
</head>
<body class="bg-gray-100">
  <main class="container mx-auto p-4">

32	internal/templates/hx_iscsi_luns.html	Normal file
@@ -0,0 +1,32 @@
{{ define "hx_iscsi_luns" }}
<div>
  <table class="w-full">
    <thead><tr><th>LUN ID</th><th>ZVol</th><th>Size</th><th>Action</th></tr></thead>
    <tbody>
      {{ if . }}
      {{ range . }}
      <tr>
        <td>{{ .lun_id }}</td>
        <td>{{ .zvol }}</td>
        <td>{{ .size }}</td>
        <td>
          <div>
            <form hx-post="/api/iscsi/unmap_lun" hx-include="closest form" style="display:inline-block">
              <input type="hidden" name="id" value="{{ .id }}" />
              <button type="submit">Drain</button>
            </form>
            <form hx-post="/api/iscsi/delete_lun" hx-include="closest form" style="display:inline-block">
              <input type="hidden" name="id" value="{{ .id }}" />
              <input type="checkbox" name="force" id="force-{{ .id }}" value="1" />
              <label for="force-{{ .id }}">Force delete</label>
              <button type="submit">Delete</button>
            </form>
          </div>
        </td>
      </tr>
      {{ end }}
      {{ end }}
    </tbody>
  </table>
</div>
{{ end }}
18	internal/templates/hx_iscsi_target_info.html	Normal file
@@ -0,0 +1,18 @@
{{ define "hx_iscsi_target_info" }}
<div>
  <h4>Initiator Connection</h4>
  <p>IQN: {{ .iqn }}</p>
  <h5>Portals</h5>
  <ul>
    {{ range .portals }}
    <li>{{ .address }}:{{ .port }}</li>
    {{ end }}
  </ul>
  <h5>Allowed Initiators</h5>
  <ul>
    {{ range .initiators }}
    <li>{{ .iqn }}</li>
    {{ end }}
  </ul>
</div>
{{ end }}
19	internal/templates/hx_iscsi_targets.html	Normal file
@@ -0,0 +1,19 @@
{{ define "hx_iscsi_targets" }}
<table class="w-full">
  <thead><tr><th>Name</th><th>IQN</th><th>Action</th></tr></thead>
  <tbody>
    {{ if . }}
    {{ range . }}
    <tr>
      <td>{{ .name }}</td>
      <td>{{ .iqn }}</td>
      <td>
        <button hx-get="/api/iscsi/hx_luns/{{ .id }}">View LUNs</button>
        <button hx-get="/api/iscsi/target/{{ .id }}">Connection Info</button>
      </td>
    </tr>
    {{ end }}
    {{ end }}
  </tbody>
</table>
{{ end }}
internal/templates/hx_monitoring.html (new file, 68 lines)
@@ -0,0 +1,68 @@
{{define "hx_monitoring"}}
<div class="grid grid-cols-1 md:grid-cols-2 gap-6">
  {{range .Groups}}
  <div class="bg-white rounded-lg shadow-md p-6">
    <div class="flex justify-between items-center mb-4">
      <h2 class="text-xl font-semibold">{{.Title}}</h2>
      <button hx-get="/hx/monitoring/group?group={{.Title}}"
              hx-target="closest .bg-white"
              hx-swap="outerHTML"
              class="text-blue-600 hover:text-blue-800 text-sm">
        🔄 Refresh
      </button>
    </div>

    {{if .Errors}}
    <div class="bg-yellow-50 border-l-4 border-yellow-400 p-4 mb-4">
      <div class="flex">
        <div class="ml-3">
          <p class="text-sm text-yellow-700"><strong>Warnings:</strong></p>
          <ul class="list-disc list-inside mt-1 text-sm text-yellow-700">
            {{range .Errors}}
            <li>{{.}}</li>
            {{end}}
          </ul>
        </div>
      </div>
    </div>
    {{end}}

    <div class="space-y-3">
      {{range .Metrics}}
      <div class="flex justify-between items-center p-3 {{if eq .Status "error"}}bg-red-50{{else if eq .Status "warning"}}bg-yellow-50{{else}}bg-gray-50{{end}} rounded">
        <div class="flex-1">
          <div class="font-medium text-sm">{{.Name}}</div>
          <div class="text-xs text-gray-500 mt-1">{{.Timestamp.Format "15:04:05"}}</div>
        </div>
        <div class="flex items-center space-x-2">
          <span class="text-lg font-semibold">{{.Value}}</span>
          {{if eq .Status "error"}}
          <span class="text-red-600">⚠️</span>
          {{else if eq .Status "warning"}}
          <span class="text-yellow-600">⚡</span>
          {{else}}
          <span class="text-green-600">✓</span>
          {{end}}
        </div>
      </div>
      {{else}}
      <div class="text-center text-gray-500 py-4">No metrics available</div>
      {{end}}
    </div>
  </div>
  {{else}}
  <div class="col-span-2 bg-yellow-50 border-l-4 border-yellow-400 p-4">
    <div class="flex">
      <div class="ml-3">
        <p class="text-sm text-yellow-700">
          <strong>Warning:</strong> No monitoring data available. Some collectors may have failed.
        </p>
      </div>
    </div>
  </div>
  {{end}}
</div>
{{end}}
internal/templates/hx_monitoring_group.html (new file, 54 lines)
@@ -0,0 +1,54 @@
{{define "hx_monitoring_group"}}
<div class="bg-white rounded-lg shadow-md p-6">
  <div class="flex justify-between items-center mb-4">
    <h2 class="text-xl font-semibold">{{.Group.Title}}</h2>
    <button hx-get="/hx/monitoring/group?group={{.Group.Title}}"
            hx-target="closest .bg-white"
            hx-swap="outerHTML"
            class="text-blue-600 hover:text-blue-800 text-sm">
      🔄 Refresh
    </button>
  </div>

  {{if .Group.Errors}}
  <div class="bg-yellow-50 border-l-4 border-yellow-400 p-4 mb-4">
    <div class="flex">
      <div class="ml-3">
        <p class="text-sm text-yellow-700"><strong>Warnings:</strong></p>
        <ul class="list-disc list-inside mt-1 text-sm text-yellow-700">
          {{range .Group.Errors}}
          <li>{{.}}</li>
          {{end}}
        </ul>
      </div>
    </div>
  </div>
  {{end}}

  <div class="space-y-3">
    {{range .Group.Metrics}}
    <div class="flex justify-between items-center p-3 {{if eq .Status "error"}}bg-red-50{{else if eq .Status "warning"}}bg-yellow-50{{else}}bg-gray-50{{end}} rounded">
      <div class="flex-1">
        <div class="font-medium text-sm">{{.Name}}</div>
        <div class="text-xs text-gray-500 mt-1">{{.Timestamp.Format "15:04:05"}}</div>
      </div>
      <div class="flex items-center space-x-2">
        <span class="text-lg font-semibold">{{.Value}}</span>
        {{if eq .Status "error"}}
        <span class="text-red-600">⚠️</span>
        {{else if eq .Status "warning"}}
        <span class="text-yellow-600">⚡</span>
        {{else}}
        <span class="text-green-600">✓</span>
        {{end}}
      </div>
    </div>
    {{else}}
    <div class="text-center text-gray-500 py-4">No metrics available</div>
    {{end}}
  </div>
</div>
{{end}}
internal/templates/hx_nfs_shares.html (new file, 23 lines)
@@ -0,0 +1,23 @@
{{define "hx_nfs_shares"}}
<div>
  <table class="min-w-full bg-white">
    <thead>
      <tr><th>Name</th><th>Path</th><th>Type</th><th></th></tr>
    </thead>
    <tbody>
      {{range .}}
      <tr class="border-t"><td>{{.Name}}</td><td>{{.Path}}</td><td>{{.Type}}</td>
        <td>
          <form hx-post="/shares/nfs/delete" hx-swap="outerHTML" class="inline">
            <input type="hidden" name="id" value="{{.ID}}" />
            <button class="px-2 py-1 bg-red-500 text-white rounded text-xs">Delete</button>
          </form>
        </td>
      </tr>
      {{else}}
      <tr><td colspan="4">No NFS shares</td></tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}
internal/templates/hx_roles.html (new file, 41 lines)
@@ -0,0 +1,41 @@
{{define "hx_roles"}}
<div class="bg-white rounded-lg shadow-md overflow-hidden">
  <table class="min-w-full divide-y divide-gray-200">
    <thead class="bg-gray-50">
      <tr>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Role Name</th>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Description</th>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Permissions</th>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Actions</th>
      </tr>
    </thead>
    <tbody class="bg-white divide-y divide-gray-200">
      {{range .Roles}}
      <tr>
        <td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">{{.Name}}</td>
        <td class="px-6 py-4 text-sm text-gray-500">{{.Description}}</td>
        <td class="px-6 py-4 text-sm text-gray-500">
          {{range .Permissions}}
          <span class="inline-block bg-green-100 text-green-800 text-xs px-2 py-1 rounded mr-1 mb-1">{{.Name}}</span>
          {{else}}
          <span class="text-gray-400">No permissions</span>
          {{end}}
        </td>
        <td class="px-6 py-4 whitespace-nowrap text-sm">
          <button hx-post="/admin/roles/{{.ID}}/delete"
                  hx-confirm="Are you sure you want to delete role {{.Name}}?"
                  hx-target="#roles-list"
                  hx-swap="outerHTML"
                  class="text-red-600 hover:text-red-900">Delete</button>
        </td>
      </tr>
      {{else}}
      <tr>
        <td colspan="4" class="px-6 py-4 text-center text-gray-500">No roles found</td>
      </tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}
internal/templates/hx_smb_shares.html (new file, 23 lines)
@@ -0,0 +1,23 @@
{{define "hx_smb_shares"}}
<div>
  <table class="min-w-full bg-white">
    <thead>
      <tr><th>Name</th><th>Path</th><th>Type</th><th>Options</th><th></th></tr>
    </thead>
    <tbody>
      {{range .}}
      <tr class="border-t"><td>{{.Name}}</td><td>{{.Path}}</td><td>{{.Type}}</td><td>{{range $k,$v := .Config}}{{$k}}={{$v}} {{end}}</td>
        <td>
          <form hx-post="/shares/smb/delete" hx-swap="outerHTML" class="inline">
            <input type="hidden" name="id" value="{{.ID}}" />
            <button class="px-2 py-1 bg-red-500 text-white rounded text-xs">Delete</button>
          </form>
        </td>
      </tr>
      {{else}}
      <tr><td colspan="5">No SMB shares</td></tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}
internal/templates/hx_users.html (new file, 41 lines)
@@ -0,0 +1,41 @@
{{define "hx_users"}}
<div class="bg-white rounded-lg shadow-md overflow-hidden">
  <table class="min-w-full divide-y divide-gray-200">
    <thead class="bg-gray-50">
      <tr>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Username</th>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Roles</th>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Created</th>
        <th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider">Actions</th>
      </tr>
    </thead>
    <tbody class="bg-white divide-y divide-gray-200">
      {{range .Users}}
      <tr>
        <td class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900">{{.Username}}</td>
        <td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">
          {{range .Roles}}
          <span class="inline-block bg-blue-100 text-blue-800 text-xs px-2 py-1 rounded mr-1">{{.Name}}</span>
          {{else}}
          <span class="text-gray-400">No roles</span>
          {{end}}
        </td>
        <td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">{{.CreatedAt}}</td>
        <td class="px-6 py-4 whitespace-nowrap text-sm">
          <button hx-post="/admin/users/{{.ID}}/delete"
                  hx-confirm="Are you sure you want to delete user {{.Username}}?"
                  hx-target="#users-list"
                  hx-swap="outerHTML"
                  class="text-red-600 hover:text-red-900">Delete</button>
        </td>
      </tr>
      {{else}}
      <tr>
        <td colspan="4" class="px-6 py-4 text-center text-gray-500">No users found</td>
      </tr>
      {{end}}
    </tbody>
  </table>
</div>
{{end}}
internal/templates/iscsi.html (new file, 23 lines)
@@ -0,0 +1,23 @@
{{ define "iscsi" }}
<div class="p-4">
  <h2 class="text-xl">iSCSI Targets</h2>
  <div hx-get="/api/iscsi/hx_targets" hx-trigger="load" hx-swap="outerHTML"></div>
  <div class="mt-4">
    <h3>Create Target</h3>
    <form hx-post="/api/iscsi/create_target">
      <label>Name: <input type="text" name="name"/></label>
      <label>IQN: <input type="text" name="iqn"/></label>
      <button type="submit">Create Target</button>
    </form>
  </div>
  <div class="mt-4">
    <h3>Create LUN</h3>
    <form hx-post="/api/iscsi/create_lun">
      <label>Target ID: <input type="text" name="target_id"/></label>
      <label>ZVol path: <input type="text" name="zvol"/></label>
      <label>Size (e.g. 10G): <input type="text" name="size"/></label>
      <button type="submit">Create LUN</button>
    </form>
  </div>
</div>
{{ end }}
internal/templates/login.html (new file, 34 lines)
@@ -0,0 +1,34 @@
{{define "login"}}
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <script src="https://unpkg.com/htmx.org@1.9.2"></script>
  <title>Login - Storage Appliance</title>
  <link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
</head>
<body class="bg-gray-100 flex items-center justify-center min-h-screen">
  <div class="bg-white p-8 rounded-lg shadow-md w-full max-w-md">
    <h1 class="text-2xl font-bold mb-6 text-center">Storage Appliance</h1>
    <form hx-post="/login" hx-target="#error-message" hx-swap="innerHTML">
      <div class="mb-4">
        <label for="username" class="block text-sm font-medium text-gray-700 mb-2">Username</label>
        <input type="text" id="username" name="username" required
               class="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500">
      </div>
      <div class="mb-6">
        <label for="password" class="block text-sm font-medium text-gray-700 mb-2">Password</label>
        <input type="password" id="password" name="password" required
               class="w-full px-3 py-2 border border-gray-300 rounded-md focus:outline-none focus:ring-2 focus:ring-blue-500">
      </div>
      <div id="error-message" class="mb-4"></div>
      <button type="submit" class="w-full bg-blue-600 text-white py-2 px-4 rounded-md hover:bg-blue-700 focus:outline-none focus:ring-2 focus:ring-blue-500">
        Login
      </button>
    </form>
  </div>
</body>
</html>
{{end}}
internal/templates/monitoring.html (new file, 16 lines)
@@ -0,0 +1,16 @@
{{define "monitoring"}}
{{template "base" .}}
{{define "content"}}
<div class="container mx-auto p-4">
  <div class="flex justify-between items-center mb-6">
    <h1 class="text-3xl font-bold">Monitoring Dashboard</h1>
    <a href="/dashboard" class="text-blue-600 hover:underline">← Back to Dashboard</a>
  </div>

  <div id="monitoring-content" hx-get="/hx/monitoring" hx-trigger="load" hx-swap="innerHTML">
    <div class="text-center py-8 text-gray-500">Loading metrics...</div>
  </div>
</div>
{{end}}
{{end}}
internal/templates/roles.html (new file, 37 lines)
@@ -0,0 +1,37 @@
{{define "roles"}}
{{template "base" .}}
{{define "content"}}
<div class="container mx-auto p-4">
  <div class="flex justify-between items-center mb-6">
    <h1 class="text-3xl font-bold">Role Management</h1>
    <a href="/dashboard" class="text-blue-600 hover:underline">← Back to Dashboard</a>
  </div>

  <div class="bg-white rounded-lg shadow-md p-6 mb-6">
    <h2 class="text-xl font-semibold mb-4">Create New Role</h2>
    <form hx-post="/admin/roles/create" hx-target="#roles-list" hx-swap="outerHTML" hx-trigger="submit" hx-on::after-request="this.reset()">
      <div class="grid grid-cols-2 gap-4 mb-4">
        <div>
          <label for="name" class="block text-sm font-medium text-gray-700 mb-1">Role Name</label>
          <input type="text" id="name" name="name" required
                 class="w-full px-3 py-2 border border-gray-300 rounded-md">
        </div>
        <div>
          <label for="description" class="block text-sm font-medium text-gray-700 mb-1">Description</label>
          <input type="text" id="description" name="description"
                 class="w-full px-3 py-2 border border-gray-300 rounded-md">
        </div>
      </div>
      <button type="submit" class="bg-blue-600 text-white px-4 py-2 rounded-md hover:bg-blue-700">
        Create Role
      </button>
    </form>
  </div>

  <div id="roles-list" hx-get="/admin/hx/roles" hx-trigger="load">
    <div class="text-center py-8 text-gray-500">Loading roles...</div>
  </div>
</div>
{{end}}
{{end}}
internal/templates/shares_nfs.html (new file, 22 lines)
@@ -0,0 +1,22 @@
{{define "content"}}
<div class="bg-white rounded shadow p-4">
  <h1 class="text-2xl font-bold">NFS Shares</h1>
  <div class="mt-4">
    <button class="px-3 py-2 bg-blue-500 text-white rounded" hx-get="/hx/shares/nfs" hx-swap="outerHTML" hx-target="#nfs-shares">Refresh</button>
  </div>
  <div id="nfs-shares" class="mt-4">
    {{template "hx_nfs_shares" .}}
  </div>
  <div class="mt-6">
    <h2 class="text-lg font-semibold">Create NFS Share</h2>
    <form hx-post="/shares/nfs/create" hx-swap="afterbegin" class="mt-2">
      <div class="flex space-x-2">
        <input name="name" placeholder="share name" class="border rounded p-1" />
        <input name="path" placeholder="dataset (e.g. tank/ds)" class="border rounded p-1" />
        <input name="options" placeholder='{"clients":"*(rw)"}' class="border rounded p-1 w-64" />
        <button class="px-3 py-1 bg-green-500 text-white rounded" type="submit">Create</button>
      </div>
    </form>
  </div>
</div>
{{end}}
internal/templates/shares_smb.html (new file, 23 lines)
@@ -0,0 +1,23 @@
{{define "content"}}
<div class="bg-white rounded shadow p-4">
  <h1 class="text-2xl font-bold">SMB Shares</h1>
  <div class="mt-4">
    <button class="px-3 py-2 bg-blue-500 text-white rounded" hx-get="/hx/shares/smb" hx-swap="outerHTML" hx-target="#smb-shares">Refresh</button>
  </div>
  <div id="smb-shares" class="mt-4">
    {{template "hx_smb_shares" .}}
  </div>
  <div class="mt-6">
    <h2 class="text-lg font-semibold">Create SMB Share</h2>
    <form hx-post="/shares/smb/create" hx-swap="afterbegin" class="mt-2">
      <div class="flex space-x-2">
        <input name="name" placeholder="share name" class="border rounded p-1" />
        <input name="path" placeholder="dataset (e.g. tank/ds)" class="border rounded p-1" />
        <input name="allowed_users" placeholder="user1,user2" class="border rounded p-1" />
        <label class="text-sm">Read only <input type="checkbox" name="read_only" value="1" /></label>
        <button class="px-3 py-1 bg-green-500 text-white rounded" type="submit">Create</button>
      </div>
    </form>
  </div>
</div>
{{end}}
@@ -5,7 +5,7 @@
 <button class="px-3 py-2 bg-blue-500 text-white rounded" hx-get="/hx/pools" hx-swap="outerHTML" hx-target="#pools">Refresh pools</button>
 </div>
 <div id="pools" class="mt-4">
-{{template "hx_pools.html" .}}
+{{template "hx_pools" .}}
 </div>
 <div class="mt-6">
 <h2 class="text-lg font-semibold">Create Pool</h2>
internal/templates/users.html (new file, 37 lines)
@@ -0,0 +1,37 @@
{{define "users"}}
{{template "base" .}}
{{define "content"}}
<div class="container mx-auto p-4">
  <div class="flex justify-between items-center mb-6">
    <h1 class="text-3xl font-bold">User Management</h1>
    <a href="/dashboard" class="text-blue-600 hover:underline">← Back to Dashboard</a>
  </div>

  <div class="bg-white rounded-lg shadow-md p-6 mb-6">
    <h2 class="text-xl font-semibold mb-4">Create New User</h2>
    <form hx-post="/admin/users/create" hx-target="#users-list" hx-swap="outerHTML" hx-trigger="submit" hx-on::after-request="this.reset()">
      <div class="grid grid-cols-2 gap-4 mb-4">
        <div>
          <label for="username" class="block text-sm font-medium text-gray-700 mb-1">Username</label>
          <input type="text" id="username" name="username" required
                 class="w-full px-3 py-2 border border-gray-300 rounded-md">
        </div>
        <div>
          <label for="password" class="block text-sm font-medium text-gray-700 mb-1">Password</label>
          <input type="password" id="password" name="password" required
                 class="w-full px-3 py-2 border border-gray-300 rounded-md">
        </div>
      </div>
      <button type="submit" class="bg-blue-600 text-white px-4 py-2 rounded-md hover:bg-blue-700">
        Create User
      </button>
    </form>
  </div>

  <div id="users-list" hx-get="/admin/hx/users" hx-trigger="load">
    <div class="text-center py-8 text-gray-500">Loading users...</div>
  </div>
</div>
{{end}}
{{end}}
migrations/0003_jobs_details.sql (new file, 2 lines)
@@ -0,0 +1,2 @@
-- 0003_jobs_details.sql
ALTER TABLE jobs ADD COLUMN details TEXT;
migrations/0004_shares.sql (new file, 9 lines)
@@ -0,0 +1,9 @@
-- 0004_shares.sql
CREATE TABLE IF NOT EXISTS shares (
  id TEXT PRIMARY KEY,
  name TEXT,
  path TEXT,
  type TEXT,
  options TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
migrations/0006_minio.sql (new file, 17 lines)
@@ -0,0 +1,17 @@
-- 0006_minio.sql
CREATE TABLE IF NOT EXISTS object_storage (
  id TEXT PRIMARY KEY,
  name TEXT,
  access_key TEXT,
  secret_key TEXT,
  data_path TEXT,
  port INTEGER,
  tls INTEGER DEFAULT 0,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS buckets (
  id TEXT PRIMARY KEY,
  name TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
migrations/0007_iscsi.sql (new file, 36 lines)
@@ -0,0 +1,36 @@
-- 0007_iscsi.sql
CREATE TABLE IF NOT EXISTS iscsi_targets (
  id TEXT PRIMARY KEY,
  iqn TEXT NOT NULL UNIQUE,
  name TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS iscsi_portals (
  id TEXT PRIMARY KEY,
  target_id TEXT NOT NULL,
  address TEXT NOT NULL,
  port INTEGER DEFAULT 3260,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY(target_id) REFERENCES iscsi_targets(id) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS iscsi_initiators (
  id TEXT PRIMARY KEY,
  target_id TEXT NOT NULL,
  initiator_iqn TEXT NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY(target_id) REFERENCES iscsi_targets(id) ON DELETE CASCADE
);

CREATE TABLE IF NOT EXISTS iscsi_luns (
  id TEXT PRIMARY KEY,
  target_id TEXT NOT NULL,
  lun_id INTEGER NOT NULL,
  zvol TEXT NOT NULL,
  size INTEGER,
  blocksize INTEGER,
  mapped INTEGER DEFAULT 0,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY(target_id) REFERENCES iscsi_targets(id) ON DELETE CASCADE
);
migrations/0008_auth_rbac.sql (new file, 54 lines)
@@ -0,0 +1,54 @@
-- 0008_auth_rbac.sql
-- Enhanced users table (if not already exists, will be created by migrations.go)
-- Roles table
CREATE TABLE IF NOT EXISTS roles (
  id TEXT PRIMARY KEY,
  name TEXT NOT NULL UNIQUE,
  description TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

-- Permissions table
CREATE TABLE IF NOT EXISTS permissions (
  id TEXT PRIMARY KEY,
  name TEXT NOT NULL UNIQUE,
  description TEXT,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

-- Many-to-many: roles to permissions
CREATE TABLE IF NOT EXISTS role_permissions (
  role_id TEXT NOT NULL,
  permission_id TEXT NOT NULL,
  PRIMARY KEY (role_id, permission_id),
  FOREIGN KEY (role_id) REFERENCES roles(id) ON DELETE CASCADE,
  FOREIGN KEY (permission_id) REFERENCES permissions(id) ON DELETE CASCADE
);

-- Many-to-many: users to roles
CREATE TABLE IF NOT EXISTS user_roles (
  user_id TEXT NOT NULL,
  role_id TEXT NOT NULL,
  PRIMARY KEY (user_id, role_id),
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE,
  FOREIGN KEY (role_id) REFERENCES roles(id) ON DELETE CASCADE
);

-- Sessions table for authentication
CREATE TABLE IF NOT EXISTS sessions (
  id TEXT PRIMARY KEY,
  user_id TEXT NOT NULL,
  token TEXT NOT NULL UNIQUE,
  expires_at DATETIME NOT NULL,
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);

CREATE INDEX IF NOT EXISTS idx_sessions_token ON sessions(token);
CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON sessions(user_id);
CREATE INDEX IF NOT EXISTS idx_sessions_expires_at ON sessions(expires_at);

-- Enhanced audit_events table (add missing columns if they don't exist)
-- Note: SQLite doesn't support ALTER TABLE ADD COLUMN IF NOT EXISTS easily,
-- so we'll handle this in the migration code
packaging/DEBIAN/control (new executable file, 13 lines)
@@ -0,0 +1,13 @@
Package: adastra-storage
Version: 1.0.0
Section: admin
Priority: optional
Architecture: amd64
Depends: golang-go (>= 1.21), zfsutils-linux, smartmontools, nfs-kernel-server, samba, targetcli-fb, minio
Maintainer: Adastra Storage Team <admin@adastra-storage.local>
Description: Adastra Storage Appliance Management System
 A comprehensive storage appliance management system providing
 ZFS pool management, NFS/SMB shares, iSCSI targets, object storage,
 and monitoring capabilities with a modern web interface.
Homepage: https://github.com/example/storage-appliance
packaging/DEBIAN/postinst (new executable file, 47 lines)
@@ -0,0 +1,47 @@
#!/bin/bash
set -e

# Post-installation script for adastra-storage

INSTALL_DIR="/opt/adastra-storage"
SERVICE_USER="adastra"
SERVICE_GROUP="adastra"

echo "Adastra Storage: Post-installation setup..."

# Create service user if it doesn't exist
if ! id "$SERVICE_USER" &>/dev/null; then
    echo "Creating service user: $SERVICE_USER"
    useradd -r -s /bin/false -d "$INSTALL_DIR" "$SERVICE_USER" || true
fi

# Set ownership
chown -R "$SERVICE_USER:$SERVICE_GROUP" "$INSTALL_DIR" || true

# Create data directory
mkdir -p "$INSTALL_DIR/data"
chown "$SERVICE_USER:$SERVICE_GROUP" "$INSTALL_DIR/data"

# Enable and start systemd service
if systemctl is-enabled adastra-storage.service >/dev/null 2>&1; then
    echo "Service already enabled"
else
    systemctl daemon-reload
    systemctl enable adastra-storage.service
    echo "Service enabled. Start with: systemctl start adastra-storage"
fi

# Set permissions for ZFS commands (if needed)
# Note: Service user may need sudo access or be in appropriate groups
usermod -aG disk "$SERVICE_USER" || true

echo "Adastra Storage installation complete!"
echo "Default admin credentials: username=admin, password=admin"
echo "Please change the default password after first login!"
echo ""
echo "Start the service: systemctl start adastra-storage"
echo "Check status: systemctl status adastra-storage"
echo "View logs: journalctl -u adastra-storage -f"

exit 0
packaging/DEBIAN/postrm (new executable file, 25 lines)
@@ -0,0 +1,25 @@
#!/bin/bash
set -e

# Post-removal script for adastra-storage

INSTALL_DIR="/opt/adastra-storage"
SERVICE_USER="adastra"

echo "Adastra Storage: Post-removal cleanup..."

# Remove service user (optional - comment out if you want to keep the user)
# if id "$SERVICE_USER" &>/dev/null; then
#     echo "Removing service user: $SERVICE_USER"
#     userdel "$SERVICE_USER" || true
# fi

# Note: We don't remove /opt/adastra-storage by default
# to preserve data. Use the uninstaller script for complete removal.

echo "Adastra Storage removal complete!"
echo "Note: Data directory at $INSTALL_DIR/data has been preserved."
echo "To completely remove, run: $INSTALL_DIR/uninstall.sh"

exit 0
packaging/DEBIAN/prerm (new executable file, 22 lines)
@@ -0,0 +1,22 @@
#!/bin/bash
set -e

# Pre-removal script for adastra-storage

echo "Adastra Storage: Pre-removal cleanup..."

# Stop and disable service
if systemctl is-active adastra-storage.service >/dev/null 2>&1; then
    echo "Stopping adastra-storage service..."
    systemctl stop adastra-storage.service
fi

if systemctl is-enabled adastra-storage.service >/dev/null 2>&1; then
    echo "Disabling adastra-storage service..."
    systemctl disable adastra-storage.service
fi

systemctl daemon-reload

exit 0
packaging/INSTALL.md (new file, 77 lines)
@@ -0,0 +1,77 @@
# Installation Guide

## Quick Installation

### Using the Installation Script

```bash
sudo bash packaging/install.sh
sudo systemctl start adastra-storage
sudo systemctl enable adastra-storage
```

### Using Debian Package

```bash
cd packaging
sudo ./build-deb.sh
sudo dpkg -i ../adastra-storage_1.0.0_amd64.deb
sudo apt-get install -f
sudo systemctl start adastra-storage
```

## Post-Installation

1. Access the web interface: http://localhost:8080
2. Login with default credentials:
   - Username: `admin`
   - Password: `admin`
3. **IMPORTANT**: Change the default password immediately!

## Service Management

```bash
# Start
sudo systemctl start adastra-storage

# Stop
sudo systemctl stop adastra-storage

# Restart
sudo systemctl restart adastra-storage

# Status
sudo systemctl status adastra-storage

# Logs
sudo journalctl -u adastra-storage -f
```

## Uninstallation

```bash
sudo /opt/adastra-storage/uninstall.sh
```

## File Locations

- Installation: `/opt/adastra-storage`
- Database: `/opt/adastra-storage/data/appliance.db`
- Service file: `/etc/systemd/system/adastra-storage.service`
- Logs: `journalctl -u adastra-storage`

## Dependencies

The installer automatically installs:

- golang-go
- zfsutils-linux
- smartmontools
- nfs-kernel-server
- samba
- targetcli-fb
- minio

## Troubleshooting

See the main README.md for detailed troubleshooting information.
packaging/adastra-storage.service (new file, 37 lines)
@@ -0,0 +1,37 @@
[Unit]
Description=Adastra Storage Appliance Management System
Documentation=https://github.com/example/storage-appliance
After=network.target zfs-import.service
Wants=network.target

[Service]
Type=simple
User=adastra
Group=adastra
WorkingDirectory=/opt/adastra-storage
ExecStart=/opt/adastra-storage/bin/adastra-storage
Restart=on-failure
RestartSec=5s
StandardOutput=journal
StandardError=journal
SyslogIdentifier=adastra-storage

# Security settings
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/adastra-storage/data /opt/adastra-storage/logs

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Environment
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
Environment="INSTALL_DIR=/opt/adastra-storage"
Environment="DATA_DIR=/opt/adastra-storage/data"

[Install]
WantedBy=multi-user.target
packaging/build-deb.sh (new executable file, 70 lines)
@@ -0,0 +1,70 @@
```bash
#!/bin/bash
set -e

# Build Debian package script

VERSION="1.0.0"
PACKAGE_NAME="adastra-storage"
BUILD_DIR="$(pwd)"
PACKAGE_DIR="$BUILD_DIR/packaging"
DEB_DIR="$BUILD_DIR/deb-build"
ARCH="amd64"

echo "Building Debian package for $PACKAGE_NAME version $VERSION"

# Clean previous build
rm -rf "$DEB_DIR"

# Create package structure
mkdir -p "$DEB_DIR/$PACKAGE_NAME/DEBIAN"
mkdir -p "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/bin"
mkdir -p "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/templates"
mkdir -p "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/migrations"
mkdir -p "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/data"
mkdir -p "$DEB_DIR/$PACKAGE_NAME/etc/systemd/system"

# Copy control files
cp "$PACKAGE_DIR/DEBIAN/control" "$DEB_DIR/$PACKAGE_NAME/DEBIAN/"
cp "$PACKAGE_DIR/DEBIAN/postinst" "$DEB_DIR/$PACKAGE_NAME/DEBIAN/"
cp "$PACKAGE_DIR/DEBIAN/prerm" "$DEB_DIR/$PACKAGE_NAME/DEBIAN/"
cp "$PACKAGE_DIR/DEBIAN/postrm" "$DEB_DIR/$PACKAGE_NAME/DEBIAN/"
chmod +x "$DEB_DIR/$PACKAGE_NAME/DEBIAN/postinst"
chmod +x "$DEB_DIR/$PACKAGE_NAME/DEBIAN/prerm"
chmod +x "$DEB_DIR/$PACKAGE_NAME/DEBIAN/postrm"

# Build the application
echo "Building application binary..."
cd "$BUILD_DIR"
go build -o "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/bin/adastra-storage" ./cmd/appliance

# Copy application files
echo "Copying application files..."
cp -r "$BUILD_DIR/internal/templates"/* "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/templates/"
cp -r "$BUILD_DIR/migrations"/* "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/migrations/"
cp "$PACKAGE_DIR/uninstall.sh" "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/uninstall.sh"
chmod +x "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/uninstall.sh"

# Copy systemd service
cp "$PACKAGE_DIR/adastra-storage.service" "$DEB_DIR/$PACKAGE_NAME/etc/systemd/system/"

# Set permissions
chmod 755 "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/bin/adastra-storage"
chmod 755 "$DEB_DIR/$PACKAGE_NAME/opt/adastra-storage/data"

# Build the package
echo "Building .deb package..."
cd "$DEB_DIR"
dpkg-deb --build "$PACKAGE_NAME" "${PACKAGE_NAME}_${VERSION}_${ARCH}.deb"

# Move to build directory
mv "${PACKAGE_NAME}_${VERSION}_${ARCH}.deb" "$BUILD_DIR/"

echo ""
echo "Package built successfully:"
echo "  $BUILD_DIR/${PACKAGE_NAME}_${VERSION}_${ARCH}.deb"
echo ""
echo "To install:"
echo "  sudo dpkg -i ${PACKAGE_NAME}_${VERSION}_${ARCH}.deb"
echo "  sudo apt-get install -f  # Install dependencies if needed"
echo ""
```
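The output filename follows the standard Debian `<name>_<version>_<arch>.deb` convention. A tiny helper (hypothetical, not part of the script) makes the composition explicit:

```shell
#!/bin/bash
# Hypothetical helper mirroring build-deb.sh's package naming:
#   <package>_<version>_<arch>.deb
deb_filename() {
  printf '%s_%s_%s.deb\n' "$1" "$2" "$3"
}

deb_filename adastra-storage 1.0.0 amd64   # -> adastra-storage_1.0.0_amd64.deb
```

Before installing, the built package can be inspected with `dpkg-deb --info <file>.deb` (control metadata) and `dpkg-deb --contents <file>.deb` (file list).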
packaging/install.sh (new executable file, 173 lines)
@@ -0,0 +1,173 @@
```bash
#!/bin/bash
set -e

# Adastra Storage Installation Script for Ubuntu 24.04
# This script builds and installs the Adastra Storage appliance

INSTALL_DIR="/opt/adastra-storage"
SERVICE_USER="adastra"
SERVICE_GROUP="adastra"
BUILD_DIR="$(pwd)"
PACKAGE_DIR="$BUILD_DIR/packaging"

echo "=========================================="
echo "Adastra Storage Installation Script"
echo "=========================================="
echo ""

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "Please run as root (use sudo)"
    exit 1
fi

# Check Ubuntu version
if [ ! -f /etc/os-release ]; then
    echo "Error: Cannot determine OS version"
    exit 1
fi

. /etc/os-release
if [ "$ID" != "ubuntu" ] || [ "$VERSION_ID" != "24.04" ]; then
    echo "Warning: This installer is designed for Ubuntu 24.04"
    echo "Detected: $ID $VERSION_ID"
    read -p "Continue anyway? (y/N) " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        exit 1
    fi
fi

echo "Step 1: Installing system dependencies..."
apt-get update
apt-get install -y \
    golang-go \
    zfsutils-linux \
    smartmontools \
    nfs-kernel-server \
    samba \
    targetcli-fb \
    build-essential \
    git \
    curl \
    wget

# Install MinIO (if not already installed)
if ! command -v minio &> /dev/null; then
    echo "Installing MinIO..."
    wget -q https://dl.min.io/server/minio/release/linux-amd64/minio -O /usr/local/bin/minio
    chmod +x /usr/local/bin/minio
fi

echo ""
echo "Step 2: Building Adastra Storage application..."

# Build the application
cd "$BUILD_DIR"
if [ ! -f go.mod ]; then
    echo "Error: go.mod not found. Are you in the project root?"
    exit 1
fi

# Build binary
echo "Building binary..."
go build -o "$BUILD_DIR/appliance" ./cmd/appliance

if [ ! -f "$BUILD_DIR/appliance" ]; then
    echo "Error: Build failed"
    exit 1
fi

echo ""
echo "Step 3: Creating installation directory structure..."

# Create directories
mkdir -p "$INSTALL_DIR/bin"
mkdir -p "$INSTALL_DIR/data"
mkdir -p "$INSTALL_DIR/templates"
mkdir -p "$INSTALL_DIR/migrations"
mkdir -p "$INSTALL_DIR/logs"
mkdir -p /etc/systemd/system

# Create service user if it doesn't exist
if ! id "$SERVICE_USER" &>/dev/null; then
    echo "Creating service user: $SERVICE_USER"
    useradd -r -s /bin/false -d "$INSTALL_DIR" "$SERVICE_USER"
fi

echo ""
echo "Step 4: Installing application files..."

# Copy binary
cp "$BUILD_DIR/appliance" "$INSTALL_DIR/bin/adastra-storage"
chmod +x "$INSTALL_DIR/bin/adastra-storage"

# Copy templates
cp -r "$BUILD_DIR/internal/templates"/* "$INSTALL_DIR/templates/"

# Copy migrations
cp -r "$BUILD_DIR/migrations"/* "$INSTALL_DIR/migrations/"

# Copy uninstaller
cp "$PACKAGE_DIR/uninstall.sh" "$INSTALL_DIR/uninstall.sh"
chmod +x "$INSTALL_DIR/uninstall.sh"

# Set ownership
chown -R "$SERVICE_USER:$SERVICE_GROUP" "$INSTALL_DIR"

echo ""
echo "Step 5: Installing systemd service..."

# Install systemd service
cp "$PACKAGE_DIR/adastra-storage.service" /etc/systemd/system/
systemctl daemon-reload

# Add service user to necessary groups
usermod -aG disk "$SERVICE_USER" || true

echo ""
echo "Step 6: Configuring service..."

# Create environment file (if needed)
if [ ! -f "$INSTALL_DIR/.env" ]; then
    cat > "$INSTALL_DIR/.env" <<EOF
# Adastra Storage Configuration
INSTALL_DIR=$INSTALL_DIR
DATA_DIR=$INSTALL_DIR/data
LOG_DIR=$INSTALL_DIR/logs
PORT=8080
EOF
    chown "$SERVICE_USER:$SERVICE_GROUP" "$INSTALL_DIR/.env"
    chmod 600 "$INSTALL_DIR/.env"
fi

echo ""
echo "=========================================="
echo "Installation Complete!"
echo "=========================================="
echo ""
echo "Installation directory: $INSTALL_DIR"
echo "Data directory: $INSTALL_DIR/data"
echo "Service user: $SERVICE_USER"
echo ""
echo "To start the service:"
echo "  systemctl start adastra-storage"
echo ""
echo "To enable on boot:"
echo "  systemctl enable adastra-storage"
echo ""
echo "To check status:"
echo "  systemctl status adastra-storage"
echo ""
echo "To view logs:"
echo "  journalctl -u adastra-storage -f"
echo ""
echo "Default admin credentials:"
echo "  Username: admin"
echo "  Password: admin"
echo ""
echo "⚠️  IMPORTANT: Change the default password after first login!"
echo ""
echo "Access the web interface at: http://localhost:8080"
echo ""
```
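The version gate in the installer sources `/etc/os-release` and compares the `ID` and `VERSION_ID` fields. The same logic can be factored into a function that takes the release file as a parameter, shown here as a sketch so it can be exercised against a sample file instead of the real `/etc/os-release`:

```shell
#!/bin/bash
# Sketch of install.sh's OS check; the release file is a parameter for testability.
is_supported_os() {
  local release_file="${1:-/etc/os-release}"
  [ -f "$release_file" ] || return 1
  # os-release is shell-parseable KEY=value pairs, so it can be sourced directly.
  . "$release_file"
  [ "$ID" = "ubuntu" ] && [ "$VERSION_ID" = "24.04" ]
}
```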
packaging/uninstall.sh (new executable file, 97 lines)
@@ -0,0 +1,97 @@
```bash
#!/bin/bash
set -e

# Adastra Storage Uninstaller Script

INSTALL_DIR="/opt/adastra-storage"
SERVICE_USER="adastra"

echo "=========================================="
echo "Adastra Storage Uninstaller"
echo "=========================================="
echo ""

# Check if running as root
if [ "$EUID" -ne 0 ]; then
    echo "Please run as root (use sudo)"
    exit 1
fi

# Confirm removal
echo "This will remove Adastra Storage from your system."
echo "Installation directory: $INSTALL_DIR"
read -p "Are you sure you want to continue? (y/N) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "Uninstallation cancelled."
    exit 0
fi

echo ""
echo "Step 1: Stopping and disabling service..."

# Stop and disable service
if systemctl is-active adastra-storage.service >/dev/null 2>&1; then
    echo "Stopping adastra-storage service..."
    systemctl stop adastra-storage.service
fi

if systemctl is-enabled adastra-storage.service >/dev/null 2>&1; then
    echo "Disabling adastra-storage service..."
    systemctl disable adastra-storage.service
fi

systemctl daemon-reload

echo ""
echo "Step 2: Removing systemd service file..."

if [ -f /etc/systemd/system/adastra-storage.service ]; then
    rm -f /etc/systemd/system/adastra-storage.service
    systemctl daemon-reload
fi

echo ""
echo "Step 3: Removing application files..."

# Ask about data preservation
read -p "Do you want to preserve the data directory? (Y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Nn]$ ]]; then
    echo "Removing all files including data..."
    rm -rf "$INSTALL_DIR"
else
    echo "Preserving data directory..."
    if [ -d "$INSTALL_DIR/data" ]; then
        echo "Data directory preserved at: $INSTALL_DIR/data"
        # Remove everything except data
        find "$INSTALL_DIR" -mindepth 1 -maxdepth 1 ! -name data -exec rm -rf {} +
    else
        rm -rf "$INSTALL_DIR"
    fi
fi

echo ""
echo "Step 4: Removing service user (optional)..."

read -p "Remove service user '$SERVICE_USER'? (y/N) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
    if id "$SERVICE_USER" &>/dev/null; then
        userdel "$SERVICE_USER" 2>/dev/null || true
        echo "Service user removed."
    fi
fi

echo ""
echo "=========================================="
echo "Uninstallation Complete!"
echo "=========================================="
echo ""
echo "Note: System dependencies (golang, zfsutils-linux, etc.)"
echo "      were not removed. Remove them manually if needed:"
echo ""
echo "  apt-get remove golang-go zfsutils-linux smartmontools"
echo "    nfs-kernel-server samba targetcli-fb"
echo ""
```
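The uninstaller's `find "$INSTALL_DIR" -mindepth 1 -maxdepth 1 ! -name data -exec rm -rf {} +` deletes every top-level entry except `data/`. A self-contained illustration in a throwaway temp directory (directory and file names here are illustrative):

```shell
#!/bin/bash
# Demonstrate the data-preserving removal pattern in a throwaway directory.
demo="$(mktemp -d)"
mkdir -p "$demo/bin" "$demo/templates" "$demo/data"
touch "$demo/bin/adastra-storage" "$demo/data/appliance.db"

# Remove everything at the top level except the data directory.
find "$demo" -mindepth 1 -maxdepth 1 ! -name data -exec rm -rf {} +

ls "$demo"   # -> data
```

Only `data/` (and everything inside it) survives; `-mindepth 1` protects the install directory itself, and `-maxdepth 1` keeps the `! -name data` filter at the top level only.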