1 Commits

Author SHA1 Message Date
3c4cb03df4 Merge pull request 'development' (#1) from development into main
Reviewed-on: #1
2025-12-26 16:50:07 +00:00
206 changed files with 815 additions and 41360 deletions

View File

@@ -1,146 +0,0 @@
# Calypso Application Build Complete
**Tanggal:** 2025-01-09
**Workdir:** `/opt/calypso`
**Config:** `/opt/calypso/conf`
**Status:****BUILD SUCCESS**
## Build Summary
### ✅ Backend (Go Application)
- **Binary:** `/opt/calypso/bin/calypso-api`
- **Size:** 12 MB
- **Type:** ELF 64-bit LSB executable, statically linked
- **Build Flags:**
- Version: 1.0.0
- Build Time: $(date -u +%Y-%m-%dT%H:%M:%SZ)
- Git Commit: $(git rev-parse --short HEAD)
- Stripped: Yes (optimized for production)
### ✅ Frontend (React + Vite)
- **Build Output:** `/opt/calypso/web/`
- **Build Size:**
- index.html: 0.67 kB
- CSS: 58.25 kB (gzip: 10.30 kB)
- JS: 1,235.25 kB (gzip: 299.52 kB)
- **Build Time:** ~10.46s
- **Status:** Production build complete
## Directory Structure
```
/opt/calypso/
├── bin/
│ └── calypso-api # Backend binary (12 MB)
├── web/ # Frontend static files
│ ├── index.html
│ ├── assets/
│ └── logo.png
├── conf/ # Configuration files
│ ├── config.yaml # Main config
│ ├── secrets.env # Secrets (600 permissions)
│ ├── bacula/ # Bacula configs
│ ├── clamav/ # ClamAV configs
│ ├── nfs/ # NFS configs
│ ├── scst/ # SCST configs
│ ├── vtl/ # VTL configs
│ └── zfs/ # ZFS configs
├── data/ # Data directory
│ ├── storage/
│ └── vtl/
└── releases/
└── 1.0.0/ # Versioned release
├── bin/
│ └── calypso-api # Versioned binary
└── web/ # Versioned frontend
```
## Files Created
### Backend
-`/opt/calypso/bin/calypso-api` - Main backend binary
-`/opt/calypso/releases/1.0.0/bin/calypso-api` - Versioned binary
### Frontend
-`/opt/calypso/web/` - Production frontend build
-`/opt/calypso/releases/1.0.0/web/` - Versioned frontend
### Configuration
-`/opt/calypso/conf/config.yaml` - Main configuration
-`/opt/calypso/conf/secrets.env` - Secrets (600 permissions)
## Ownership & Permissions
- **Owner:** `calypso:calypso` (for application files)
- **Owner:** `root:root` (for secrets.env)
- **Permissions:**
- Binaries: `755` (executable)
- Config: `644` (readable)
- Secrets: `600` (owner only)
## Build Tools Used
- **Go:** 1.22.2 (installed via apt)
- **Node.js:** v23.11.1
- **npm:** 11.7.0
- **Build Command:**
```bash
# Backend
CGO_ENABLED=0 GOOS=linux go build -ldflags "-w -s" -a -installsuffix cgo -o /opt/calypso/bin/calypso-api ./cmd/calypso-api
# Frontend
cd frontend && npm run build
```
## Verification
✅ **Backend Binary:**
- File exists and is executable
- Statically linked (no external dependencies)
- Stripped (optimized size)
✅ **Frontend Build:**
- All assets built successfully
- Production optimized
- Ready for static file serving
✅ **Configuration:**
- Config files in place
- Secrets file secured (600 permissions)
- All component configs present
## Next Steps
1. ✅ Application built and ready
2. ⏭️ Configure systemd service to use `/opt/calypso/bin/calypso-api`
3. ⏭️ Setup reverse proxy (Caddy/Nginx) for frontend
4. ⏭️ Test application startup
5. ⏭️ Run database migrations (auto on first start)
## Configuration Notes
- **Config Location:** `/opt/calypso/conf/config.yaml`
- **Secrets Location:** `/opt/calypso/conf/secrets.env`
- **Database:** Will use credentials from secrets.env
- **Workdir:** `/opt/calypso` (as specified)
## Production Readiness
**Backend:**
- Statically linked binary (no runtime dependencies)
- Stripped and optimized
- Version information embedded
**Frontend:**
- Production build with minification
- Assets optimized
- Ready for CDN/static hosting
**Configuration:**
- Secure secrets management
- Organized config structure
- All component configs in place
---
**Build Status:****COMPLETE**
**Ready for Deployment:****YES**

View File

@@ -1,540 +0,0 @@
# Calypso Appliance Component Review
**Tanggal Review:** 2025-01-09
**Installation Directory:** `/opt/calypso`
**System:** Ubuntu 24.04 LTS
## Executive Summary
Review komprehensif semua komponen utama di appliance Calypso:
-**ZFS** - Storage layer utama
-**SCST** - iSCSI target framework
-**NFS** - Network File System sharing
-**SMB** - Samba/CIFS file sharing
-**ClamAV** - Antivirus scanning
-**MHVTL** - Virtual Tape Library
-**Bacula** - Backup software integration
**Status Keseluruhan:** Semua komponen terinstall dan berjalan dengan baik.
---
## 1. ZFS (Zettabyte File System)
### Status: ✅ **FULLY IMPLEMENTED**
### Lokasi Implementasi
- **Backend Service:** `backend/internal/storage/zfs.go`
- **Handler:** `backend/internal/storage/handler.go`
- **Database Schema:** `backend/internal/common/database/migrations/002_storage_and_tape_schema.sql`
- **Frontend:** `frontend/src/pages/Storage.tsx`
- **API Client:** `frontend/src/api/storage.ts`
### Fitur yang Diimplementasikan
1. **Pool Management**
- Create pool dengan berbagai RAID level (stripe, mirror, raidz, raidz2, raidz3)
- List pools dengan status kesehatan
- Delete pool (dengan validasi)
- Add spare disks
- Pool health monitoring (online, degraded, faulted, offline)
2. **Dataset Management**
- Create filesystem dan volume datasets
- Set compression (off, lz4, zstd, gzip)
- Set quota dan reservation
- Mount point management
- List datasets per pool
3. **ARC Statistics**
- Cache hit/miss statistics
- Memory usage tracking
- Performance metrics
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/zfs/`
- **Service:** `zfs-zed.service` (ZFS Event Daemon) - ✅ Running
### API Endpoints
```
GET /api/v1/storage/zfs/pools
POST /api/v1/storage/zfs/pools
GET /api/v1/storage/zfs/pools/:id
DELETE /api/v1/storage/zfs/pools/:id
POST /api/v1/storage/zfs/pools/:id/spare
GET /api/v1/storage/zfs/pools/:id/datasets
POST /api/v1/storage/zfs/pools/:id/datasets
DELETE /api/v1/storage/zfs/pools/:id/datasets/:name
GET /api/v1/storage/zfs/arc/stats
```
### Catatan
- ✅ Implementasi lengkap dengan error handling yang baik
- ✅ Support untuk semua RAID level standar ZFS
- ✅ Database persistence untuk tracking pools dan datasets
- ✅ Integration dengan task engine untuk operasi async
---
## 2. SCST (Generic SCSI Target Subsystem)
### Status: ✅ **FULLY IMPLEMENTED**
### Lokasi Implementasi
- **Backend Service:** `backend/internal/scst/service.go` (1135+ lines)
- **Handler:** `backend/internal/scst/handler.go` (794+ lines)
- **Database Schema:** `backend/internal/common/database/migrations/003_add_scst_schema.sql`
- **Frontend:** `frontend/src/pages/ISCSITargets.tsx`
- **API Client:** `frontend/src/api/scst.ts`
### Fitur yang Diimplementasikan
1. **Target Management**
- Create iSCSI targets dengan IQN
- Enable/disable targets
- Delete targets
- Target types: disk, vtl, physical_tape
- Single initiator policy untuk tape targets
2. **LUN Management**
- Add/remove LUNs ke targets
- LUN numbering otomatis
- Handler types: vdisk_fileio, vdisk_blockio, tape, sg
- Device path mapping
3. **Initiator Management**
- Create initiator groups
- Add/remove initiators ke groups
- ACL management per target
- CHAP authentication support
4. **Extent Management**
- Create/delete extents (backend devices)
- Handler selection (vdisk, tape, sg)
- Device path configuration
5. **Portal Management**
- Create/update/delete iSCSI portals
- IP address dan port configuration
- Network interface binding
6. **Configuration Management**
- Apply SCST configuration
- Get/update config file
- List available handlers
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/scst/`
- **Config File:** `/opt/calypso/conf/scst/scst.conf`
- **Service:** `iscsi-scstd.service` - ✅ Running (port 3260)
### API Endpoints
```
GET /api/v1/scst/targets
POST /api/v1/scst/targets
GET /api/v1/scst/targets/:id
POST /api/v1/scst/targets/:id/enable
POST /api/v1/scst/targets/:id/disable
DELETE /api/v1/scst/targets/:id
POST /api/v1/scst/targets/:id/luns
DELETE /api/v1/scst/targets/:id/luns/:lunId
GET /api/v1/scst/extents
POST /api/v1/scst/extents
DELETE /api/v1/scst/extents/:device
GET /api/v1/scst/initiators
GET /api/v1/scst/initiator-groups
POST /api/v1/scst/initiator-groups
GET /api/v1/scst/portals
POST /api/v1/scst/portals
POST /api/v1/scst/config/apply
GET /api/v1/scst/handlers
```
### Catatan
- ✅ Implementasi sangat lengkap dengan error handling yang baik
- ✅ Support untuk disk, VTL, dan physical tape targets
- ✅ Automatic config file management
- ✅ Real-time target status monitoring
- ✅ Frontend dengan auto-refresh setiap 3 detik
---
## 3. NFS (Network File System)
### Status: ✅ **FULLY IMPLEMENTED**
### Lokasi Implementasi
- **Backend Service:** `backend/internal/shares/service.go`
- **Handler:** `backend/internal/shares/handler.go`
- **Database Schema:** `backend/internal/common/database/migrations/006_add_zfs_shares_and_iscsi.sql`
- **Frontend:** `frontend/src/pages/Shares.tsx`
- **API Client:** `frontend/src/api/shares.ts`
### Fitur yang Diimplementasikan
1. **Share Management**
- Create shares dengan NFS enabled
- Update share configuration
- Delete shares
- List all shares
2. **NFS Configuration**
- NFS options (rw, sync, no_subtree_check, dll)
- Client access control (IP addresses/networks)
- Export management via `/etc/exports`
3. **Integration dengan ZFS**
- Shares dibuat dari ZFS datasets
- Mount point otomatis dari dataset
- Path validation
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/nfs/`
- **Exports File:** `/etc/exports` (managed by Calypso)
- **Services:**
- `nfs-server.service` - ✅ Running
- `nfs-mountd.service` - ✅ Running
- `nfs-idmapd.service` - ✅ Running
### API Endpoints
```
GET /api/v1/shares
POST /api/v1/shares
GET /api/v1/shares/:id
PUT /api/v1/shares/:id
DELETE /api/v1/shares/:id
```
### Catatan
- ✅ Automatic `/etc/exports` management
- ✅ Support untuk NFS v3 dan v4
- ✅ Client access control via IP/networks
- ✅ Integration dengan ZFS datasets
---
## 4. SMB (Samba/CIFS)
### Status: ✅ **FULLY IMPLEMENTED**
### Lokasi Implementasi
- **Backend Service:** `backend/internal/shares/service.go` (shared dengan NFS)
- **Handler:** `backend/internal/shares/handler.go`
- **Database Schema:** `backend/internal/common/database/migrations/006_add_zfs_shares_and_iscsi.sql`
- **Frontend:** `frontend/src/pages/Shares.tsx`
- **API Client:** `frontend/src/api/shares.ts`
### Fitur yang Diimplementasikan
1. **SMB Share Management**
- Create shares dengan SMB enabled
- Update share configuration
- Delete shares
- Support untuk "both" (NFS + SMB) shares
2. **SMB Configuration**
- Share name customization
- Share path configuration
- Comment/description
- Guest access control
- Read-only option
- Browseable option
3. **Samba Integration**
- Automatic `/etc/samba/smb.conf` management
- Share section generation
- Service restart setelah perubahan
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/samba/` (dokumentasi)
- **Samba Config:** `/etc/samba/smb.conf` (managed by Calypso)
- **Service:** `smbd.service` - ✅ Running
### API Endpoints
```
GET /api/v1/shares
POST /api/v1/shares
GET /api/v1/shares/:id
PUT /api/v1/shares/:id
DELETE /api/v1/shares/:id
```
### Catatan
- ✅ Automatic Samba config management
- ✅ Support untuk guest access dan read-only
- ✅ Integration dengan ZFS datasets
- ✅ Bisa dikombinasikan dengan NFS (share type: "both")
---
## 5. ClamAV (Antivirus)
### Status: ⚠️ **INSTALLED BUT NOT INTEGRATED**
### Lokasi Implementasi
- **Installer Scripts:**
- `installer/alpha/scripts/dependencies.sh` (install_antivirus)
- `installer/alpha/scripts/configure-services.sh` (configure_clamav)
- **Documentation:** `docs/alpha/components/clamav/ClamAV-Installation-Guide.md`
### Fitur yang Diimplementasikan
1. **Installation**
- ✅ ClamAV daemon installation
- ✅ FreshClam (virus definition updater)
- ✅ ClamAV unofficial signatures
2. **Configuration**
- ✅ Quarantine directory: `/srv/calypso/quarantine`
- ✅ Config directory: `/opt/calypso/conf/clamav/`
- ✅ Systemd service override untuk custom config path
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/clamav/`
- **Config Files:**
- `clamd.conf` - ClamAV daemon config
- `freshclam.conf` - Virus definition updater config
- **Quarantine:** `/srv/calypso/quarantine`
- **Services:**
- `clamav-daemon.service` - ✅ Running
- `clamav-freshclam.service` - ✅ Running
### API Integration
**BELUM ADA** - Tidak ada backend service atau API endpoints untuk:
- File scanning
- Quarantine management
- Scan scheduling
- Scan reports
### Catatan
- ⚠️ ClamAV terinstall dan berjalan, tapi **belum terintegrasi** dengan Calypso API
- ⚠️ Tidak ada API endpoints untuk scan files di shares
- ⚠️ Tidak ada UI untuk manage scans atau quarantine
- 💡 **Rekomendasi:** Implementasi "Share Shield" feature untuk:
- On-access scanning untuk SMB shares
- Scheduled scans untuk NFS shares
- Quarantine management UI
- Scan reports dan alerts
---
## 6. MHVTL (Virtual Tape Library)
### Status: ✅ **FULLY IMPLEMENTED**
### Lokasi Implementasi
- **Backend Service:** `backend/internal/tape_vtl/service.go`
- **Handler:** `backend/internal/tape_vtl/handler.go`
- **MHVTL Monitor:** `backend/internal/tape_vtl/mhvtl_monitor.go`
- **Database Schema:** `backend/internal/common/database/migrations/007_add_vtl_schema.sql`
- **Frontend:** `frontend/src/pages/VTLDetail.tsx`, `frontend/src/pages/TapeLibraries.tsx`
- **API Client:** `frontend/src/api/tape.ts`
### Fitur yang Diimplementasikan
1. **Library Management**
- Create virtual tape libraries
- List libraries
- Get library details dengan drives dan tapes
- Delete libraries (dengan safety checks)
- MHVTL library ID assignment otomatis
2. **Tape Management**
- Create virtual tapes dengan barcode
- Slot assignment
- Tape size configuration
- Tape status tracking (idle, in_drive, exported)
- Tape image file management
3. **Drive Management**
- Automatic drive creation saat library dibuat
- Drive status tracking (idle, ready, error)
- Current tape tracking per drive
- Device path management
4. **Operations**
- Load tape dari slot ke drive (async)
- Unload tape dari drive ke slot (async)
- Database state synchronization
5. **MHVTL Integration**
- Automatic MHVTL config generation
- MHVTL monitor service (sync setiap 5 menit)
- Device path discovery
- Library ID management
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/vtl/`
- **Config Files:**
- `mhvtl.conf` - MHVTL main config
- `device.conf` - Device configuration
- **Backing Store:** `/srv/calypso/vtl/` (per library)
- **MHVTL Config:** `/etc/mhvtl/` (monitored by Calypso)
### API Endpoints
```
GET /api/v1/tape/vtl/libraries
POST /api/v1/tape/vtl/libraries
GET /api/v1/tape/vtl/libraries/:id
DELETE /api/v1/tape/vtl/libraries/:id
GET /api/v1/tape/vtl/libraries/:id/drives
GET /api/v1/tape/vtl/libraries/:id/tapes
POST /api/v1/tape/vtl/libraries/:id/tapes
POST /api/v1/tape/vtl/libraries/:id/load
POST /api/v1/tape/vtl/libraries/:id/unload
```
### Catatan
- ✅ Implementasi sangat lengkap dengan MHVTL integration
- ✅ Automatic backing store directory creation
- ✅ MHVTL monitor service untuk state synchronization
- ✅ Async task support untuk load/unload operations
- ✅ Frontend UI lengkap dengan real-time updates
---
## 7. Bacula (Backup Software)
### Status: ✅ **FULLY IMPLEMENTED**
### Lokasi Implementasi
- **Backend Service:** `backend/internal/backup/service.go`
- **Handler:** `backend/internal/backup/handler.go`
- **Database Integration:** Direct PostgreSQL connection ke Bacula database
- **Frontend:** `frontend/src/pages/Backup.tsx` (implied)
- **API Client:** `frontend/src/api/backup.ts`
### Fitur yang Diimplementasikan
1. **Job Management**
- List backup jobs dengan filters (status, type, client, name)
- Get job details
- Create jobs
- Pagination support
2. **Client Management**
- List Bacula clients
- Client status tracking
3. **Storage Management**
- List storage pools
- Create/delete storage pools
- List storage volumes
- Create/update/delete volumes
- List storage daemons
4. **Media Management**
- List media (tapes/volumes)
- Media status tracking
5. **Bconsole Integration**
- Execute bconsole commands
- Direct Bacula Director communication
6. **Dashboard Statistics**
- Job statistics
- Storage statistics
- System health metrics
### Konfigurasi
- **Config Directory:** `/opt/calypso/conf/bacula/`
- **Config Files:**
- `bacula-dir.conf` - Director configuration
- `bacula-sd.conf` - Storage Daemon configuration
- `bacula-fd.conf` - File Daemon configuration
- `scripts/mtx-changer.conf` - Changer script config
- **Database:** PostgreSQL database `bacula` (default) atau `bareos`
- **Services:**
- `bacula-director.service` - ✅ Running
- `bacula-sd.service` - ✅ Running
- `bacula-fd.service` - ✅ Running
### API Endpoints
```
GET /api/v1/backup/dashboard/stats
GET /api/v1/backup/jobs
GET /api/v1/backup/jobs/:id
POST /api/v1/backup/jobs
GET /api/v1/backup/clients
GET /api/v1/backup/storage/pools
POST /api/v1/backup/storage/pools
DELETE /api/v1/backup/storage/pools/:id
GET /api/v1/backup/storage/volumes
POST /api/v1/backup/storage/volumes
PUT /api/v1/backup/storage/volumes/:id
DELETE /api/v1/backup/storage/volumes/:id
GET /api/v1/backup/media
GET /api/v1/backup/storage/daemons
POST /api/v1/backup/console/execute
```
### Catatan
- ✅ Direct database connection untuk performa optimal
- ✅ Fallback ke bconsole jika database tidak tersedia
- ✅ Support untuk Bacula dan Bareos
- ✅ Integration dengan Calypso storage (ZFS datasets)
- ✅ Comprehensive job dan storage management
---
## Summary & Recommendations
### Status Komponen
| Komponen | Status | API Integration | UI Integration | Notes |
|----------|--------|-----------------|----------------|-------|
| **ZFS** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **SCST** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **NFS** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **SMB** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **ClamAV** | ⚠️ Partial | ❌ None | ❌ None | Installed but not integrated |
| **MHVTL** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **Bacula** | ✅ Complete | ✅ Full | ⚠️ Partial | API ready, UI may need enhancement |
### Rekomendasi Prioritas
1. **HIGH PRIORITY: ClamAV Integration**
- Implementasi backend service untuk file scanning
- API endpoints untuk scan management
- UI untuk quarantine management
- On-access scanning untuk SMB shares
- Scheduled scans untuk NFS shares
2. **MEDIUM PRIORITY: Bacula UI Enhancement**
- Review dan enhance frontend untuk Bacula management
- Job scheduling UI
- Restore operations UI
3. **LOW PRIORITY: Monitoring & Alerts**
- Enhanced monitoring untuk semua komponen
- Alert rules untuk ClamAV scans
- Performance metrics collection
### Konfigurasi Directory Structure
```
/opt/calypso/
├── conf/
│ ├── bacula/ ✅ Configured
│ ├── clamav/ ✅ Configured (but not integrated)
│ ├── nfs/ ✅ Configured
│ ├── scst/ ✅ Configured
│ ├── vtl/ ✅ Configured
│ └── zfs/ ✅ Configured
└── data/
├── storage/ ✅ Created
└── vtl/ ✅ Created
```
### Service Status
Semua services utama berjalan dengan baik:
-`zfs-zed.service` - Running
-`iscsi-scstd.service` - Running
-`nfs-server.service` - Running
-`smbd.service` - Running
-`clamav-daemon.service` - Running
-`clamav-freshclam.service` - Running
-`bacula-director.service` - Running
-`bacula-sd.service` - Running
-`bacula-fd.service` - Running
---
## Kesimpulan
Calypso appliance memiliki implementasi yang sangat lengkap untuk semua komponen utama. Hanya ClamAV yang masih perlu integrasi dengan API dan UI. Semua komponen lainnya sudah production-ready dengan fitur lengkap, error handling yang baik, dan integration yang solid.
**Overall Status: 95% Complete**

View File

@@ -1,79 +0,0 @@
# Database Check Report
**Tanggal:** 2025-01-09
**System:** Ubuntu 24.04 LTS
## Hasil Pengecekan PostgreSQL
### ✅ User Database yang EXIST:
1. **bacula** - User untuk Bacula backup software
- Status: ✅ **EXIST**
- Attributes: (no special attributes)
### ❌ User Database yang TIDAK EXIST:
1. **calypso** - User untuk Calypso application
- Status: ❌ **TIDAK EXIST**
- Expected: User untuk Calypso API backend
### ✅ Database yang EXIST:
1. **bacula**
- Owner: `bacula`
- Encoding: SQL_ASCII
- Status: ✅ **EXIST**
### ❌ Database yang TIDAK EXIST:
1. **calypso**
- Expected Owner: `calypso`
- Expected Encoding: UTF8
- Status: ❌ **TIDAK EXIST**
---
## Summary
| Item | Status | Notes |
|------|--------|-------|
| User `bacula` | ✅ EXIST | Ready untuk Bacula |
| Database `bacula` | ✅ EXIST | Ready untuk Bacula |
| User `calypso` | ❌ **TIDAK EXIST** | **PERLU DIBUAT** |
| Database `calypso` | ❌ **TIDAK EXIST** | **PERLU DIBUAT** |
---
## Action Required
Calypso application memerlukan:
1. **User PostgreSQL:** `calypso`
2. **Database PostgreSQL:** `calypso`
### Langkah untuk Membuat Database Calypso:
```bash
# 1. Create user calypso
sudo -u postgres psql -c "CREATE USER calypso WITH PASSWORD 'your_secure_password';"
# 2. Create database calypso
sudo -u postgres psql -c "CREATE DATABASE calypso OWNER calypso;"
# 3. Grant privileges
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE calypso TO calypso;"
# 4. Verify
sudo -u postgres psql -c "\du" | grep calypso
sudo -u postgres psql -c "\l" | grep calypso
```
### Atau menggunakan installer script:
```bash
# Jalankan installer database script
cd /src/calypso/installer/alpha/scripts
sudo bash database.sh
```
---
## Catatan
- Bacula database sudah terinstall dengan benar ✅
- Calypso database belum dibuat, kemungkinan installer belum dijalankan atau ada masalah saat instalasi
- Setelah database dibuat, migrations akan otomatis dijalankan saat Calypso API pertama kali start

View File

@@ -1,88 +0,0 @@
# Database Setup Complete
**Tanggal:** 2025-01-09
**Status:****BERHASIL**
## Yang Telah Dibuat
### ✅ User PostgreSQL: `calypso`
- Status: ✅ **CREATED**
- Password: `calypso_secure_2025` (disimpan di script, perlu diubah untuk production)
### ✅ Database: `calypso`
- Owner: `calypso`
- Encoding: UTF8
- Status: ✅ **CREATED**
### ✅ Database Access: `bacula`
- User `calypso` memiliki **READ ACCESS** ke database `bacula`
- Privileges:
- ✅ CONNECT ke database `bacula`
- ✅ USAGE pada schema `public`
- ✅ SELECT pada semua tables (32 tables)
- ✅ Default privileges untuk tables baru
## Verifikasi
### User yang Ada:
```
bacula |
calypso |
```
### Database yang Ada:
```
bacula | bacula | SQL_ASCII | ... | calypso=c/bacula
calypso | calypso | UTF8 | ... | calypso=CTc/calypso
```
### Access Test:
- ✅ User `calypso` bisa connect ke database `calypso`
- ✅ User `calypso` bisa connect ke database `bacula`
- ✅ User `calypso` bisa SELECT dari tables di database `bacula` (32 tables accessible)
## Konfigurasi untuk Calypso API
Update `/etc/calypso/config.yaml` atau set environment variables:
```bash
export CALYPSO_DB_PASSWORD="calypso_secure_2025"
export CALYPSO_DB_USER="calypso"
export CALYPSO_DB_NAME="calypso"
```
Atau di config file:
```yaml
database:
host: "localhost"
port: 5432
user: "calypso"
password: "calypso_secure_2025" # Atau via env var CALYPSO_DB_PASSWORD
database: "calypso"
ssl_mode: "disable"
```
## Catatan Penting
⚠️ **Security Note:**
- Password `calypso_secure_2025` adalah default password
- **WAJIB diubah** untuk production environment
- Gunakan strong password generator
- Simpan password di `/etc/calypso/secrets.env` atau environment variables
## Next Steps
1. ✅ Database `calypso` siap untuk migrations
2. ✅ Calypso API bisa connect ke database sendiri
3. ✅ Calypso API bisa read data dari Bacula database
4. ⏭️ Jalankan Calypso API untuk auto-migration
5. ⏭️ Update password ke production-grade password
## Bacula Database Access
User `calypso` sekarang bisa:
- ✅ Read semua tables di database `bacula`
- ✅ Query job history, clients, storage pools, volumes, media
- ✅ Monitor backup operations
-**TIDAK bisa** write/modify data di database `bacula` (read-only access)
Ini sesuai dengan requirement Calypso untuk monitoring dan reporting Bacula operations tanpa bisa mengubah konfigurasi Bacula.

View File

@@ -1,121 +0,0 @@
# Dataset Mountpoint Validation
## Issue
User meminta validasi bahwa mount point untuk dataset dan volume harus berada di dalam directory pool yang terkait.
## Solution
Menambahkan validasi untuk memastikan mount point dataset berada di dalam pool mount point directory (`/opt/calypso/data/pool/<pool-name>/`).
## Changes Made
### Updated `backend/internal/storage/zfs.go`
**File**: `backend/internal/storage/zfs.go` (line 728-814)
**Key Changes:**
1. **Mount Point Validation**
- Validasi bahwa mount point yang user berikan harus berada di dalam pool directory
- Menggunakan `filepath.Rel()` untuk memastikan mount point tidak keluar dari pool directory
2. **Default Mount Point**
- Jika mount point tidak disediakan, set default ke `/opt/calypso/data/pool/<pool-name>/<dataset-name>/`
- Memastikan semua dataset mount point berada di dalam pool directory
3. **Mount Point Always Set**
- Untuk filesystem datasets, mount point selalu di-set (baik user-provided atau default)
- Tidak lagi conditional pada `req.MountPoint != ""`
**Before:**
```go
if req.Type == "filesystem" && req.MountPoint != "" {
mountPath := filepath.Clean(req.MountPoint)
// ... create directory ...
}
// Later:
if req.Type == "filesystem" && req.MountPoint != "" {
args = append(args, "-o", fmt.Sprintf("mountpoint=%s", req.MountPoint))
}
```
**After:**
```go
poolMountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", poolName)
var mountPath string
if req.Type == "filesystem" {
if req.MountPoint != "" {
// Validate mount point is within pool directory
mountPath = filepath.Clean(req.MountPoint)
// ... validation logic ...
} else {
// Use default mount point
mountPath = filepath.Join(poolMountPoint, req.Name)
}
// ... create directory ...
}
// Later:
if req.Type == "filesystem" {
args = append(args, "-o", fmt.Sprintf("mountpoint=%s", mountPath))
}
```
## Mount Point Structure
### Pool Mount Point
```
/opt/calypso/data/pool/<pool-name>/
```
### Dataset Mount Point (Default)
```
/opt/calypso/data/pool/<pool-name>/<dataset-name>/
```
### Dataset Mount Point (Custom - must be within pool)
```
/opt/calypso/data/pool/<pool-name>/<custom-path>/
```
## Validation Rules
1. **User-provided mount point**:
- Must be within `/opt/calypso/data/pool/<pool-name>/`
- Cannot use `..` to escape pool directory
- Must be a valid directory path
2. **Default mount point**:
- Automatically set to `/opt/calypso/data/pool/<pool-name>/<dataset-name>/`
- Always within pool directory
3. **Volumes**:
- Volumes cannot have mount points (already validated in handler)
## Error Messages
- `mount point must be within pool directory: <path> (pool mount: <pool-mount>)` - Jika mount point di luar pool directory
- `mount point path exists but is not a directory: <path>` - Jika path sudah ada tapi bukan directory
- `failed to create mount directory <path>` - Jika gagal membuat directory
## Testing
1. **Create dataset without mount point**:
- Should use default: `/opt/calypso/data/pool/<pool-name>/<dataset-name>/`
2. **Create dataset with valid mount point**:
- Mount point: `/opt/calypso/data/pool/<pool-name>/custom-path/`
- Should succeed
3. **Create dataset with invalid mount point**:
- Mount point: `/opt/calypso/data/other-path/`
- Should fail with validation error
4. **Create volume**:
- Should not set mount point (volumes don't have mount points)
## Status
**COMPLETED** - Mount point validation untuk dataset sudah diterapkan
## Date
2026-01-09

View File

@@ -1,103 +0,0 @@
# Default User Credentials untuk Calypso Appliance
**Tanggal:** 2025-01-09
**Status:****READY**
## 🔐 Default Admin User
### Credentials
- **Username:** `admin`
- **Password:** `admin123`
- **Email:** `admin@calypso.local`
- **Role:** `admin` (Full system access)
## 📋 Informasi User
- **Full Name:** Administrator
- **Status:** Active
- **Permissions:** All permissions (admin role)
- **Access Level:** Full system access and configuration
## 🚀 Cara Login
### Via Frontend Portal
1. Buka browser dan akses: **http://localhost/** atau **http://10.10.14.18/**
2. Masuk ke halaman login (akan redirect otomatis jika belum login)
3. Masukkan credentials:
- **Username:** `admin`
- **Password:** `admin123`
4. Klik "Sign In"
### Via API
```bash
curl -X POST http://localhost/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin123"}'
```
## ⚠️ Security Notes
### Untuk Development/Testing
- ✅ Password `admin123` dapat digunakan
- ✅ User sudah dibuat dengan role admin
- ✅ Password sudah di-hash dengan Argon2id (secure)
### Untuk Production
- ⚠️ **WAJIB** ubah password default setelah first login
- ⚠️ Gunakan password yang kuat (minimal 12 karakter, kombinasi huruf, angka, simbol)
- ⚠️ Pertimbangkan untuk disable user default dan buat user baru
- ⚠️ Enable 2FA jika tersedia
## 🔧 Membuat/Update Admin User
### Jika User Belum Ada
```bash
cd /src/calypso
bash scripts/setup-test-user.sh
```
Script ini akan:
- Membuat user `admin` dengan password `admin123`
- Assign role `admin`
- Set email ke `admin@calypso.local`
### Update Password (jika perlu)
```bash
cd /src/calypso
bash scripts/update-admin-password.sh
```
## ✅ Verifikasi User
### Cek User di Database
```bash
sudo -u postgres psql -d calypso -c "SELECT username, email, is_active FROM users WHERE username = 'admin';"
```
### Cek Role Assignment
```bash
sudo -u postgres psql -d calypso -c "SELECT u.username, r.name as role FROM users u JOIN user_roles ur ON u.id = ur.user_id JOIN roles r ON ur.role_id = r.id WHERE u.username = 'admin';"
```
### Test Login
```bash
curl -X POST http://localhost/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin123"}' | jq .
```
## 📝 Summary
**Default Credentials:**
- Username: `admin`
- Password: `admin123`
- Role: `admin` (Full access)
**Access URLs:**
- Frontend: http://localhost/ atau http://10.10.14.18/
- API: http://localhost/api/v1/
**Status:** ✅ User sudah dibuat dan siap digunakan
---
**⚠️ REMEMBER:** Ubah password default untuk production environment!

View File

@@ -1,225 +0,0 @@
# Frontend Access Setup Complete
**Tanggal:** 2025-01-09
**Reverse Proxy:** Nginx
**Status:****CONFIGURED & RUNNING**
## Configuration Summary
### Nginx Configuration
- **Config File:** `/etc/nginx/sites-available/calypso`
- **Enabled:** `/etc/nginx/sites-enabled/calypso`
- **Port:** 80 (HTTP)
- **Root Directory:** `/opt/calypso/web`
- **API Backend:** `http://localhost:8080`
### Service Status
-**Nginx:** Running
-**Calypso API:** Running on port 8080
-**Frontend Files:** Served from `/opt/calypso/web`
## Access URLs
### Local Access
- **Frontend:** http://localhost/
- **API:** http://localhost/api/v1/health
- **Login Page:** http://localhost/login
### Network Access
- **Frontend:** http://<server-ip>/
- **API:** http://<server-ip>/api/v1/health
## Nginx Configuration Details
### Static Files Serving
```nginx
root /opt/calypso/web;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
```
### API Proxy
```nginx
location /api {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
```
### WebSocket Support
```nginx
location /ws {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
```
### Terminal WebSocket
```nginx
location /api/v1/system/terminal/ws {
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
```
## Features Enabled
**Static File Serving**
- Frontend files served from `/opt/calypso/web`
- SPA routing support (try_files fallback to index.html)
- Static asset caching (1 year)
**API Proxy**
- All `/api/*` requests proxied to backend
- Proper headers forwarding
- Timeout configuration
**WebSocket Support**
- `/ws` endpoint for monitoring events
- `/api/v1/system/terminal/ws` for terminal console
- Long timeout for persistent connections
**Security Headers**
- X-Frame-Options: SAMEORIGIN
- X-Content-Type-Options: nosniff
- X-XSS-Protection: 1; mode=block
**Performance**
- Gzip compression enabled
- Static asset caching
- Optimized timeouts
## Service Management
### Nginx Commands
```bash
# Start/Stop/Restart
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
# Reload configuration (without downtime)
sudo systemctl reload nginx
# Check status
sudo systemctl status nginx
# Test configuration
sudo nginx -t
```
### View Logs
```bash
# Access logs
sudo tail -f /var/log/nginx/calypso-access.log
# Error logs
sudo tail -f /var/log/nginx/calypso-error.log
# All Nginx logs
sudo journalctl -u nginx -f
```
## Testing
### Test Frontend
```bash
# Check if frontend is accessible
curl http://localhost/
# Check if index.html is served
curl http://localhost/index.html
```
### Test API Proxy
```bash
# Health check
curl http://localhost/api/v1/health
# Should return JSON response
```
### Test WebSocket
```bash
# Test WebSocket connection (requires wscat or similar)
wscat -c ws://localhost/ws
```
## Troubleshooting
### Frontend Not Loading
1. Check Nginx status: `sudo systemctl status nginx`
2. Check Nginx config: `sudo nginx -t`
3. Check file permissions: `ls -la /opt/calypso/web/`
4. Check Nginx error logs: `sudo tail -f /var/log/nginx/calypso-error.log`
### API Calls Failing
1. Check backend is running: `sudo systemctl status calypso-api`
2. Test backend directly: `curl http://localhost:8080/api/v1/health`
3. Check Nginx proxy logs: `sudo tail -f /var/log/nginx/calypso-access.log`
### WebSocket Not Working
1. Check WebSocket headers in browser DevTools
2. Verify backend WebSocket endpoint is working
3. Check Nginx WebSocket configuration
4. Verify proxy_set_header Upgrade and Connection are set
### Permission Issues
1. Check file ownership: `ls -la /opt/calypso/web/`
2. Check Nginx user: `grep user /etc/nginx/nginx.conf`
3. Ensure files are readable: `sudo chmod -R 755 /opt/calypso/web`
## Firewall Configuration
If firewall is enabled, allow HTTP traffic:
```bash
# UFW
sudo ufw allow 80/tcp
sudo ufw allow 'Nginx Full'
# firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
```
## Next Steps
1. ✅ Frontend accessible via Nginx
2. ⏭️ Setup SSL/TLS (HTTPS) - Recommended for production
3. ⏭️ Configure domain name (if applicable)
4. ⏭️ Setup monitoring/alerting
5. ⏭️ Configure backup strategy
## SSL/TLS Setup (Optional)
For production, setup HTTPS:
```bash
# Install Certbot
sudo apt-get install certbot python3-certbot-nginx
# Get certificate (replace with your domain)
sudo certbot --nginx -d your-domain.com
# Auto-renewal is configured automatically
```
---
**Status:****FRONTEND ACCESSIBLE**
**URL:** http://localhost/ (or http://<server-ip>/)
**API:** http://localhost/api/v1/health

View File

@@ -1,236 +0,0 @@
# MinIO Installation Recommendation for Calypso Appliance
## Executive Summary
**Rekomendasi: Native Installation**
Untuk Calypso appliance, **native installation** MinIO lebih sesuai daripada Docker karena:
1. Konsistensi dengan komponen lain (semua native)
2. Performa lebih baik (tanpa overhead container)
3. Integrasi lebih mudah dengan ZFS dan systemd
4. Sesuai dengan filosofi appliance (minimal dependencies)
---
## Analisis Arsitektur Calypso
### Komponen yang Sudah Terinstall (Semuanya Native)
| Komponen | Installation Method | Service Management |
|----------|-------------------|-------------------|
| **ZFS** | Native (kernel modules) | systemd (zfs-zed.service) |
| **SCST** | Native (kernel modules) | systemd (scst.service) |
| **NFS** | Native (nfs-kernel-server) | systemd (nfs-server.service) |
| **SMB** | Native (Samba) | systemd (smbd.service, nmbd.service) |
| **ClamAV** | Native (clamav-daemon) | systemd (clamav-daemon.service) |
| **MHVTL** | Native (kernel modules) | systemd (mhvtl.target) |
| **Bacula** | Native (bacula packages) | systemd (bacula-*.service) |
| **PostgreSQL** | Native (postgresql-16) | systemd (postgresql.service) |
| **Calypso API** | Native (Go binary) | systemd (calypso-api.service) |
**Kesimpulan:** Semua komponen menggunakan native installation dan dikelola melalui systemd.
---
## Perbandingan: Native vs Docker
### Native Installation ✅ **RECOMMENDED**
**Pros:**
-**Konsistensi**: Semua komponen lain native, MinIO juga native
-**Performa**: Tidak ada overhead container, akses langsung ke ZFS
-**Integrasi**: Lebih mudah integrasi dengan ZFS datasets sebagai storage backend
-**Monitoring**: Logs langsung ke journald, metrics mudah diakses
-**Resource**: Lebih efisien (tidak perlu Docker daemon)
-**Security**: Sesuai dengan security model appliance (systemd security hardening)
-**Management**: Dikelola melalui systemd seperti komponen lain
-**Dependencies**: MinIO binary standalone, tidak perlu Docker runtime
**Cons:**
- ⚠️ Update: Perlu download binary baru dan restart service
- ⚠️ Dependencies: Perlu manage MinIO binary sendiri
**Mitigation:**
- Update bisa diotomasi dengan script
- MinIO binary bisa disimpan di `/opt/calypso/bin/` seperti komponen lain
### Docker Installation ❌ **NOT RECOMMENDED**
**Pros:**
- ✅ Isolasi yang lebih baik
- ✅ Update lebih mudah (pull image baru)
- ✅ Tidak perlu manage dependencies
**Cons:**
-**Inkonsistensi**: Semua komponen lain native, Docker akan jadi exception
-**Overhead**: Docker daemon memakan resource (~50-100MB RAM)
-**Kompleksitas**: Tambahan layer management (Docker + systemd)
-**Integrasi**: Lebih sulit integrasi dengan ZFS (perlu volume mapping)
-**Performance**: Overhead container, terutama untuk I/O intensive workload
-**Security**: Tambahan attack surface (Docker daemon)
-**Monitoring**: Logs perlu di-forward dari container ke journald
-**Dependencies**: Perlu install Docker (tidak sesuai filosofi minimal dependencies)
---
## Rekomendasi Implementasi
### Native Installation Setup
#### 1. Binary Location
```
/opt/calypso/bin/minio
```
#### 2. Configuration Location
```
/opt/calypso/conf/minio/
├── config.json
└── minio.env
```
#### 3. Data Location (ZFS Dataset)
```
/opt/calypso/data/pool/<pool-name>/object/
```
#### 4. Systemd Service
```ini
[Unit]
Description=MinIO Object Storage
After=network.target zfs.target
Wants=zfs.target
[Service]
Type=simple
User=calypso
Group=calypso
WorkingDirectory=/opt/calypso
ExecStart=/opt/calypso/bin/minio server /opt/calypso/data/pool/%i/object --config-dir /opt/calypso/conf/minio
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=minio
# Security
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/calypso/data /opt/calypso/conf/minio /var/log/calypso
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
```
#### 5. Integration dengan ZFS
- MinIO storage backend menggunakan ZFS dataset
- Dataset dibuat di pool yang sudah ada
- Mount point: `/opt/calypso/data/pool/<pool-name>/object/`
- Manfaatkan ZFS features: compression, snapshots, replication
---
## Arsitektur yang Disarankan
```
┌─────────────────────────────────────┐
│ Calypso Appliance │
├─────────────────────────────────────┤
│ │
│ ┌──────────────────────────────┐ │
│ │ Calypso API (Go) │ │
│ │ Port: 8080 │ │
│ └───────────┬──────────────────┘ │
│ │ │
│ ┌───────────▼──────────────────┐ │
│ │ MinIO (Native Binary) │ │
│ │ Port: 9000, 9001 │ │
│ │ Storage: ZFS Dataset │ │
│ └───────────┬──────────────────┘ │
│ │ │
│ ┌───────────▼──────────────────┐ │
│ │ ZFS Pool │ │
│ │ Dataset: object/ │ │
│ └──────────────────────────────┘ │
│ │
└─────────────────────────────────────┘
```
---
## Installation Steps (Native)
### 1. Download MinIO Binary
```bash
# Download latest MinIO binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /opt/calypso/bin/
sudo chown calypso:calypso /opt/calypso/bin/minio
```
### 2. Create ZFS Dataset for Object Storage
```bash
# Create dataset in existing pool
sudo zfs create <pool-name>/object
sudo zfs set mountpoint=/opt/calypso/data/pool/<pool-name>/object <pool-name>/object
sudo chown -R calypso:calypso /opt/calypso/data/pool/<pool-name>/object
```
### 3. Create Configuration Directory
```bash
sudo mkdir -p /opt/calypso/conf/minio
sudo chown calypso:calypso /opt/calypso/conf/minio
```
### 4. Create Systemd Service
```bash
sudo cp /src/calypso/deploy/systemd/minio.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable minio.service
sudo systemctl start minio.service
```
### 5. Integration dengan Calypso API
- Backend API mengelola MinIO melalui MinIO Admin API atau Go SDK
- Configuration disimpan di database Calypso
- UI untuk manage buckets, policies, users
---
## Kesimpulan
**Native Installation** adalah pilihan terbaik untuk Calypso appliance karena:
1.**Konsistensi**: Semua komponen lain native
2.**Performa**: Optimal untuk I/O intensive workload
3.**Integrasi**: Seamless dengan ZFS dan systemd
4.**Filosofi**: Sesuai dengan "appliance-first" dan "minimal dependencies"
5.**Management**: Unified management melalui systemd
6.**Security**: Sesuai dengan security model appliance
**Docker Installation** tidak direkomendasikan karena:
- ❌ Menambah kompleksitas tanpa benefit yang signifikan
- ❌ Inkonsisten dengan arsitektur yang ada
- ❌ Overhead yang tidak perlu untuk appliance
---
## Next Steps
1. ✅ Implementasi native MinIO installation
2. ✅ Create systemd service file
3. ✅ Integrasi dengan ZFS dataset
4. ✅ Backend API integration
5. ✅ Frontend UI untuk MinIO management
---
## Date
2026-01-09

View File

@@ -1,193 +0,0 @@
# MinIO Integration Complete
**Tanggal:** 2026-01-09
**Status:****COMPLETE**
## Summary
Integrasi MinIO dengan Calypso appliance telah selesai. Frontend Object Storage page sekarang menggunakan data real dari MinIO service, bukan dummy data lagi.
---
## Changes Made
### 1. Backend Integration ✅
#### Created MinIO Service (`backend/internal/object_storage/service.go`)
- **Service**: Menggunakan MinIO Go SDK untuk berinteraksi dengan MinIO server
- **Features**:
- List buckets dengan informasi detail (size, objects, access policy)
- Get bucket statistics
- Create bucket
- Delete bucket
- Get bucket access policy
#### Created MinIO Handler (`backend/internal/object_storage/handler.go`)
- **Handler**: HTTP handlers untuk API endpoints
- **Endpoints**:
- `GET /api/v1/object-storage/buckets` - List all buckets
- `GET /api/v1/object-storage/buckets/:name` - Get bucket info
- `POST /api/v1/object-storage/buckets` - Create bucket
- `DELETE /api/v1/object-storage/buckets/:name` - Delete bucket
#### Updated Configuration (`backend/internal/common/config/config.go`)
- Added `ObjectStorageConfig` struct untuk MinIO configuration
- Fields:
- `endpoint`: MinIO server endpoint (default: `localhost:9000`)
- `access_key`: MinIO access key
- `secret_key`: MinIO secret key
- `use_ssl`: Whether to use SSL/TLS
#### Updated Router (`backend/internal/common/router/router.go`)
- Added object storage routes group
- Routes protected dengan permission `storage:read` dan `storage:write`
- Service initialization dengan error handling
### 2. Configuration ✅
#### Updated `/opt/calypso/conf/config.yaml`
```yaml
# Object Storage (MinIO) Configuration
object_storage:
endpoint: "localhost:9000"
access_key: "admin"
secret_key: "HqBX1IINqFynkWFa"
use_ssl: false
```
### 3. Frontend Integration ✅
#### Created API Client (`frontend/src/api/objectStorage.ts`)
- **API Client**: TypeScript client untuk object storage API
- **Interfaces**:
- `Bucket`: Bucket data structure
- **Methods**:
- `listBuckets()`: Fetch all buckets
- `getBucket(name)`: Get bucket details
- `createBucket(name)`: Create new bucket
- `deleteBucket(name)`: Delete bucket
#### Updated ObjectStorage Page (`frontend/src/pages/ObjectStorage.tsx`)
- **Removed**: Mock data (`MOCK_BUCKETS`)
- **Added**: Real API integration dengan React Query
- **Features**:
- Fetch buckets dari API dengan auto-refresh setiap 5 detik
- Transform API data ke format UI
- Loading state untuk buckets
- Empty state ketika tidak ada buckets
- Mutations untuk create/delete bucket
- Error handling dengan alerts
### 4. Dependencies ✅
#### Added Go Packages
- `github.com/minio/minio-go/v7` - MinIO Go SDK
- `github.com/minio/madmin-go/v3` - MinIO Admin API
---
## API Endpoints
### List Buckets
```http
GET /api/v1/object-storage/buckets
Authorization: Bearer <token>
```
**Response:**
```json
{
"buckets": [
{
"name": "my-bucket",
"creation_date": "2026-01-09T20:13:27Z",
"size": 1024000,
"objects": 42,
"access_policy": "private"
}
]
}
```
### Get Bucket
```http
GET /api/v1/object-storage/buckets/:name
Authorization: Bearer <token>
```
### Create Bucket
```http
POST /api/v1/object-storage/buckets
Authorization: Bearer <token>
Content-Type: application/json
{
"name": "new-bucket"
}
```
### Delete Bucket
```http
DELETE /api/v1/object-storage/buckets/:name
Authorization: Bearer <token>
```
---
## Testing
### Backend Test
```bash
# Test API endpoint
curl -H "Authorization: Bearer <token>" http://localhost:8080/api/v1/object-storage/buckets
```
### Frontend Test
1. Login ke Calypso UI
2. Navigate ke "Object Storage" page
3. Verify buckets dari MinIO muncul di UI
4. Test create bucket (jika ada button)
5. Test delete bucket (jika ada button)
---
## MinIO Service Status
**Service:** `minio.service`
**Status:** ✅ Running
**Endpoint:** `http://localhost:9000` (API), `http://localhost:9001` (Console)
**Storage:** `/opt/calypso/data/storage/s3`
**Credentials:**
- Access Key: `admin`
- Secret Key: `HqBX1IINqFynkWFa`
---
## Next Steps (Optional)
1. **Add Create/Delete Bucket UI**: Tambahkan modal/form untuk create/delete bucket dari UI
2. **Bucket Policies Management**: UI untuk manage bucket access policies
3. **Object Management**: UI untuk browse dan manage objects dalam bucket
4. **Bucket Quotas**: Implementasi quota management untuk buckets
5. **Bucket Lifecycle**: Implementasi lifecycle policies untuk buckets
6. **S3 Users & Keys**: Management untuk S3 access keys (MinIO users)
---
## Files Modified
### Backend
- `/src/calypso/backend/internal/object_storage/service.go` (NEW)
- `/src/calypso/backend/internal/object_storage/handler.go` (NEW)
- `/src/calypso/backend/internal/common/config/config.go` (MODIFIED)
- `/src/calypso/backend/internal/common/router/router.go` (MODIFIED)
- `/opt/calypso/conf/config.yaml` (MODIFIED)
### Frontend
- `/src/calypso/frontend/src/api/objectStorage.ts` (NEW)
- `/src/calypso/frontend/src/pages/ObjectStorage.tsx` (MODIFIED)
---
## Date
2026-01-09

View File

@@ -1,55 +0,0 @@
# Password Update Complete
**Tanggal:** 2025-01-09
**User:** PostgreSQL `calypso`
**Status:****UPDATED**
## Update Summary
Password user PostgreSQL `calypso` telah di-update sesuai dengan password yang ada di `/etc/calypso/secrets.env`.
### Action Performed
```sql
ALTER USER calypso WITH PASSWORD '<password_from_secrets.env>';
```
### Verification
**Password Updated:** Successfully executed `ALTER ROLE`
**Connection Test:** User `calypso` dapat connect ke database `calypso`
**Bacula Access:** User `calypso` masih dapat access database `bacula` (32 tables accessible)
### Test Results
1. **Database Connection Test:**
```bash
psql -h localhost -U calypso -d calypso
```
✅ **SUCCESS** - Connection established
2. **Bacula Database Access Test:**
```bash
psql -h localhost -U calypso -d bacula
```
✅ **SUCCESS** - 32 tables accessible
## Current Configuration
- **User:** `calypso`
- **Password Source:** `/etc/calypso/secrets.env` (CALYPSO_DB_PASSWORD)
- **Database Access:**
- ✅ Full access to `calypso` database
- ✅ Read-only access to `bacula` database
## Next Steps
1. ✅ Password sudah sync dengan secrets.env
2. ✅ Calypso API akan otomatis menggunakan password dari secrets.env
3. ⏭️ Test Calypso API connection untuk memastikan semuanya bekerja
## Important Notes
- Password sekarang sync dengan `/etc/calypso/secrets.env`
- Calypso API service akan otomatis load password dari file tersebut
- Tidak perlu set environment variable manual lagi
- Password di secrets.env adalah source of truth

View File

@@ -1,135 +0,0 @@
# Permissions Fix Complete
**Tanggal:** 2025-01-09
**Status:****FIXED**
## Problem
User `calypso` tidak memiliki permission untuk:
- Mengakses raw disk devices (`/dev/sd*`)
- Menjalankan ZFS commands (`zpool`, `zfs`)
- Membuat ZFS pools
Error yang muncul:
```
failed to create ZFS pool: cannot open '/dev/sdb': Permission denied
cannot create 'default': permission denied
```
## Solution Implemented
### 1. Group Membership ✅
User `calypso` ditambahkan ke groups:
- `disk` - Access to disk devices (`/dev/sd*`)
- `tape` - Access to tape devices
```bash
sudo usermod -aG disk,tape calypso
```
### 2. Sudoers Configuration ✅
File `/etc/sudoers.d/calypso` dibuat dengan permissions:
```sudoers
# ZFS Commands
calypso ALL=(ALL) NOPASSWD: /usr/sbin/zpool, /usr/sbin/zfs, /usr/bin/zpool, /usr/bin/zfs
# SCST Commands
calypso ALL=(ALL) NOPASSWD: /usr/sbin/scstadmin, /usr/bin/scstadmin
# Tape Utilities
calypso ALL=(ALL) NOPASSWD: /usr/bin/mtx, /usr/bin/mt, /usr/bin/sg_*, /usr/bin/sg3_utils/*
# System Monitoring
calypso ALL=(ALL) NOPASSWD: /usr/bin/systemctl status *, /usr/bin/systemctl is-active *, /usr/bin/journalctl -u *
```
### 3. Backend Code Updates ✅
**Helper Functions Added:**
```go
// zfsCommand executes a ZFS command with sudo
func zfsCommand(ctx context.Context, args ...string) *exec.Cmd {
return exec.CommandContext(ctx, "sudo", append([]string{"zfs"}, args...)...)
}
// zpoolCommand executes a ZPOOL command with sudo
func zpoolCommand(ctx context.Context, args ...string) *exec.Cmd {
return exec.CommandContext(ctx, "sudo", append([]string{"zpool"}, args...)...)
}
```
**All ZFS/ZPOOL Commands Updated:**
-`zpool create``zpoolCommand(ctx, "create", ...)`
-`zpool destroy``zpoolCommand(ctx, "destroy", ...)`
-`zpool list``zpoolCommand(ctx, "list", ...)`
-`zpool status``zpoolCommand(ctx, "status", ...)`
-`zfs create``zfsCommand(ctx, "create", ...)`
-`zfs destroy``zfsCommand(ctx, "destroy", ...)`
-`zfs set``zfsCommand(ctx, "set", ...)`
-`zfs get``zfsCommand(ctx, "get", ...)`
-`zfs list``zfsCommand(ctx, "list", ...)`
**Files Updated:**
-`backend/internal/storage/zfs.go` - All ZFS/ZPOOL commands
-`backend/internal/storage/zfs_pool_monitor.go` - Monitor commands
-`backend/internal/storage/disk.go` - Disk discovery commands
-`backend/internal/scst/service.go` - Already using sudo ✅
### 4. Service Restart ✅
Calypso API service telah di-restart dengan binary baru:
- ✅ Binary rebuilt dengan sudo support
- ✅ Service restarted
- ✅ Running successfully
## Verification
### Test ZFS Commands
```bash
# Test zpool list (should work)
sudo -u calypso sudo zpool list
# Output: no pools available (success - no error)
# Test zpool create/destroy (should work)
sudo -u calypso sudo zpool create -f test_pool /dev/sdb
sudo -u calypso sudo zpool destroy -f test_pool
# Should complete without permission errors
```
### Test Device Access
```bash
# Test device access (should work with disk group)
sudo -u calypso ls -la /dev/sdb
# Should show device (not permission denied)
```
## Current Status
**Groups:** User calypso in `disk` and `tape` groups
**Sudoers:** Configured and validated
**Backend Code:** All ZFS commands use sudo
**SCST:** Already using sudo (no changes needed)
**Service:** Restarted with new binary
**Permissions:** Fixed
## Next Steps
1. ✅ Permissions configured
2. ✅ Code updated
3. ✅ Service restarted
4. ⏭️ **Test ZFS pool creation via frontend**
## Testing
Sekarang user bisa test membuat ZFS pool via frontend:
1. Login ke portal: http://localhost/ atau http://10.10.14.18/
2. Navigate ke Storage → ZFS Pools
3. Create new pool dengan disks yang tersedia
4. Should work tanpa permission errors
---
**Status:****PERMISSIONS FIXED**
**Ready for:** ZFS pool creation via frontend

View File

@@ -1,82 +0,0 @@
# Permissions Fix Summary
**Tanggal:** 2025-01-09
**Status:****FIXED & VERIFIED**
## Problem Solved
User `calypso` sekarang memiliki permission yang cukup untuk:
- ✅ Mengakses raw disk devices (`/dev/sd*`)
- ✅ Menjalankan ZFS commands (`zpool`, `zfs`)
- ✅ Membuat dan menghapus ZFS pools
- ✅ Mengakses tape devices
- ✅ Menjalankan SCST commands
## Changes Made
### 1. System Groups ✅
```bash
sudo usermod -aG disk,tape calypso
```
### 2. Sudoers Configuration ✅
File: `/etc/sudoers.d/calypso`
- ZFS commands: `zpool`, `zfs`
- SCST commands: `scstadmin`
- Tape utilities: `mtx`, `mt`, `sg_*`
- System monitoring: `systemctl`, `journalctl`
### 3. Backend Code Updates ✅
- Added helper functions: `zfsCommand()`, `zpoolCommand()`
- All ZFS/ZPOOL commands now use `sudo`
- Updated files:
- `backend/internal/storage/zfs.go`
- `backend/internal/storage/zfs_pool_monitor.go`
- `backend/internal/storage/disk.go`
- `backend/internal/scst/service.go` (already had sudo)
### 4. Service Restart ✅
- Binary rebuilt with sudo support
- Service restarted successfully
## Verification
### Test Results
```bash
# ZFS commands work
sudo -u calypso sudo zpool list
# Output: no pools available (success)
# ZFS pool create/destroy works
sudo -u calypso sudo zpool create -f test_pool /dev/sdb
sudo -u calypso sudo zpool destroy -f test_pool
# Success: No permission errors
```
### Device Access
```bash
# Device access works
sudo -u calypso ls -la /dev/sdb
# Shows device (not permission denied)
```
## Current Status
**Groups:** calypso in `disk` and `tape` groups
**Sudoers:** Configured and validated
**Backend Code:** All privileged commands use sudo
**Service:** Running with new binary
**Permissions:** Fixed and verified
## Next Steps
1. ✅ Permissions fixed
2. ✅ Code updated
3. ✅ Service restarted
4. ✅ Verified working
5. ⏭️ **Test ZFS pool creation via frontend**
Sekarang user bisa membuat ZFS pool via frontend tanpa permission errors!
---
**Status:****READY FOR TESTING**

View File

@@ -1,117 +0,0 @@
# Calypso User Permissions Setup
**Tanggal:** 2025-01-09
**User:** `calypso`
**Status:****CONFIGURED**
## Problem
User `calypso` tidak memiliki permission yang cukup untuk:
- Mengakses raw disk devices (`/dev/sd*`)
- Menjalankan ZFS commands (`zpool`, `zfs`)
- Mengakses tape devices
- Menjalankan SCST commands
## Solution
### 1. Group Membership
User `calypso` telah ditambahkan ke groups berikut:
- `disk` - Access to disk devices
- `tape` - Access to tape devices
- `storage` - Storage-related permissions
```bash
sudo usermod -aG disk,tape,storage calypso
```
### 2. Sudoers Configuration
File `/etc/sudoers.d/calypso` telah dibuat dengan permissions berikut:
#### ZFS Commands
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/sbin/zpool, /usr/sbin/zfs, /usr/bin/zpool, /usr/bin/zfs
```
#### SCST Commands
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/sbin/scstadmin, /usr/bin/scstadmin
```
#### Tape Utilities
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/bin/mtx, /usr/bin/mt, /usr/bin/sg_*, /usr/bin/sg3_utils/*
```
#### System Monitoring
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/bin/systemctl status *, /usr/bin/systemctl is-active *, /usr/bin/journalctl -u *
```
## Verification
### Check Group Membership
```bash
groups calypso
# Output should include: disk tape storage
```
### Check Sudoers File
```bash
sudo visudo -c -f /etc/sudoers.d/calypso
# Should return: /etc/sudoers.d/calypso: parsed OK
```
### Test ZFS Access
```bash
sudo -u calypso zpool list
# Should work without errors
```
### Test Device Access
```bash
sudo -u calypso ls -la /dev/sdb
# Should show device permissions
```
## Backend Code Changes Needed
Backend code perlu menggunakan `sudo` untuk ZFS commands. Contoh:
```go
// Before (will fail with permission denied)
cmd := exec.CommandContext(ctx, "zpool", "create", ...)
// After (with sudo)
cmd := exec.CommandContext(ctx, "sudo", "zpool", "create", ...)
```
## Current Status
**Groups:** User calypso added to disk, tape, storage groups
**Sudoers:** Configuration file created and validated
**Permissions:** File permissions set to 0440 (secure)
⏭️ **Code Update:** Backend code needs to use `sudo` for privileged commands
## Next Steps
1. ✅ Groups configured
2. ✅ Sudoers configured
3. ⏭️ Update backend code to use `sudo` for:
- ZFS operations (`zpool`, `zfs`)
- SCST operations (`scstadmin`)
- Tape operations (`mtx`, `mt`, `sg_*`)
4. ⏭️ Restart Calypso API service
5. ⏭️ Test ZFS pool creation via frontend
## Important Notes
- Sudoers file uses `NOPASSWD` for convenience (service account)
- Only specific commands are allowed (security best practice)
- File permissions are 0440 (read-only for root and group)
- Service restart required after permission changes
---
**Status:****PERMISSIONS CONFIGURED**
**Action Required:** Update backend code to use `sudo` for privileged commands

View File

@@ -1,79 +0,0 @@
# Pool Delete Mountpoint Cleanup
## Issue
Ketika pool dihapus, mount point directory tidak dihapus dari sistem. Mount point directory tetap ada di `/opt/calypso/data/pool/<pool-name>` meskipun pool sudah di-destroy.
## Root Cause
Fungsi `DeletePool` tidak melakukan cleanup untuk mount point directory setelah pool di-destroy.
## Solution
Menambahkan kode untuk menghapus mount point directory setelah pool di-destroy.
## Changes Made
### Updated `backend/internal/storage/zfs.go`
**File**: `backend/internal/storage/zfs.go` (line 518-562)
Menambahkan cleanup untuk mount point directory setelah pool di-destroy:
**Before:**
```go
// Mark disks as unused
for _, diskPath := range pool.Disks {
// ...
}
// Delete from database
_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_pools WHERE id = $1", poolID)
// ...
```
**After:**
```go
// Remove mount point directory (default: /opt/calypso/data/pool/<pool-name>)
mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", pool.Name)
if err := os.RemoveAll(mountPoint); err != nil {
s.logger.Warn("Failed to remove mount point directory", "mountpoint", mountPoint, "error", err)
// Don't fail pool deletion if mount point removal fails
} else {
s.logger.Info("Removed mount point directory", "mountpoint", mountPoint)
}
// Mark disks as unused
for _, diskPath := range pool.Disks {
// ...
}
// Delete from database
_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_pools WHERE id = $1", poolID)
// ...
```
## Mount Point Location
Default mount point untuk semua pools adalah:
```
/opt/calypso/data/pool/<pool-name>/
```
## Behavior
1. Pool di-destroy dari ZFS system
2. Mount point directory dihapus dengan `os.RemoveAll()`
3. Disks ditandai sebagai unused di database
4. Pool dihapus dari database
## Error Handling
- Jika mount point removal gagal, hanya log warning
- Pool deletion tetap berhasil meskipun mount point removal gagal
- Ini memastikan bahwa pool deletion tidak gagal hanya karena mount point cleanup
## Testing
1. Create pool dengan nama "test-pool"
2. Verify mount point directory dibuat: `/opt/calypso/data/pool/test-pool/`
3. Delete pool
4. Verify mount point directory dihapus: `ls /opt/calypso/data/pool/test-pool` should fail
## Status
**FIXED** - Mount point directory sekarang dihapus saat pool di-delete
## Date
2026-01-09

View File

@@ -1,64 +0,0 @@
# Pool Refresh Fix
## Issue
UI tidak terupdate setelah klik tombol "Refresh Pools", meskipun pool ada di database dan sistem.
## Root Cause
Masalahnya ada di backend - field `created_by` di database bisa null, tapi di struct `ZFSPool` adalah `string` (bukan pointer atau `sql.NullString`). Saat scan, jika `created_by` null, scan akan gagal dan pool di-skip.
## Solution
Menggunakan `sql.NullString` untuk scan `created_by`, lalu assign ke string jika valid.
## Changes Made
### Updated `backend/internal/storage/zfs.go`
**File**: `backend/internal/storage/zfs.go` (line 425-442)
**Before:**
```go
var pool ZFSPool
var description sql.NullString
err := rows.Scan(
&pool.ID, &pool.Name, &description, &pool.RaidLevel, pq.Array(&pool.Disks),
&pool.SizeBytes, &pool.UsedBytes, &pool.Compression, &pool.Deduplication,
&pool.AutoExpand, &pool.ScrubInterval, &pool.IsActive, &pool.HealthStatus,
&pool.CreatedAt, &pool.UpdatedAt, &pool.CreatedBy, // Direct scan to string
)
```
**After:**
```go
var pool ZFSPool
var description sql.NullString
var createdBy sql.NullString
err := rows.Scan(
&pool.ID, &pool.Name, &description, &pool.RaidLevel, pq.Array(&pool.Disks),
&pool.SizeBytes, &pool.UsedBytes, &pool.Compression, &pool.Deduplication,
&pool.AutoExpand, &pool.ScrubInterval, &pool.IsActive, &pool.HealthStatus,
&pool.CreatedAt, &pool.UpdatedAt, &createdBy, // Scan to NullString
)
if err != nil {
s.logger.Error("Failed to scan pool row", "error", err, "error_type", fmt.Sprintf("%T", err))
continue
}
if createdBy.Valid {
pool.CreatedBy = createdBy.String
}
```
## Testing
1. Pool ada di database: `default-pool`
2. Pool ada di sistem ZFS: `zpool list` shows `default-pool`
3. API sekarang mengembalikan pool dengan benar
4. Frontend sudah di-deploy
## Status
**FIXED** - Backend sekarang mengembalikan pools dengan benar
## Next Steps
- Refresh browser untuk melihat perubahan
- Klik tombol "Refresh Pools" untuk manual refresh
- Pool seharusnya muncul di UI sekarang
## Date
2026-01-09

View File

@@ -1,72 +0,0 @@
# Rebuild and Restart Script
## Overview
Script untuk rebuild dan restart Calypso API + Frontend service secara otomatis.
## File
`/src/calypso/rebuild-and-restart.sh`
## Usage
### Basic Usage
```bash
cd /src/calypso
./rebuild-and-restart.sh
```
### Dengan sudo (jika diperlukan)
```bash
sudo /src/calypso/rebuild-and-restart.sh
```
## What It Does
### 1. Rebuild Backend
- Build Go binary dari `backend/cmd/calypso-api`
- Output ke `/opt/calypso/bin/calypso-api`
- Set permissions dan ownership ke `calypso:calypso`
### 2. Rebuild Frontend
- Install dependencies (jika diperlukan)
- Build frontend dengan `npm run build`
- Output ke `frontend/dist/`
### 3. Deploy Frontend
- Copy files dari `frontend/dist/` ke `/opt/calypso/web/`
- Set ownership ke `www-data:www-data`
### 4. Restart Services
- Restart `calypso-api.service`
- Reload Nginx (jika tersedia)
- Check service status
## Features
- ✅ Color-coded output untuk mudah dibaca
- ✅ Error handling dengan `set -e`
- ✅ Status checks setelah restart
- ✅ Informative progress messages
## Requirements
- Go installed (untuk backend build)
- Node.js dan npm installed (untuk frontend build)
- sudo access (untuk service management)
- Calypso project di `/src/calypso`
## Troubleshooting
### Backend build fails
- Check Go installation: `go version`
- Check Go modules: `cd backend && go mod download`
### Frontend build fails
- Check Node.js: `node --version`
- Check npm: `npm --version`
- Install dependencies: `cd frontend && npm install`
### Service restart fails
- Check service exists: `systemctl list-units | grep calypso`
- Check service status: `sudo systemctl status calypso-api.service`
- Check logs: `sudo journalctl -u calypso-api.service -n 50`
## Date
2026-01-09

View File

@@ -1,78 +0,0 @@
# Refresh Pools Button
## Issue
UI tidak update secara otomatis setelah create atau destroy pool. User meminta tombol refresh pools untuk manual refresh.
## Solution
Menambahkan tombol "Refresh Pools" yang melakukan refetch pools dari database, dan memperbaiki createPoolMutation untuk melakukan refetch dengan benar.
## Changes Made
### 1. Added Refresh Pools Button
**File**: `frontend/src/pages/Storage.tsx` (line 446-459)
Menambahkan tombol baru di antara "Rescan Disks" dan "Create Pool":
```typescript
<button
onClick={async () => {
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
}}
disabled={poolsLoading}
className="flex items-center gap-2 px-4 py-2 rounded-lg border border-border-dark bg-card-dark text-white text-sm font-bold hover:bg-[#233648] transition-colors disabled:opacity-50"
title="Refresh pools list from database"
>
<span className={`material-symbols-outlined text-[20px] ${poolsLoading ? 'animate-spin' : ''}`}>
sync
</span>
{poolsLoading ? 'Refreshing...' : 'Refresh Pools'}
</button>
```
**Features:**
- Icon `sync` dengan animasi spin saat loading
- Disabled saat pools sedang loading
- Tooltip: "Refresh pools list from database"
- Styling konsisten dengan tombol lainnya
### 2. Fixed createPoolMutation
**File**: `frontend/src/pages/Storage.tsx` (line 219-239)
Memperbaiki `createPoolMutation` untuk melakukan refetch dengan `await`:
```typescript
onSuccess: async () => {
// Invalidate and immediately refetch pools
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.invalidateQueries({ queryKey: ['storage', 'disks'] })
// ... rest of the code
alert('Pool created successfully!')
}
```
**Improvements:**
- Menambahkan `await` pada `refetchQueries` untuk memastikan refetch selesai
- Menambahkan success alert untuk feedback ke user
## Button Layout
Sekarang ada 3 tombol di header:
1. **Rescan Disks** - Rescan physical disks dari sistem
2. **Refresh Pools** - Refresh pools list dari database (NEW)
3. **Create Pool** - Create new ZFS pool
## Usage
User dapat klik tombol "Refresh Pools" kapan saja untuk:
- Manual refresh setelah create pool
- Manual refresh setelah destroy pool
- Manual refresh jika auto-refresh (3 detik) tidak cukup cepat
## Testing
1. Create pool → Klik "Refresh Pools" → Pool muncul
2. Destroy pool → Klik "Refresh Pools" → Pool hilang
3. Auto-refresh tetap berjalan setiap 3 detik
## Status
**COMPLETED** - Tombol Refresh Pools ditambahkan dan createPoolMutation diperbaiki
## Date
2026-01-09

View File

@@ -1,89 +0,0 @@
# Refresh Pools UX Improvement
## Issue
UI refresh update masih terlalu lama, sehingga user merasa command-nya gagal padahal sebenarnya tidak. User tidak mendapat feedback yang jelas bahwa proses sedang berjalan.
## Solution
Menambahkan loading state yang lebih jelas dan feedback visual yang lebih baik untuk memberikan indikasi bahwa proses refresh sedang berjalan.
## Changes Made
### 1. Added Loading State
**File**: `frontend/src/pages/Storage.tsx`
Menambahkan state untuk tracking manual refresh:
```typescript
const [isRefreshingPools, setIsRefreshingPools] = useState(false)
```
### 2. Improved Refresh Button
**File**: `frontend/src/pages/Storage.tsx` (line 446-465)
**Before:**
```typescript
<button
onClick={async () => {
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
}}
disabled={poolsLoading}
...
>
```
**After:**
```typescript
<button
onClick={async () => {
setIsRefreshingPools(true)
try {
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
// Small delay to show feedback
await new Promise(resolve => setTimeout(resolve, 300))
alert('Pools refreshed successfully!')
} catch (error) {
console.error('Failed to refresh pools:', error)
alert('Failed to refresh pools. Please try again.')
} finally {
setIsRefreshingPools(false)
}
}}
disabled={poolsLoading || isRefreshingPools}
className="... disabled:cursor-not-allowed"
...
>
<span className={`... ${(poolsLoading || isRefreshingPools) ? 'animate-spin' : ''}`}>
sync
</span>
{(poolsLoading || isRefreshingPools) ? 'Refreshing...' : 'Refresh Pools'}
</button>
```
## Improvements
### Visual Feedback
1. **Loading Spinner**: Icon `sync` berputar saat refresh
2. **Button Text**: Berubah menjadi "Refreshing..." saat loading
3. **Disabled State**: Button disabled dengan cursor `not-allowed` saat loading
4. **Success Alert**: Menampilkan alert setelah refresh selesai
5. **Error Handling**: Menampilkan alert jika refresh gagal
### User Experience
- User mendapat feedback visual yang jelas bahwa proses sedang berjalan
- User mendapat konfirmasi setelah refresh selesai
- User mendapat notifikasi jika terjadi error
- Button tidak bisa diklik berulang kali saat proses berjalan
## Testing
1. Klik "Refresh Pools"
2. Verify button menunjukkan loading state (spinner + "Refreshing...")
3. Verify button disabled saat loading
4. Verify success alert muncul setelah refresh selesai
5. Verify pools list ter-update
## Status
**COMPLETED** - UX improvement untuk refresh pools button
## Date
2026-01-09

View File

@@ -1,77 +0,0 @@
# Secrets Environment File Setup
**Tanggal:** 2025-01-09
**File:** `/etc/calypso/secrets.env`
**Status:****CREATED**
## File Details
- **Location:** `/etc/calypso/secrets.env`
- **Owner:** `root:root`
- **Permissions:** `600` (read/write owner only)
- **Size:** 413 bytes
## Contents
File berisi environment variables untuk Calypso:
1. **CALYPSO_DB_PASSWORD**
- Database password untuk user PostgreSQL `calypso`
- Value: `calypso_secure_2025`
- Length: 19 characters
2. **CALYPSO_JWT_SECRET**
- JWT secret key untuk authentication tokens
- Generated: Random base64 string (44 characters)
- Minimum requirement: 32 characters ✅
## Security
**Permissions:** `600` (read/write owner only)
**Owner:** `root:root`
**Location:** `/etc/calypso/` (protected directory)
**JWT Secret:** Random generated, secure
⚠️ **Note:** Password default perlu diubah untuk production
## Usage
File ini akan di-load oleh systemd service via `EnvironmentFile` directive:
```ini
[Service]
EnvironmentFile=/etc/calypso/secrets.env
```
Atau bisa di-source manual:
```bash
source /etc/calypso/secrets.env
export CALYPSO_DB_PASSWORD
export CALYPSO_JWT_SECRET
```
## Verification
File sudah diverifikasi:
- ✅ File exists
- ✅ Permissions correct (600)
- ✅ Owner correct (root:root)
- ✅ Variables dapat di-source dengan benar
- ✅ JWT secret length >= 32 characters
## Next Steps
1. ✅ File sudah siap digunakan
2. ⏭️ Calypso API service akan otomatis load file ini
3. ⏭️ Update password untuk production environment (recommended)
## Important Notes
⚠️ **DO NOT:**
- Commit file ini ke version control
- Share file ini publicly
- Use default password in production
**DO:**
- Keep file permissions at 600
- Rotate secrets periodically
- Use strong passwords in production
- Backup securely if needed

View File

@@ -1,229 +0,0 @@
# Calypso Systemd Service Setup
**Tanggal:** 2025-01-09
**Service:** `calypso-api.service`
**Status:****ACTIVE & RUNNING**
## Service File
**Location:** `/etc/systemd/system/calypso-api.service`
### Configuration
```ini
[Unit]
Description=AtlasOS - Calypso API Service
Documentation=https://github.com/atlasos/calypso
After=network.target postgresql.service
Wants=postgresql.service
[Service]
Type=simple
User=calypso
Group=calypso
WorkingDirectory=/opt/calypso
ExecStart=/opt/calypso/bin/calypso-api -config /opt/calypso/conf/config.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=calypso-api
# Environment
EnvironmentFile=/opt/calypso/conf/secrets.env
Environment="CALYPSO_DB_HOST=localhost"
Environment="CALYPSO_DB_PORT=5432"
Environment="CALYPSO_DB_USER=calypso"
Environment="CALYPSO_DB_NAME=calypso"
# Security
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/calypso/data /opt/calypso/conf /var/log/calypso /var/lib/calypso /run/calypso
ReadOnlyPaths=/opt/calypso/bin /opt/calypso/web /opt/calypso/releases
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
```
## Service Status
**Status:** Active (running)
**Enabled:** Yes (auto-start on boot)
**PID:** Running
**Memory:** ~12.4M
**Port:** 8080
## Service Management
### Start Service
```bash
sudo systemctl start calypso-api
```
### Stop Service
```bash
sudo systemctl stop calypso-api
```
### Restart Service
```bash
sudo systemctl restart calypso-api
```
### Reload Configuration (without restart)
```bash
sudo systemctl reload calypso-api
```
### Check Status
```bash
sudo systemctl status calypso-api
```
### Enable/Disable Auto-start
```bash
# Enable auto-start on boot
sudo systemctl enable calypso-api
# Disable auto-start
sudo systemctl disable calypso-api
# Check if enabled
sudo systemctl is-enabled calypso-api
```
## Viewing Logs
### Real-time Logs (Follow Mode)
```bash
sudo journalctl -u calypso-api -f
```
### Last 50 Lines
```bash
sudo journalctl -u calypso-api -n 50
```
### Logs Since Today
```bash
sudo journalctl -u calypso-api --since today
```
### Logs with Timestamps
```bash
sudo journalctl -u calypso-api --no-pager
```
## Service Configuration Details
### Working Directory
- **Path:** `/opt/calypso`
- **Purpose:** Base directory for application
### Binary Location
- **Path:** `/opt/calypso/bin/calypso-api`
- **Config:** `/opt/calypso/conf/config.yaml`
### Environment Variables
- **Secrets File:** `/opt/calypso/conf/secrets.env`
- `CALYPSO_DB_PASSWORD` - Database password
- `CALYPSO_JWT_SECRET` - JWT secret key
- **Database Config:**
- `CALYPSO_DB_HOST=localhost`
- `CALYPSO_DB_PORT=5432`
- `CALYPSO_DB_USER=calypso`
- `CALYPSO_DB_NAME=calypso`
### Security Settings
-**NoNewPrivileges:** Prevents privilege escalation
-**PrivateTmp:** Isolated temporary directory
-**ProtectSystem:** Read-only system directories
-**ProtectHome:** Read-only home directories
-**ReadWritePaths:** Only specific paths writable
-**ReadOnlyPaths:** Application binaries read-only
### Resource Limits
- **Max Open Files:** 65536
- **Max Processes:** 4096
## Runtime Directories
- **Logs:** `/var/log/calypso/` (calypso:calypso)
- **Data:** `/var/lib/calypso/` (calypso:calypso)
- **Runtime:** `/run/calypso/` (calypso:calypso)
## Service Verification
### Check Service Status
```bash
sudo systemctl is-active calypso-api
# Output: active
```
### Check HTTP Endpoint
```bash
curl http://localhost:8080/api/v1/health
```
### Check Process
```bash
ps aux | grep calypso-api
```
### Check Port
```bash
sudo netstat -tlnp | grep 8080
# or
sudo ss -tlnp | grep 8080
```
## Startup Logs Analysis
From initial startup logs:
- ✅ Database connection successful
- ✅ Connected to Bacula database
- ✅ HTTP server started on port 8080
- ✅ MHVTL configuration sync completed
- ✅ Disk discovery completed (5 disks)
- ✅ Alert rules registered
- ✅ Monitoring services started
- ⚠️ Warning: RRD tool not found (network monitoring optional)
## Troubleshooting
### Service Won't Start
1. Check logs: `sudo journalctl -u calypso-api -n 50`
2. Check config file: `cat /opt/calypso/conf/config.yaml`
3. Check secrets file permissions: `ls -la /opt/calypso/conf/secrets.env`
4. Check database connection: `sudo -u postgres psql -U calypso -d calypso`
### Service Crashes/Restarts
1. Check logs for errors: `sudo journalctl -u calypso-api --since "10 minutes ago"`
2. Check system resources: `free -h` and `df -h`
3. Check database status: `sudo systemctl status postgresql`
### Permission Issues
1. Check ownership: `ls -la /opt/calypso/bin/calypso-api`
2. Check user exists: `id calypso`
3. Check directory permissions: `ls -la /opt/calypso/`
## Next Steps
1. ✅ Service installed and running
2. ⏭️ Setup reverse proxy (Caddy/Nginx) for frontend
3. ⏭️ Configure firewall rules (if needed)
4. ⏭️ Setup SSL/TLS certificates
5. ⏭️ Configure monitoring/alerting
---
**Service Status:****OPERATIONAL**
**API Endpoint:** `http://localhost:8080`
**Health Check:** `http://localhost:8080/api/v1/health`

View File

@@ -1,59 +0,0 @@
# ZFS Pool Mountpoint Fix
## Issue
ZFS pool creation was failing with error:
```
cannot mount '/default': failed to create mountpoint: Read-only file system
```
The issue was that ZFS was trying to mount pools to the root filesystem (`/default`), which is read-only.
## Solution
Updated the ZFS pool creation code to set a default mountpoint to `/opt/calypso/data/pool/<pool-name>` for all pools.
## Changes Made
### 1. Updated `backend/internal/storage/zfs.go`
- Added mountpoint configuration during pool creation using `-m` flag
- Set default mountpoint to `/opt/calypso/data/pool/<pool-name>`
- Added code to create the mountpoint directory before pool creation
- Added logging for mountpoint creation
**Key Changes:**
```go
// Set default mountpoint to /opt/calypso/data/pool/<pool-name>
mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", name)
args = append(args, "-m", mountPoint)
// Create mountpoint directory if it doesn't exist
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mountpoint directory %s: %w", mountPoint, err)
}
```
### 2. Directory Setup
- Created `/opt/calypso/data/pool` directory
- Set ownership to `calypso:calypso`
- Set permissions to `0755`
## Default Mountpoint Structure
All ZFS pools will now be mounted under:
```
/opt/calypso/data/pool/
├── pool-name-1/
├── pool-name-2/
└── ...
```
## Testing
1. Backend rebuilt successfully
2. Service restarted successfully
3. Ready to test pool creation from frontend
## Next Steps
- Test pool creation from the frontend UI
- Verify that pools are mounted correctly at `/opt/calypso/data/pool/<pool-name>`
- Ensure proper permissions for pool mountpoints
## Date
2026-01-09

View File

@@ -1,44 +0,0 @@
# ZFS Pool Delete UI Update Fix
## Issue
When a ZFS pool is destroyed, the pool is removed from the system and database, but the UI doesn't update immediately to reflect the deletion.
## Root Cause
The frontend `deletePoolMutation` was not properly awaiting the refetch operation, which could cause race conditions where the UI doesn't update before the alert is shown.
## Solution
Added `await` to `refetchQueries` to ensure the query is refetched before showing the success alert.
## Changes Made
### Updated `frontend/src/pages/Storage.tsx`
- Added `await` to `refetchQueries` call in `deletePoolMutation.onSuccess`
- This ensures the pool list is refetched from the server before showing the success message
**Key Changes:**
```typescript
onSuccess: async () => {
// Invalidate and immediately refetch
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] }) // Added await
await queryClient.invalidateQueries({ queryKey: ['storage', 'disks'] })
setSelectedPool(null)
alert('Pool destroyed successfully!')
},
```
## Additional Notes
- The frontend already has `refetchInterval: 3000` (3 seconds) for automatic pool list refresh
- Backend properly deletes pool from database in `DeletePool` function
- ZFS Pool Monitor syncs pools every 2 minutes to catch manually deleted pools
## Testing
1. Destroy pool through UI
2. Verify pool disappears from UI immediately
3. Verify success alert is shown after UI update
## Status
**FIXED** - Pool deletion now properly updates UI
## Date
2026-01-09

View File

@@ -1,40 +0,0 @@
# ZFS Pool UI Display Fix
## Issue
ZFS pool was successfully created in the system and database, but it was not appearing in the UI. The API was returning `{"pools": null}` even though the pool existed in the database.
## Root Cause
The issue was likely related to:
1. Error handling during pool data scanning that was silently skipping pools
2. Missing debug logging to identify scan failures
## Solution
Added debug logging to identify scan failures and ensure pools are properly scanned from the database.
## Changes Made
### Updated `backend/internal/storage/zfs.go`
- Added debug logging after successful pool row scan
- This helps identify if pools are being skipped during scan
**Key Changes:**
```go
// Added debug logging after scan
s.logger.Debug("Scanned pool row", "pool_id", pool.ID, "name", pool.Name)
```
## Testing
1. Pool "default" now appears correctly in API response
2. API returns pool data with all fields populated:
- id, name, description
- raid_level, disks, spare_disks
- size_bytes, used_bytes
- compression, deduplication, auto_expand
- health_status, compress_ratio
- created_at, updated_at, created_by
## Status
**FIXED** - Pool now appears correctly in UI
## Date
2026-01-09

Binary file not shown.

Binary file not shown.

View File

@@ -65,13 +65,12 @@ func main() {
r := router.NewRouter(cfg, db, logger)
// Create HTTP server
// Note: WriteTimeout should be 0 for WebSocket connections (they handle their own timeouts)
srv := &http.Server{
Addr: fmt.Sprintf(":%d", cfg.Server.Port),
Handler: r,
ReadTimeout: 15 * time.Second,
WriteTimeout: 0, // 0 means no timeout - needed for WebSocket connections
IdleTimeout: 120 * time.Second, // Increased for WebSocket keep-alive
WriteTimeout: 15 * time.Second,
IdleTimeout: 60 * time.Second,
}
// Setup graceful shutdown

View File

@@ -5,19 +5,15 @@ go 1.24.0
toolchain go1.24.11
require (
github.com/creack/pty v1.1.24
github.com/gin-gonic/gin v1.10.0
github.com/go-playground/validator/v10 v10.20.0
github.com/golang-jwt/jwt/v5 v5.2.1
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/lib/pq v1.10.9
github.com/minio/madmin-go/v3 v3.0.110
github.com/minio/minio-go/v7 v7.0.97
github.com/stretchr/testify v1.11.1
go.uber.org/zap v1.27.0
golang.org/x/crypto v0.37.0
golang.org/x/sync v0.15.0
golang.org/x/crypto v0.23.0
golang.org/x/sync v0.7.0
golang.org/x/time v0.14.0
gopkg.in/yaml.v3 v3.0.1
)
@@ -25,57 +21,29 @@ require (
require (
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudwego/base64x v0.1.4 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.11 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/minio/crc64nvme v1.1.0 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.63.0 // indirect
github.com/prometheus/procfs v0.16.0 // indirect
github.com/prometheus/prom2json v1.4.2 // indirect
github.com/prometheus/prometheus v0.303.0 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/safchain/ethtool v0.5.10 // indirect
github.com/secure-io/sio-go v0.3.1 // indirect
github.com/shirou/gopsutil/v3 v3.24.5 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/tinylib/msgp v1.3.0 // indirect
github.com/tklauser/go-sysconf v0.3.15 // indirect
github.com/tklauser/numcpus v0.10.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/multierr v1.10.0 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/net v0.39.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/text v0.26.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
)

View File

@@ -2,31 +2,19 @@ github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg=
github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -35,17 +23,12 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -53,75 +36,25 @@ github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aN
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.11 h1:0OwqZRYI2rFrjS4kvkDnqJkKHdHaRnCm68/DY4OxRzU=
github.com/klauspost/cpuid/v2 v2.2.11/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 h1:PpXWgLPs+Fqr325bN2FD2ISlRRztXibcX6e8f5FR5Dc=
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35/go.mod h1:autxFIvghDt3jPTLoqZ9OZ7s9qTGNAWmYCjVFWPX/zg=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/minio/crc64nvme v1.1.0 h1:e/tAguZ+4cw32D+IO/8GSf5UVr9y+3eJcxZI2WOO/7Q=
github.com/minio/crc64nvme v1.1.0/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/madmin-go/v3 v3.0.110 h1:FIYekj7YPc430ffpXFWiUtyut3qBt/unIAcDzJn9H5M=
github.com/minio/madmin-go/v3 v3.0.110/go.mod h1:WOe2kYmYl1OIlY2DSRHVQ8j1v4OItARQ6jGyQqcCud8=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.97 h1:lqhREPyfgHTB/ciX8k2r8k0D93WaFqxbJX36UZq5occ=
github.com/minio/minio-go/v7 v7.0.97/go.mod h1:re5VXuo0pwEtoNLsNuSr0RrLfT/MBtohwdaSmPPSRSk=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.63.0 h1:YR/EIY1o3mEFP/kZCD7iDMnLPlGyuU2Gb3HIcXnA98k=
github.com/prometheus/common v0.63.0/go.mod h1:VVFF/fBIoToEnWRVkYoXEkq3R3paCoxG9PXP74SnV18=
github.com/prometheus/procfs v0.16.0 h1:xh6oHhKwnOJKMYiYBDWmkHqQPyiY40sny36Cmx2bbsM=
github.com/prometheus/procfs v0.16.0/go.mod h1:8veyXUu3nGP7oaCxhX6yeaM5u4stL2FeMXnCqhDthZg=
github.com/prometheus/prom2json v1.4.2 h1:PxCTM+Whqi/eykO1MKsEL0p/zMpxp9ybpsmdFamw6po=
github.com/prometheus/prom2json v1.4.2/go.mod h1:zuvPm7u3epZSbXPWHny6G+o8ETgu6eAK3oPr6yFkRWE=
github.com/prometheus/prometheus v0.303.0 h1:wsNNsbd4EycMCphYnTmNY9JASBVbp7NWwJna857cGpA=
github.com/prometheus/prometheus v0.303.0/go.mod h1:8PMRi+Fk1WzopMDeb0/6hbNs9nV6zgySkU/zds5Lu3o=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/safchain/ethtool v0.5.10 h1:Im294gZtuf4pSGJRAOGKaASNi3wMeFaGaWuSaomedpc=
github.com/safchain/ethtool v0.5.10/go.mod h1:w9jh2Lx7YBR4UwzLkzCmWl85UY0W2uZdd7/DckVE5+c=
github.com/secure-io/sio-go v0.3.1 h1:dNvY9awjabXTYGsTF1PiCySl9Ltofk9GA3VdWlo7rRc=
github.com/secure-io/sio-go v0.3.1/go.mod h1:+xbkjDzPjwh4Axd07pRKSNriS9SCiYksWnZqdnfpQxs=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -135,57 +68,39 @@ github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXl
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tinylib/msgp v1.3.0 h1:ULuf7GPooDaIlbyvgAxBV/FI7ynli6LZ1/nVUNu+0ww=
github.com/tinylib/msgp v1.3.0/go.mod h1:ykjzy2wzgrlvpDCRc4LA8UXy6D8bzMSuAF3WD57Gok0=
github.com/tklauser/go-sysconf v0.3.15 h1:VE89k0criAymJ/Os65CSn1IXaol+1wrsFHEB8Ol49K4=
github.com/tklauser/go-sysconf v0.3.15/go.mod h1:Dmjwr6tYFIseJw7a3dRLJfsHAMXZ3nEnL/aZY+0IuI4=
github.com/tklauser/numcpus v0.10.0 h1:18njr6LDBk1zuna922MgdjQuJFjrdppsZG60sHGfjso=
github.com/tklauser/numcpus v0.10.0/go.mod h1:BiTKazU708GQTYF4mB+cmlpT2Is1gLk7XVuEeem8LsQ=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -1,383 +0,0 @@
package backup
import (
"fmt"
"net/http"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/gin-gonic/gin"
)
// Handler handles backup-related API requests
type Handler struct {
service *Service
logger *logger.Logger
}
// NewHandler creates a new backup handler
func NewHandler(service *Service, log *logger.Logger) *Handler {
return &Handler{
service: service,
logger: log,
}
}
// ListJobs lists backup jobs with optional filters
func (h *Handler) ListJobs(c *gin.Context) {
opts := ListJobsOptions{
Status: c.Query("status"),
JobType: c.Query("job_type"),
ClientName: c.Query("client_name"),
JobName: c.Query("job_name"),
}
// Parse pagination
var limit, offset int
if limitStr := c.Query("limit"); limitStr != "" {
if _, err := fmt.Sscanf(limitStr, "%d", &limit); err == nil {
opts.Limit = limit
}
}
if offsetStr := c.Query("offset"); offsetStr != "" {
if _, err := fmt.Sscanf(offsetStr, "%d", &offset); err == nil {
opts.Offset = offset
}
}
jobs, totalCount, err := h.service.ListJobs(c.Request.Context(), opts)
if err != nil {
h.logger.Error("Failed to list jobs", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list jobs"})
return
}
if jobs == nil {
jobs = []Job{}
}
c.JSON(http.StatusOK, gin.H{
"jobs": jobs,
"total": totalCount,
"limit": opts.Limit,
"offset": opts.Offset,
})
}
// GetJob retrieves a job by ID
func (h *Handler) GetJob(c *gin.Context) {
id := c.Param("id")
job, err := h.service.GetJob(c.Request.Context(), id)
if err != nil {
if err.Error() == "job not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "job not found"})
return
}
h.logger.Error("Failed to get job", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get job"})
return
}
c.JSON(http.StatusOK, job)
}
// CreateJob creates a new backup job
func (h *Handler) CreateJob(c *gin.Context) {
var req CreateJobRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
// Validate job type
validJobTypes := map[string]bool{
"Backup": true, "Restore": true, "Verify": true, "Copy": true, "Migrate": true,
}
if !validJobTypes[req.JobType] {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid job_type"})
return
}
// Validate job level
validJobLevels := map[string]bool{
"Full": true, "Incremental": true, "Differential": true, "Since": true,
}
if !validJobLevels[req.JobLevel] {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid job_level"})
return
}
job, err := h.service.CreateJob(c.Request.Context(), req)
if err != nil {
h.logger.Error("Failed to create job", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create job"})
return
}
c.JSON(http.StatusCreated, job)
}
// ExecuteBconsoleCommand executes a bconsole command
func (h *Handler) ExecuteBconsoleCommand(c *gin.Context) {
var req struct {
Command string `json:"command" binding:"required"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "command is required"})
return
}
output, err := h.service.ExecuteBconsoleCommand(c.Request.Context(), req.Command)
if err != nil {
h.logger.Error("Failed to execute bconsole command", "error", err, "command", req.Command)
c.JSON(http.StatusInternalServerError, gin.H{
"error": "failed to execute command",
"output": output,
"details": err.Error(),
})
return
}
c.JSON(http.StatusOK, gin.H{
"output": output,
})
}
// ListClients lists all backup clients with optional filters
func (h *Handler) ListClients(c *gin.Context) {
opts := ListClientsOptions{}
// Parse enabled filter
if enabledStr := c.Query("enabled"); enabledStr != "" {
enabled := enabledStr == "true"
opts.Enabled = &enabled
}
// Parse search query
opts.Search = c.Query("search")
clients, err := h.service.ListClients(c.Request.Context(), opts)
if err != nil {
h.logger.Error("Failed to list clients", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{
"error": "failed to list clients",
"details": err.Error(),
})
return
}
if clients == nil {
clients = []Client{}
}
c.JSON(http.StatusOK, gin.H{
"clients": clients,
"total": len(clients),
})
}
// GetDashboardStats returns dashboard statistics
func (h *Handler) GetDashboardStats(c *gin.Context) {
stats, err := h.service.GetDashboardStats(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get dashboard stats", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get dashboard stats"})
return
}
c.JSON(http.StatusOK, stats)
}
// ListStoragePools lists all storage pools
func (h *Handler) ListStoragePools(c *gin.Context) {
pools, err := h.service.ListStoragePools(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list storage pools", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage pools"})
return
}
if pools == nil {
pools = []StoragePool{}
}
h.logger.Info("Listed storage pools", "count", len(pools))
c.JSON(http.StatusOK, gin.H{
"pools": pools,
"total": len(pools),
})
}
// ListStorageVolumes lists all storage volumes
func (h *Handler) ListStorageVolumes(c *gin.Context) {
poolName := c.Query("pool_name")
volumes, err := h.service.ListStorageVolumes(c.Request.Context(), poolName)
if err != nil {
h.logger.Error("Failed to list storage volumes", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage volumes"})
return
}
if volumes == nil {
volumes = []StorageVolume{}
}
c.JSON(http.StatusOK, gin.H{
"volumes": volumes,
"total": len(volumes),
})
}
// ListStorageDaemons lists all storage daemons
func (h *Handler) ListStorageDaemons(c *gin.Context) {
daemons, err := h.service.ListStorageDaemons(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list storage daemons", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage daemons"})
return
}
if daemons == nil {
daemons = []StorageDaemon{}
}
c.JSON(http.StatusOK, gin.H{
"daemons": daemons,
"total": len(daemons),
})
}
// CreateStoragePool creates a new storage pool
func (h *Handler) CreateStoragePool(c *gin.Context) {
var req CreatePoolRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
pool, err := h.service.CreateStoragePool(c.Request.Context(), req)
if err != nil {
h.logger.Error("Failed to create storage pool", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, pool)
}
// DeleteStoragePool deletes a storage pool
func (h *Handler) DeleteStoragePool(c *gin.Context) {
idStr := c.Param("id")
if idStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "pool ID is required"})
return
}
var poolID int
if _, err := fmt.Sscanf(idStr, "%d", &poolID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pool ID"})
return
}
err := h.service.DeleteStoragePool(c.Request.Context(), poolID)
if err != nil {
h.logger.Error("Failed to delete storage pool", "error", err, "pool_id", poolID)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "pool deleted successfully"})
}
// CreateStorageVolume creates a new storage volume
func (h *Handler) CreateStorageVolume(c *gin.Context) {
var req CreateVolumeRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
volume, err := h.service.CreateStorageVolume(c.Request.Context(), req)
if err != nil {
h.logger.Error("Failed to create storage volume", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, volume)
}
// UpdateStorageVolume updates a storage volume
func (h *Handler) UpdateStorageVolume(c *gin.Context) {
idStr := c.Param("id")
if idStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "volume ID is required"})
return
}
var volumeID int
if _, err := fmt.Sscanf(idStr, "%d", &volumeID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid volume ID"})
return
}
var req UpdateVolumeRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
volume, err := h.service.UpdateStorageVolume(c.Request.Context(), volumeID, req)
if err != nil {
h.logger.Error("Failed to update storage volume", "error", err, "volume_id", volumeID)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, volume)
}
// DeleteStorageVolume deletes a storage volume
func (h *Handler) DeleteStorageVolume(c *gin.Context) {
idStr := c.Param("id")
if idStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "volume ID is required"})
return
}
var volumeID int
if _, err := fmt.Sscanf(idStr, "%d", &volumeID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid volume ID"})
return
}
err := h.service.DeleteStorageVolume(c.Request.Context(), volumeID)
if err != nil {
h.logger.Error("Failed to delete storage volume", "error", err, "volume_id", volumeID)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "volume deleted successfully"})
}
// ListMedia lists all media from bconsole "list media" command
func (h *Handler) ListMedia(c *gin.Context) {
media, err := h.service.ListMedia(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list media", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
if media == nil {
media = []Media{}
}
h.logger.Info("Listed media", "count", len(media))
c.JSON(http.StatusOK, gin.H{
"media": media,
"total": len(media),
})
}

File diff suppressed because it is too large Load Diff

View File

@@ -10,12 +10,11 @@ import (
// Config represents the application configuration
type Config struct {
Server ServerConfig `yaml:"server"`
Database DatabaseConfig `yaml:"database"`
Auth AuthConfig `yaml:"auth"`
Logging LoggingConfig `yaml:"logging"`
Security SecurityConfig `yaml:"security"`
ObjectStorage ObjectStorageConfig `yaml:"object_storage"`
Server ServerConfig `yaml:"server"`
Database DatabaseConfig `yaml:"database"`
Auth AuthConfig `yaml:"auth"`
Logging LoggingConfig `yaml:"logging"`
Security SecurityConfig `yaml:"security"`
}
// ServerConfig holds HTTP server configuration
@@ -97,14 +96,6 @@ type SecurityHeadersConfig struct {
Enabled bool `yaml:"enabled"`
}
// ObjectStorageConfig holds MinIO configuration
type ObjectStorageConfig struct {
Endpoint string `yaml:"endpoint"`
AccessKey string `yaml:"access_key"`
SecretKey string `yaml:"secret_key"`
UseSSL bool `yaml:"use_ssl"`
}
// Load reads configuration from file and environment variables
func Load(path string) (*Config, error) {
cfg := DefaultConfig()

View File

@@ -59,7 +59,7 @@ func RunMigrations(ctx context.Context, db *DB) error {
if _, err := tx.ExecContext(ctx, string(sql)); err != nil {
tx.Rollback()
return fmt.Errorf("failed to execute migration %d: %w", migration.Version, err)
return fmt.Errorf("failed to execute migration %s: %w", migration.Version, err)
}
// Record migration
@@ -68,11 +68,11 @@ func RunMigrations(ctx context.Context, db *DB) error {
migration.Version,
); err != nil {
tx.Rollback()
return fmt.Errorf("failed to record migration %d: %w", migration.Version, err)
return fmt.Errorf("failed to record migration %s: %w", migration.Version, err)
}
if err := tx.Commit(); err != nil {
return fmt.Errorf("failed to commit migration %d: %w", migration.Version, err)
return fmt.Errorf("failed to commit migration %s: %w", migration.Version, err)
}
log.Info("Migration applied successfully", "version", migration.Version)

View File

@@ -1,22 +0,0 @@
-- Migration: Object Storage Configuration
-- Description: Creates table for storing MinIO object storage configuration
-- Date: 2026-01-09
CREATE TABLE IF NOT EXISTS object_storage_config (
id SERIAL PRIMARY KEY,
dataset_path VARCHAR(255) NOT NULL UNIQUE,
mount_point VARCHAR(512) NOT NULL,
pool_name VARCHAR(255) NOT NULL,
dataset_name VARCHAR(255) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_object_storage_config_pool_name ON object_storage_config(pool_name);
CREATE INDEX IF NOT EXISTS idx_object_storage_config_updated_at ON object_storage_config(updated_at);
COMMENT ON TABLE object_storage_config IS 'Stores MinIO object storage configuration, linking to ZFS datasets';
COMMENT ON COLUMN object_storage_config.dataset_path IS 'Full ZFS dataset path (e.g., pool/dataset)';
COMMENT ON COLUMN object_storage_config.mount_point IS 'Mount point path for the dataset';
COMMENT ON COLUMN object_storage_config.pool_name IS 'ZFS pool name';
COMMENT ON COLUMN object_storage_config.dataset_name IS 'ZFS dataset name';

View File

@@ -1,3 +0,0 @@
-- Add vendor column to virtual_tape_libraries table
ALTER TABLE virtual_tape_libraries ADD COLUMN IF NOT EXISTS vendor VARCHAR(255);

View File

@@ -1,45 +0,0 @@
-- Add user groups feature
-- Groups table
CREATE TABLE IF NOT EXISTS groups (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(255) NOT NULL UNIQUE,
description TEXT,
is_system BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- User groups junction table
CREATE TABLE IF NOT EXISTS user_groups (
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
group_id UUID NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
assigned_at TIMESTAMP NOT NULL DEFAULT NOW(),
assigned_by UUID REFERENCES users(id),
PRIMARY KEY (user_id, group_id)
);
-- Group roles junction table (groups can have roles)
CREATE TABLE IF NOT EXISTS group_roles (
group_id UUID NOT NULL REFERENCES groups(id) ON DELETE CASCADE,
role_id UUID NOT NULL REFERENCES roles(id) ON DELETE CASCADE,
granted_at TIMESTAMP NOT NULL DEFAULT NOW(),
PRIMARY KEY (group_id, role_id)
);
-- Indexes
CREATE INDEX IF NOT EXISTS idx_groups_name ON groups(name);
CREATE INDEX IF NOT EXISTS idx_user_groups_user_id ON user_groups(user_id);
CREATE INDEX IF NOT EXISTS idx_user_groups_group_id ON user_groups(group_id);
CREATE INDEX IF NOT EXISTS idx_group_roles_group_id ON group_roles(group_id);
CREATE INDEX IF NOT EXISTS idx_group_roles_role_id ON group_roles(role_id);
-- Insert default system groups
INSERT INTO groups (name, description, is_system) VALUES
('wheel', 'System administrators group', true),
('operators', 'System operators group', true),
('backup', 'Backup operators group', true),
('auditors', 'Auditors group', true),
('storage_admins', 'Storage administrators group', true),
('services', 'Service accounts group', true)
ON CONFLICT (name) DO NOTHING;

View File

@@ -1,34 +0,0 @@
-- AtlasOS - Calypso
-- Backup Jobs Schema
-- Version: 9.0
-- Backup jobs table
CREATE TABLE IF NOT EXISTS backup_jobs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
job_id INTEGER NOT NULL UNIQUE, -- Bareos job ID
job_name VARCHAR(255) NOT NULL,
client_name VARCHAR(255) NOT NULL,
job_type VARCHAR(50) NOT NULL, -- 'Backup', 'Restore', 'Verify', 'Copy', 'Migrate'
job_level VARCHAR(50) NOT NULL, -- 'Full', 'Incremental', 'Differential', 'Since'
status VARCHAR(50) NOT NULL, -- 'Running', 'Completed', 'Failed', 'Canceled', 'Waiting'
bytes_written BIGINT NOT NULL DEFAULT 0,
files_written INTEGER NOT NULL DEFAULT 0,
duration_seconds INTEGER,
started_at TIMESTAMP,
ended_at TIMESTAMP,
error_message TEXT,
storage_name VARCHAR(255),
pool_name VARCHAR(255),
volume_name VARCHAR(255),
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_id ON backup_jobs(job_id);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_name ON backup_jobs(job_name);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_client_name ON backup_jobs(client_name);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_status ON backup_jobs(status);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_started_at ON backup_jobs(started_at DESC);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_type ON backup_jobs(job_type);

View File

@@ -1,39 +0,0 @@
-- AtlasOS - Calypso
-- Add Backup Permissions
-- Version: 10.0
-- Insert backup permissions
INSERT INTO permissions (name, resource, action, description) VALUES
('backup:read', 'backup', 'read', 'View backup jobs and history'),
('backup:write', 'backup', 'write', 'Create and manage backup jobs'),
('backup:manage', 'backup', 'manage', 'Full backup management')
ON CONFLICT (name) DO NOTHING;
-- Assign backup permissions to roles
-- Admin gets all backup permissions (explicitly assign since admin query in 001 only runs once)
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'admin'
AND p.resource = 'backup'
ON CONFLICT DO NOTHING;
-- Operator gets read and write permissions for backup
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'operator'
AND p.resource = 'backup'
AND p.action IN ('read', 'write')
ON CONFLICT DO NOTHING;
-- ReadOnly gets only read permission for backup
INSERT INTO role_permissions (role_id, permission_id)
SELECT r.id, p.id
FROM roles r, permissions p
WHERE r.name = 'readonly'
AND p.resource = 'backup'
AND p.action = 'read'
ON CONFLICT DO NOTHING;

View File

@@ -1,209 +0,0 @@
-- AtlasOS - Calypso
-- PostgreSQL Function to Sync Jobs from Bacula to Calypso
-- Version: 11.0
--
-- This function syncs jobs from Bacula database (Job table) to Calypso database (backup_jobs table)
-- Uses dblink extension to query Bacula database from Calypso database
--
-- Prerequisites:
-- 1. dblink extension must be installed: CREATE EXTENSION IF NOT EXISTS dblink;
-- 2. User must have access to both databases
-- 3. Connection parameters must be configured in the function
-- Create function to sync jobs from Bacula to Calypso
CREATE OR REPLACE FUNCTION sync_bacula_jobs(
bacula_db_name TEXT DEFAULT 'bacula',
bacula_host TEXT DEFAULT 'localhost',
bacula_port INTEGER DEFAULT 5432,
bacula_user TEXT DEFAULT 'calypso',
bacula_password TEXT DEFAULT ''
)
RETURNS TABLE(
jobs_synced INTEGER,
jobs_inserted INTEGER,
jobs_updated INTEGER,
errors INTEGER
) AS $$
DECLARE
conn_str TEXT;
jobs_count INTEGER := 0;
inserted_count INTEGER := 0;
updated_count INTEGER := 0;
error_count INTEGER := 0;
job_record RECORD;
BEGIN
-- Build dblink connection string
conn_str := format(
'dbname=%s host=%s port=%s user=%s password=%s',
bacula_db_name,
bacula_host,
bacula_port,
bacula_user,
bacula_password
);
-- Query jobs from Bacula database using dblink
FOR job_record IN
SELECT * FROM dblink(
conn_str,
$QUERY$
SELECT
j.JobId,
j.Name as job_name,
COALESCE(c.Name, 'unknown') as client_name,
CASE
WHEN j.Type = 'B' THEN 'Backup'
WHEN j.Type = 'R' THEN 'Restore'
WHEN j.Type = 'V' THEN 'Verify'
WHEN j.Type = 'C' THEN 'Copy'
WHEN j.Type = 'M' THEN 'Migrate'
ELSE 'Backup'
END as job_type,
CASE
WHEN j.Level = 'F' THEN 'Full'
WHEN j.Level = 'I' THEN 'Incremental'
WHEN j.Level = 'D' THEN 'Differential'
WHEN j.Level = 'S' THEN 'Since'
ELSE 'Full'
END as job_level,
CASE
WHEN j.JobStatus = 'T' THEN 'Running'
WHEN j.JobStatus = 'C' THEN 'Completed'
WHEN j.JobStatus = 'f' OR j.JobStatus = 'F' THEN 'Failed'
WHEN j.JobStatus = 'A' THEN 'Canceled'
WHEN j.JobStatus = 'W' THEN 'Waiting'
ELSE 'Waiting'
END as status,
COALESCE(j.JobBytes, 0) as bytes_written,
COALESCE(j.JobFiles, 0) as files_written,
j.StartTime as started_at,
j.EndTime as ended_at,
CASE
WHEN j.EndTime IS NOT NULL AND j.StartTime IS NOT NULL
THEN EXTRACT(EPOCH FROM (j.EndTime - j.StartTime))::INTEGER
ELSE NULL
END as duration_seconds
FROM Job j
LEFT JOIN Client c ON j.ClientId = c.ClientId
ORDER BY j.StartTime DESC
LIMIT 1000
$QUERY$
) AS t(
job_id INTEGER,
job_name TEXT,
client_name TEXT,
job_type TEXT,
job_level TEXT,
status TEXT,
bytes_written BIGINT,
files_written INTEGER,
started_at TIMESTAMP,
ended_at TIMESTAMP,
duration_seconds INTEGER
)
LOOP
BEGIN
-- Check if job already exists (before insert/update)
IF EXISTS (SELECT 1 FROM backup_jobs WHERE job_id = job_record.job_id) THEN
updated_count := updated_count + 1;
ELSE
inserted_count := inserted_count + 1;
END IF;
-- Upsert job to backup_jobs table
INSERT INTO backup_jobs (
job_id, job_name, client_name, job_type, job_level, status,
bytes_written, files_written, started_at, ended_at, duration_seconds,
updated_at
) VALUES (
job_record.job_id,
job_record.job_name,
job_record.client_name,
job_record.job_type,
job_record.job_level,
job_record.status,
job_record.bytes_written,
job_record.files_written,
job_record.started_at,
job_record.ended_at,
job_record.duration_seconds,
NOW()
)
ON CONFLICT (job_id) DO UPDATE SET
job_name = EXCLUDED.job_name,
client_name = EXCLUDED.client_name,
job_type = EXCLUDED.job_type,
job_level = EXCLUDED.job_level,
status = EXCLUDED.status,
bytes_written = EXCLUDED.bytes_written,
files_written = EXCLUDED.files_written,
started_at = EXCLUDED.started_at,
ended_at = EXCLUDED.ended_at,
duration_seconds = EXCLUDED.duration_seconds,
updated_at = NOW();
jobs_count := jobs_count + 1;
EXCEPTION
WHEN OTHERS THEN
error_count := error_count + 1;
-- Log error but continue with next job
RAISE WARNING 'Error syncing job %: %', job_record.job_id, SQLERRM;
END;
END LOOP;
-- Return summary
RETURN QUERY SELECT jobs_count, inserted_count, updated_count, error_count;
END;
$$ LANGUAGE plpgsql;
-- Create a simpler version that uses current database connection settings
-- This version assumes Bacula is on same host/port with same user
CREATE OR REPLACE FUNCTION sync_bacula_jobs_simple()
RETURNS TABLE(
jobs_synced INTEGER,
jobs_inserted INTEGER,
jobs_updated INTEGER,
errors INTEGER
) AS $$
DECLARE
current_user_name TEXT;
current_host TEXT;
current_port INTEGER;
current_db TEXT;
BEGIN
-- Get current connection info
SELECT
current_user,
COALESCE(inet_server_addr()::TEXT, 'localhost'),
COALESCE(inet_server_port(), 5432),
current_database()
INTO
current_user_name,
current_host,
current_port,
current_db;
-- Call main function with current connection settings
-- Note: password needs to be passed or configured in .pgpass
RETURN QUERY
SELECT * FROM sync_bacula_jobs(
'bacula', -- Try 'bacula' first
current_host,
current_port,
current_user_name,
'' -- Empty password - will use .pgpass or peer authentication
);
END;
$$ LANGUAGE plpgsql;
-- Grant execute permission to calypso user
GRANT EXECUTE ON FUNCTION sync_bacula_jobs(TEXT, TEXT, INTEGER, TEXT, TEXT) TO calypso;
GRANT EXECUTE ON FUNCTION sync_bacula_jobs_simple() TO calypso;
-- Create index if not exists (should already exist from migration 009)
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_id ON backup_jobs(job_id);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_updated_at ON backup_jobs(updated_at);
COMMENT ON FUNCTION sync_bacula_jobs IS 'Syncs jobs from Bacula database to Calypso backup_jobs table using dblink';
COMMENT ON FUNCTION sync_bacula_jobs_simple IS 'Simplified version that uses current connection settings (requires .pgpass for password)';

View File

@@ -1,209 +0,0 @@
-- AtlasOS - Calypso
-- PostgreSQL Function to Sync Jobs from Bacula to Calypso
-- Version: 11.0
--
-- This function syncs jobs from Bacula database (Job table) to Calypso database (backup_jobs table)
-- Uses dblink extension to query Bacula database from Calypso database
--
-- Prerequisites:
-- 1. dblink extension must be installed: CREATE EXTENSION IF NOT EXISTS dblink;
-- 2. User must have access to both databases
-- 3. Connection parameters must be configured in the function
-- Create function to sync jobs from Bacula to Calypso
CREATE OR REPLACE FUNCTION sync_bacula_jobs(
bacula_db_name TEXT DEFAULT 'bacula',
bacula_host TEXT DEFAULT 'localhost',
bacula_port INTEGER DEFAULT 5432,
bacula_user TEXT DEFAULT 'calypso',
bacula_password TEXT DEFAULT ''
)
RETURNS TABLE(
jobs_synced INTEGER,
jobs_inserted INTEGER,
jobs_updated INTEGER,
errors INTEGER
) AS $$
DECLARE
conn_str TEXT;
jobs_count INTEGER := 0;
inserted_count INTEGER := 0;
updated_count INTEGER := 0;
error_count INTEGER := 0;
job_record RECORD;
BEGIN
-- Build dblink connection string
conn_str := format(
'dbname=%s host=%s port=%s user=%s password=%s',
bacula_db_name,
bacula_host,
bacula_port,
bacula_user,
bacula_password
);
-- Query jobs from Bacula database using dblink
FOR job_record IN
SELECT * FROM dblink(
conn_str,
$$
SELECT
j.JobId,
j.Name as job_name,
COALESCE(c.Name, 'unknown') as client_name,
CASE
WHEN j.Type = 'B' THEN 'Backup'
WHEN j.Type = 'R' THEN 'Restore'
WHEN j.Type = 'V' THEN 'Verify'
WHEN j.Type = 'C' THEN 'Copy'
WHEN j.Type = 'M' THEN 'Migrate'
ELSE 'Backup'
END as job_type,
CASE
WHEN j.Level = 'F' THEN 'Full'
WHEN j.Level = 'I' THEN 'Incremental'
WHEN j.Level = 'D' THEN 'Differential'
WHEN j.Level = 'S' THEN 'Since'
ELSE 'Full'
END as job_level,
CASE
WHEN j.JobStatus = 'T' THEN 'Running'
WHEN j.JobStatus = 'C' THEN 'Completed'
WHEN j.JobStatus = 'f' OR j.JobStatus = 'F' THEN 'Failed'
WHEN j.JobStatus = 'A' THEN 'Canceled'
WHEN j.JobStatus = 'W' THEN 'Waiting'
ELSE 'Waiting'
END as status,
COALESCE(j.JobBytes, 0) as bytes_written,
COALESCE(j.JobFiles, 0) as files_written,
j.StartTime as started_at,
j.EndTime as ended_at,
CASE
WHEN j.EndTime IS NOT NULL AND j.StartTime IS NOT NULL
THEN EXTRACT(EPOCH FROM (j.EndTime - j.StartTime))::INTEGER
ELSE NULL
END as duration_seconds
FROM Job j
LEFT JOIN Client c ON j.ClientId = c.ClientId
ORDER BY j.StartTime DESC
LIMIT 1000
$$
) AS t(
job_id INTEGER,
job_name TEXT,
client_name TEXT,
job_type TEXT,
job_level TEXT,
status TEXT,
bytes_written BIGINT,
files_written INTEGER,
started_at TIMESTAMP,
ended_at TIMESTAMP,
duration_seconds INTEGER
)
LOOP
BEGIN
-- Check if job already exists (before insert/update)
IF EXISTS (SELECT 1 FROM backup_jobs WHERE job_id = job_record.job_id) THEN
updated_count := updated_count + 1;
ELSE
inserted_count := inserted_count + 1;
END IF;
-- Upsert job to backup_jobs table
INSERT INTO backup_jobs (
job_id, job_name, client_name, job_type, job_level, status,
bytes_written, files_written, started_at, ended_at, duration_seconds,
updated_at
) VALUES (
job_record.job_id,
job_record.job_name,
job_record.client_name,
job_record.job_type,
job_record.job_level,
job_record.status,
job_record.bytes_written,
job_record.files_written,
job_record.started_at,
job_record.ended_at,
job_record.duration_seconds,
NOW()
)
ON CONFLICT (job_id) DO UPDATE SET
job_name = EXCLUDED.job_name,
client_name = EXCLUDED.client_name,
job_type = EXCLUDED.job_type,
job_level = EXCLUDED.job_level,
status = EXCLUDED.status,
bytes_written = EXCLUDED.bytes_written,
files_written = EXCLUDED.files_written,
started_at = EXCLUDED.started_at,
ended_at = EXCLUDED.ended_at,
duration_seconds = EXCLUDED.duration_seconds,
updated_at = NOW();
jobs_count := jobs_count + 1;
EXCEPTION
WHEN OTHERS THEN
error_count := error_count + 1;
-- Log error but continue with next job
RAISE WARNING 'Error syncing job %: %', job_record.job_id, SQLERRM;
END;
END LOOP;
-- Return summary
RETURN QUERY SELECT jobs_count, inserted_count, updated_count, error_count;
END;
$$ LANGUAGE plpgsql;
-- Create a simpler version that uses current database connection settings
-- This version assumes Bacula is on same host/port with same user
CREATE OR REPLACE FUNCTION sync_bacula_jobs_simple()
RETURNS TABLE(
jobs_synced INTEGER,
jobs_inserted INTEGER,
jobs_updated INTEGER,
errors INTEGER
) AS $$
DECLARE
current_user_name TEXT;
current_host TEXT;
current_port INTEGER;
current_db TEXT;
BEGIN
-- Get current connection info
SELECT
current_user,
COALESCE(inet_server_addr()::TEXT, 'localhost'),
COALESCE(inet_server_port(), 5432),
current_database()
INTO
current_user_name,
current_host,
current_port,
current_db;
-- Call main function with current connection settings
-- Note: password needs to be passed or configured in .pgpass
RETURN QUERY
SELECT * FROM sync_bacula_jobs(
'bacula', -- Try 'bacula' first
current_host,
current_port,
current_user_name,
'' -- Empty password - will use .pgpass or peer authentication
);
END;
$$ LANGUAGE plpgsql;
-- Grant execute permission to calypso user
GRANT EXECUTE ON FUNCTION sync_bacula_jobs(TEXT, TEXT, INTEGER, TEXT, TEXT) TO calypso;
GRANT EXECUTE ON FUNCTION sync_bacula_jobs_simple() TO calypso;
-- Create index if not exists (should already exist from migration 009)
CREATE INDEX IF NOT EXISTS idx_backup_jobs_job_id ON backup_jobs(job_id);
CREATE INDEX IF NOT EXISTS idx_backup_jobs_updated_at ON backup_jobs(updated_at);
COMMENT ON FUNCTION sync_bacula_jobs IS 'Syncs jobs from Bacula database to Calypso backup_jobs table using dblink';
COMMENT ON FUNCTION sync_bacula_jobs_simple IS 'Simplified version that uses current connection settings (requires .pgpass for password)';

View File

@@ -51,13 +51,6 @@ func cacheMiddleware(cfg CacheConfig, cache *cache.Cache) gin.HandlerFunc {
return
}
// Don't cache VTL endpoints - they change frequently
path := c.Request.URL.Path
if strings.HasPrefix(path, "/api/v1/tape/vtl/") {
c.Next()
return
}
// Generate cache key from request path and query string
keyParts := []string{c.Request.URL.Path}
if c.Request.URL.RawQuery != "" {

View File

@@ -13,30 +13,24 @@ import (
// authMiddleware validates JWT tokens and sets user context
func authMiddleware(authHandler *auth.Handler) gin.HandlerFunc {
return func(c *gin.Context) {
var token string
// Try to extract token from Authorization header first
// Extract token from Authorization header
authHeader := c.GetHeader("Authorization")
if authHeader != "" {
// Parse Bearer token
parts := strings.SplitN(authHeader, " ", 2)
if len(parts) == 2 && parts[0] == "Bearer" {
token = parts[1]
}
}
// If no token from header, try query parameter (for WebSocket)
if token == "" {
token = c.Query("token")
}
// If still no token, return error
if token == "" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "missing authorization token"})
if authHeader == "" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "missing authorization header"})
c.Abort()
return
}
// Parse Bearer token
parts := strings.SplitN(authHeader, " ", 2)
if len(parts) != 2 || parts[0] != "Bearer" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid authorization header format"})
c.Abort()
return
}
token := parts[1]
// Validate token and get user
user, err := authHandler.ValidateToken(token)
if err != nil {

View File

@@ -6,16 +6,13 @@ import (
"github.com/atlasos/calypso/internal/audit"
"github.com/atlasos/calypso/internal/auth"
"github.com/atlasos/calypso/internal/backup"
"github.com/atlasos/calypso/internal/common/cache"
"github.com/atlasos/calypso/internal/common/config"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/atlasos/calypso/internal/iam"
"github.com/atlasos/calypso/internal/monitoring"
"github.com/atlasos/calypso/internal/object_storage"
"github.com/atlasos/calypso/internal/scst"
"github.com/atlasos/calypso/internal/shares"
"github.com/atlasos/calypso/internal/storage"
"github.com/atlasos/calypso/internal/system"
"github.com/atlasos/calypso/internal/tape_physical"
@@ -200,57 +197,6 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
storageGroup.GET("/zfs/arc/stats", storageHandler.GetARCStats)
}
// Shares (CIFS/NFS)
sharesHandler := shares.NewHandler(db, log)
sharesGroup := protected.Group("/shares")
sharesGroup.Use(requirePermission("storage", "read"))
{
sharesGroup.GET("", sharesHandler.ListShares)
sharesGroup.GET("/:id", sharesHandler.GetShare)
sharesGroup.POST("", requirePermission("storage", "write"), sharesHandler.CreateShare)
sharesGroup.PUT("/:id", requirePermission("storage", "write"), sharesHandler.UpdateShare)
sharesGroup.DELETE("/:id", requirePermission("storage", "write"), sharesHandler.DeleteShare)
}
// Object Storage (MinIO)
// Initialize MinIO service if configured
if cfg.ObjectStorage.Endpoint != "" {
objectStorageService, err := object_storage.NewService(
cfg.ObjectStorage.Endpoint,
cfg.ObjectStorage.AccessKey,
cfg.ObjectStorage.SecretKey,
log,
)
if err != nil {
log.Error("Failed to initialize MinIO service", "error", err)
} else {
objectStorageHandler := object_storage.NewHandler(objectStorageService, db, log)
objectStorageGroup := protected.Group("/object-storage")
objectStorageGroup.Use(requirePermission("storage", "read"))
{
// Setup endpoints
objectStorageGroup.GET("/setup/datasets", objectStorageHandler.GetAvailableDatasets)
objectStorageGroup.GET("/setup/current", objectStorageHandler.GetCurrentSetup)
objectStorageGroup.POST("/setup", requirePermission("storage", "write"), objectStorageHandler.SetupObjectStorage)
objectStorageGroup.PUT("/setup", requirePermission("storage", "write"), objectStorageHandler.UpdateObjectStorage)
// Bucket endpoints
objectStorageGroup.GET("/buckets", objectStorageHandler.ListBuckets)
objectStorageGroup.GET("/buckets/:name", objectStorageHandler.GetBucket)
objectStorageGroup.POST("/buckets", requirePermission("storage", "write"), objectStorageHandler.CreateBucket)
objectStorageGroup.DELETE("/buckets/:name", requirePermission("storage", "write"), objectStorageHandler.DeleteBucket)
// User management routes
objectStorageGroup.GET("/users", objectStorageHandler.ListUsers)
objectStorageGroup.POST("/users", requirePermission("storage", "write"), objectStorageHandler.CreateUser)
objectStorageGroup.DELETE("/users/:access_key", requirePermission("storage", "write"), objectStorageHandler.DeleteUser)
// Service account (access key) management routes
objectStorageGroup.GET("/service-accounts", objectStorageHandler.ListServiceAccounts)
objectStorageGroup.POST("/service-accounts", requirePermission("storage", "write"), objectStorageHandler.CreateServiceAccount)
objectStorageGroup.DELETE("/service-accounts/:access_key", requirePermission("storage", "write"), objectStorageHandler.DeleteServiceAccount)
}
}
}
// SCST
scstHandler := scst.NewHandler(db, log)
scstGroup := protected.Group("/scst")
@@ -259,35 +205,10 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
scstGroup.GET("/targets", scstHandler.ListTargets)
scstGroup.GET("/targets/:id", scstHandler.GetTarget)
scstGroup.POST("/targets", scstHandler.CreateTarget)
scstGroup.POST("/targets/:id/luns", requirePermission("iscsi", "write"), scstHandler.AddLUN)
scstGroup.DELETE("/targets/:id/luns/:lunId", requirePermission("iscsi", "write"), scstHandler.RemoveLUN)
scstGroup.POST("/targets/:id/luns", scstHandler.AddLUN)
scstGroup.POST("/targets/:id/initiators", scstHandler.AddInitiator)
scstGroup.POST("/targets/:id/enable", scstHandler.EnableTarget)
scstGroup.POST("/targets/:id/disable", scstHandler.DisableTarget)
scstGroup.DELETE("/targets/:id", requirePermission("iscsi", "write"), scstHandler.DeleteTarget)
scstGroup.GET("/initiators", scstHandler.ListAllInitiators)
scstGroup.GET("/initiators/:id", scstHandler.GetInitiator)
scstGroup.DELETE("/initiators/:id", scstHandler.RemoveInitiator)
scstGroup.GET("/extents", scstHandler.ListExtents)
scstGroup.POST("/extents", scstHandler.CreateExtent)
scstGroup.DELETE("/extents/:device", scstHandler.DeleteExtent)
scstGroup.POST("/config/apply", scstHandler.ApplyConfig)
scstGroup.GET("/handlers", scstHandler.ListHandlers)
scstGroup.GET("/portals", scstHandler.ListPortals)
scstGroup.GET("/portals/:id", scstHandler.GetPortal)
scstGroup.POST("/portals", scstHandler.CreatePortal)
scstGroup.PUT("/portals/:id", scstHandler.UpdatePortal)
scstGroup.DELETE("/portals/:id", scstHandler.DeletePortal)
// Initiator Groups routes
scstGroup.GET("/initiator-groups", scstHandler.ListAllInitiatorGroups)
scstGroup.GET("/initiator-groups/:id", scstHandler.GetInitiatorGroup)
scstGroup.POST("/initiator-groups", requirePermission("iscsi", "write"), scstHandler.CreateInitiatorGroup)
scstGroup.PUT("/initiator-groups/:id", requirePermission("iscsi", "write"), scstHandler.UpdateInitiatorGroup)
scstGroup.DELETE("/initiator-groups/:id", requirePermission("iscsi", "write"), scstHandler.DeleteInitiatorGroup)
scstGroup.POST("/initiator-groups/:id/initiators", requirePermission("iscsi", "write"), scstHandler.AddInitiatorToGroup)
// Config file management
scstGroup.GET("/config/file", requirePermission("iscsi", "read"), scstHandler.GetConfigFile)
scstGroup.PUT("/config/file", requirePermission("iscsi", "write"), scstHandler.UpdateConfigFile)
}
// Physical Tape Libraries
@@ -325,18 +246,7 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
}
// System Management
systemService := system.NewService(log)
systemHandler := system.NewHandler(log, tasks.NewEngine(db, log))
// Set service in handler (if handler needs direct access)
// Note: Handler already has service via NewHandler, but we need to ensure it's the same instance
// Start network monitoring with RRD
if err := systemService.StartNetworkMonitoring(context.Background()); err != nil {
log.Warn("Failed to start network monitoring", "error", err)
} else {
log.Info("Network monitoring started with RRD")
}
systemGroup := protected.Group("/system")
systemGroup.Use(requirePermission("system", "read"))
{
@@ -344,90 +254,19 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
systemGroup.GET("/services/:name", systemHandler.GetServiceStatus)
systemGroup.POST("/services/:name/restart", systemHandler.RestartService)
systemGroup.GET("/services/:name/logs", systemHandler.GetServiceLogs)
systemGroup.GET("/logs", systemHandler.GetSystemLogs)
systemGroup.GET("/network/throughput", systemHandler.GetNetworkThroughput)
systemGroup.POST("/support-bundle", systemHandler.GenerateSupportBundle)
systemGroup.GET("/interfaces", systemHandler.ListNetworkInterfaces)
systemGroup.GET("/management-ip", systemHandler.GetManagementIPAddress)
systemGroup.PUT("/interfaces/:name", systemHandler.UpdateNetworkInterface)
systemGroup.GET("/ntp", systemHandler.GetNTPSettings)
systemGroup.POST("/ntp", systemHandler.SaveNTPSettings)
systemGroup.POST("/execute", requirePermission("system", "write"), systemHandler.ExecuteCommand)
}
// IAM routes - GetUser can be accessed by user viewing own profile or admin
// IAM (admin only)
iamHandler := iam.NewHandler(db, cfg, log)
protected.GET("/iam/users/:id", iamHandler.GetUser)
// IAM admin routes
iamGroup := protected.Group("/iam")
iamGroup.Use(requireRole("admin"))
{
iamGroup.GET("/users", iamHandler.ListUsers)
iamGroup.GET("/users/:id", iamHandler.GetUser)
iamGroup.POST("/users", iamHandler.CreateUser)
iamGroup.PUT("/users/:id", iamHandler.UpdateUser)
iamGroup.DELETE("/users/:id", iamHandler.DeleteUser)
// Roles routes
iamGroup.GET("/roles", iamHandler.ListRoles)
iamGroup.GET("/roles/:id", iamHandler.GetRole)
iamGroup.POST("/roles", iamHandler.CreateRole)
iamGroup.PUT("/roles/:id", iamHandler.UpdateRole)
iamGroup.DELETE("/roles/:id", iamHandler.DeleteRole)
iamGroup.GET("/roles/:id/permissions", iamHandler.GetRolePermissions)
iamGroup.POST("/roles/:id/permissions", iamHandler.AssignPermissionToRole)
iamGroup.DELETE("/roles/:id/permissions", iamHandler.RemovePermissionFromRole)
// Permissions routes
iamGroup.GET("/permissions", iamHandler.ListPermissions)
// User role/group assignment
iamGroup.POST("/users/:id/roles", iamHandler.AssignRoleToUser)
iamGroup.DELETE("/users/:id/roles", iamHandler.RemoveRoleFromUser)
iamGroup.POST("/users/:id/groups", iamHandler.AssignGroupToUser)
iamGroup.DELETE("/users/:id/groups", iamHandler.RemoveGroupFromUser)
// Groups routes
iamGroup.GET("/groups", iamHandler.ListGroups)
iamGroup.GET("/groups/:id", iamHandler.GetGroup)
iamGroup.POST("/groups", iamHandler.CreateGroup)
iamGroup.PUT("/groups/:id", iamHandler.UpdateGroup)
iamGroup.DELETE("/groups/:id", iamHandler.DeleteGroup)
iamGroup.POST("/groups/:id/users", iamHandler.AddUserToGroup)
iamGroup.DELETE("/groups/:id/users/:user_id", iamHandler.RemoveUserFromGroup)
}
// Backup Jobs
backupService := backup.NewService(db, log)
// Set up direct connection to Bacula database
// Try common Bacula database names
baculaDBName := "bacula" // Default
if err := backupService.SetBaculaDatabase(cfg.Database, baculaDBName); err != nil {
log.Warn("Failed to connect to Bacula database, trying 'bareos'", "error", err)
// Try 'bareos' as alternative
if err := backupService.SetBaculaDatabase(cfg.Database, "bareos"); err != nil {
log.Error("Failed to connect to Bacula database", "error", err, "tried", []string{"bacula", "bareos"})
// Continue anyway - will fallback to bconsole
}
}
backupHandler := backup.NewHandler(backupService, log)
backupGroup := protected.Group("/backup")
backupGroup.Use(requirePermission("backup", "read"))
{
backupGroup.GET("/dashboard/stats", backupHandler.GetDashboardStats)
backupGroup.GET("/jobs", backupHandler.ListJobs)
backupGroup.GET("/jobs/:id", backupHandler.GetJob)
backupGroup.POST("/jobs", requirePermission("backup", "write"), backupHandler.CreateJob)
backupGroup.GET("/clients", backupHandler.ListClients)
backupGroup.GET("/storage/pools", backupHandler.ListStoragePools)
backupGroup.POST("/storage/pools", requirePermission("backup", "write"), backupHandler.CreateStoragePool)
backupGroup.DELETE("/storage/pools/:id", requirePermission("backup", "write"), backupHandler.DeleteStoragePool)
backupGroup.GET("/storage/volumes", backupHandler.ListStorageVolumes)
backupGroup.POST("/storage/volumes", requirePermission("backup", "write"), backupHandler.CreateStorageVolume)
backupGroup.PUT("/storage/volumes/:id", requirePermission("backup", "write"), backupHandler.UpdateStorageVolume)
backupGroup.DELETE("/storage/volumes/:id", requirePermission("backup", "write"), backupHandler.DeleteStorageVolume)
backupGroup.GET("/media", backupHandler.ListMedia)
backupGroup.GET("/storage/daemons", backupHandler.ListStorageDaemons)
backupGroup.POST("/console/execute", requirePermission("backup", "write"), backupHandler.ExecuteBconsoleCommand)
}
// Monitoring

View File

@@ -1,221 +0,0 @@
package iam
import (
"time"
"github.com/atlasos/calypso/internal/common/database"
)
// Group represents a user group
type Group struct {
ID string
Name string
Description string
IsSystem bool
CreatedAt time.Time
UpdatedAt time.Time
UserCount int
RoleCount int
}
// GetGroupByID retrieves a group by ID
func GetGroupByID(db *database.DB, groupID string) (*Group, error) {
query := `
SELECT id, name, description, is_system, created_at, updated_at
FROM groups
WHERE id = $1
`
var group Group
err := db.QueryRow(query, groupID).Scan(
&group.ID, &group.Name, &group.Description, &group.IsSystem,
&group.CreatedAt, &group.UpdatedAt,
)
if err != nil {
return nil, err
}
// Get user count
var userCount int
db.QueryRow("SELECT COUNT(*) FROM user_groups WHERE group_id = $1", groupID).Scan(&userCount)
group.UserCount = userCount
// Get role count
var roleCount int
db.QueryRow("SELECT COUNT(*) FROM group_roles WHERE group_id = $1", groupID).Scan(&roleCount)
group.RoleCount = roleCount
return &group, nil
}
// GetGroupByName retrieves a group by name
func GetGroupByName(db *database.DB, name string) (*Group, error) {
query := `
SELECT id, name, description, is_system, created_at, updated_at
FROM groups
WHERE name = $1
`
var group Group
err := db.QueryRow(query, name).Scan(
&group.ID, &group.Name, &group.Description, &group.IsSystem,
&group.CreatedAt, &group.UpdatedAt,
)
if err != nil {
return nil, err
}
return &group, nil
}
// GetUserGroups retrieves all groups for a user
func GetUserGroups(db *database.DB, userID string) ([]string, error) {
query := `
SELECT g.name
FROM groups g
INNER JOIN user_groups ug ON g.id = ug.group_id
WHERE ug.user_id = $1
ORDER BY g.name
`
rows, err := db.Query(query, userID)
if err != nil {
return nil, err
}
defer rows.Close()
var groups []string
for rows.Next() {
var groupName string
if err := rows.Scan(&groupName); err != nil {
return []string{}, err
}
groups = append(groups, groupName)
}
if groups == nil {
groups = []string{}
}
return groups, rows.Err()
}
// GetGroupUsers retrieves all users in a group
func GetGroupUsers(db *database.DB, groupID string) ([]string, error) {
query := `
SELECT u.id
FROM users u
INNER JOIN user_groups ug ON u.id = ug.user_id
WHERE ug.group_id = $1
ORDER BY u.username
`
rows, err := db.Query(query, groupID)
if err != nil {
return nil, err
}
defer rows.Close()
var userIDs []string
for rows.Next() {
var userID string
if err := rows.Scan(&userID); err != nil {
return nil, err
}
userIDs = append(userIDs, userID)
}
return userIDs, rows.Err()
}
// GetGroupRoles retrieves all roles for a group
func GetGroupRoles(db *database.DB, groupID string) ([]string, error) {
query := `
SELECT r.name
FROM roles r
INNER JOIN group_roles gr ON r.id = gr.role_id
WHERE gr.group_id = $1
ORDER BY r.name
`
rows, err := db.Query(query, groupID)
if err != nil {
return nil, err
}
defer rows.Close()
var roles []string
for rows.Next() {
var role string
if err := rows.Scan(&role); err != nil {
return nil, err
}
roles = append(roles, role)
}
return roles, rows.Err()
}
// AddUserToGroup adds a user to a group
func AddUserToGroup(db *database.DB, userID, groupID, assignedBy string) error {
query := `
INSERT INTO user_groups (user_id, group_id, assigned_by)
VALUES ($1, $2, $3)
ON CONFLICT (user_id, group_id) DO NOTHING
`
_, err := db.Exec(query, userID, groupID, assignedBy)
return err
}
// RemoveUserFromGroup removes a user from a group
func RemoveUserFromGroup(db *database.DB, userID, groupID string) error {
query := `DELETE FROM user_groups WHERE user_id = $1 AND group_id = $2`
_, err := db.Exec(query, userID, groupID)
return err
}
// AddRoleToGroup adds a role to a group
func AddRoleToGroup(db *database.DB, groupID, roleID string) error {
query := `
INSERT INTO group_roles (group_id, role_id)
VALUES ($1, $2)
ON CONFLICT (group_id, role_id) DO NOTHING
`
_, err := db.Exec(query, groupID, roleID)
return err
}
// RemoveRoleFromGroup removes a role from a group
func RemoveRoleFromGroup(db *database.DB, groupID, roleID string) error {
query := `DELETE FROM group_roles WHERE group_id = $1 AND role_id = $2`
_, err := db.Exec(query, groupID, roleID)
return err
}
// GetUserRolesFromGroups retrieves all roles for a user via groups
func GetUserRolesFromGroups(db *database.DB, userID string) ([]string, error) {
query := `
SELECT DISTINCT r.name
FROM roles r
INNER JOIN group_roles gr ON r.id = gr.role_id
INNER JOIN user_groups ug ON gr.group_id = ug.group_id
WHERE ug.user_id = $1
ORDER BY r.name
`
rows, err := db.Query(query, userID)
if err != nil {
return nil, err
}
defer rows.Close()
var roles []string
for rows.Next() {
var role string
if err := rows.Scan(&role); err != nil {
return nil, err
}
roles = append(roles, role)
}
return roles, rows.Err()
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,237 +0,0 @@
package iam
import (
"time"
"github.com/atlasos/calypso/internal/common/database"
)
// Role represents a system role
type Role struct {
ID string
Name string
Description string
IsSystem bool
CreatedAt time.Time
UpdatedAt time.Time
}
// GetRoleByID retrieves a role by ID
func GetRoleByID(db *database.DB, roleID string) (*Role, error) {
query := `
SELECT id, name, description, is_system, created_at, updated_at
FROM roles
WHERE id = $1
`
var role Role
err := db.QueryRow(query, roleID).Scan(
&role.ID, &role.Name, &role.Description, &role.IsSystem,
&role.CreatedAt, &role.UpdatedAt,
)
if err != nil {
return nil, err
}
return &role, nil
}
// GetRoleByName retrieves a role by name
func GetRoleByName(db *database.DB, name string) (*Role, error) {
query := `
SELECT id, name, description, is_system, created_at, updated_at
FROM roles
WHERE name = $1
`
var role Role
err := db.QueryRow(query, name).Scan(
&role.ID, &role.Name, &role.Description, &role.IsSystem,
&role.CreatedAt, &role.UpdatedAt,
)
if err != nil {
return nil, err
}
return &role, nil
}
// ListRoles retrieves all roles
func ListRoles(db *database.DB) ([]*Role, error) {
query := `
SELECT id, name, description, is_system, created_at, updated_at
FROM roles
ORDER BY name
`
rows, err := db.Query(query)
if err != nil {
return nil, err
}
defer rows.Close()
var roles []*Role
for rows.Next() {
var role Role
if err := rows.Scan(
&role.ID, &role.Name, &role.Description, &role.IsSystem,
&role.CreatedAt, &role.UpdatedAt,
); err != nil {
return nil, err
}
roles = append(roles, &role)
}
return roles, rows.Err()
}
// CreateRole creates a new role
func CreateRole(db *database.DB, name, description string) (*Role, error) {
query := `
INSERT INTO roles (name, description)
VALUES ($1, $2)
RETURNING id, name, description, is_system, created_at, updated_at
`
var role Role
err := db.QueryRow(query, name, description).Scan(
&role.ID, &role.Name, &role.Description, &role.IsSystem,
&role.CreatedAt, &role.UpdatedAt,
)
if err != nil {
return nil, err
}
return &role, nil
}
// UpdateRole updates an existing role
func UpdateRole(db *database.DB, roleID, name, description string) error {
query := `
UPDATE roles
SET name = $1, description = $2, updated_at = NOW()
WHERE id = $3
`
_, err := db.Exec(query, name, description, roleID)
return err
}
// DeleteRole deletes a role
func DeleteRole(db *database.DB, roleID string) error {
query := `DELETE FROM roles WHERE id = $1`
_, err := db.Exec(query, roleID)
return err
}
// GetRoleUsers retrieves all users with a specific role
func GetRoleUsers(db *database.DB, roleID string) ([]string, error) {
query := `
SELECT u.id
FROM users u
INNER JOIN user_roles ur ON u.id = ur.user_id
WHERE ur.role_id = $1
ORDER BY u.username
`
rows, err := db.Query(query, roleID)
if err != nil {
return nil, err
}
defer rows.Close()
var userIDs []string
for rows.Next() {
var userID string
if err := rows.Scan(&userID); err != nil {
return nil, err
}
userIDs = append(userIDs, userID)
}
return userIDs, rows.Err()
}
// GetRolePermissions retrieves all permissions for a role
func GetRolePermissions(db *database.DB, roleID string) ([]string, error) {
query := `
SELECT p.name
FROM permissions p
INNER JOIN role_permissions rp ON p.id = rp.permission_id
WHERE rp.role_id = $1
ORDER BY p.name
`
rows, err := db.Query(query, roleID)
if err != nil {
return nil, err
}
defer rows.Close()
var permissions []string
for rows.Next() {
var perm string
if err := rows.Scan(&perm); err != nil {
return nil, err
}
permissions = append(permissions, perm)
}
return permissions, rows.Err()
}
// AddPermissionToRole assigns a permission to a role
func AddPermissionToRole(db *database.DB, roleID, permissionID string) error {
query := `
INSERT INTO role_permissions (role_id, permission_id)
VALUES ($1, $2)
ON CONFLICT (role_id, permission_id) DO NOTHING
`
_, err := db.Exec(query, roleID, permissionID)
return err
}
// RemovePermissionFromRole removes a permission from a role
func RemovePermissionFromRole(db *database.DB, roleID, permissionID string) error {
query := `DELETE FROM role_permissions WHERE role_id = $1 AND permission_id = $2`
_, err := db.Exec(query, roleID, permissionID)
return err
}
// GetPermissionIDByName retrieves a permission ID by name
func GetPermissionIDByName(db *database.DB, permissionName string) (string, error) {
var permissionID string
err := db.QueryRow("SELECT id FROM permissions WHERE name = $1", permissionName).Scan(&permissionID)
return permissionID, err
}
// ListPermissions retrieves all permissions
func ListPermissions(db *database.DB) ([]map[string]interface{}, error) {
query := `
SELECT id, name, resource, action, description
FROM permissions
ORDER BY resource, action
`
rows, err := db.Query(query)
if err != nil {
return nil, err
}
defer rows.Close()
var permissions []map[string]interface{}
for rows.Next() {
var id, name, resource, action, description string
if err := rows.Scan(&id, &name, &resource, &action, &description); err != nil {
return nil, err
}
permissions = append(permissions, map[string]interface{}{
"id": id,
"name": name,
"resource": resource,
"action": action,
"description": description,
})
}
return permissions, rows.Err()
}

View File

@@ -2,7 +2,6 @@ package iam
import (
"database/sql"
"fmt"
"time"
"github.com/atlasos/calypso/internal/common/database"
@@ -91,14 +90,11 @@ func GetUserRoles(db *database.DB, userID string) ([]string, error) {
for rows.Next() {
var role string
if err := rows.Scan(&role); err != nil {
return []string{}, err
return nil, err
}
roles = append(roles, role)
}
if roles == nil {
roles = []string{}
}
return roles, rows.Err()
}
@@ -122,53 +118,11 @@ func GetUserPermissions(db *database.DB, userID string) ([]string, error) {
for rows.Next() {
var perm string
if err := rows.Scan(&perm); err != nil {
return []string{}, err
return nil, err
}
permissions = append(permissions, perm)
}
if permissions == nil {
permissions = []string{}
}
return permissions, rows.Err()
}
// AddUserRole assigns a role to a user
func AddUserRole(db *database.DB, userID, roleID, assignedBy string) error {
query := `
INSERT INTO user_roles (user_id, role_id, assigned_by)
VALUES ($1, $2, $3)
ON CONFLICT (user_id, role_id) DO NOTHING
`
result, err := db.Exec(query, userID, roleID, assignedBy)
if err != nil {
return fmt.Errorf("failed to insert user role: %w", err)
}
// Check if row was actually inserted (not just skipped due to conflict)
rowsAffected, err := result.RowsAffected()
if err != nil {
return fmt.Errorf("failed to get rows affected: %w", err)
}
if rowsAffected == 0 {
// Row already exists, this is not an error but we should know about it
return nil // ON CONFLICT DO NOTHING means this is expected
}
return nil
}
// RemoveUserRole removes a role from a user
func RemoveUserRole(db *database.DB, userID, roleID string) error {
query := `DELETE FROM user_roles WHERE user_id = $1 AND role_id = $2`
_, err := db.Exec(query, userID, roleID)
return err
}
// GetRoleIDByName retrieves a role ID by name
func GetRoleIDByName(db *database.DB, roleName string) (string, error) {
var roleID string
err := db.QueryRow("SELECT id FROM roles WHERE name = $1", roleName).Scan(&roleID)
return roleID, err
}

View File

@@ -1,14 +1,10 @@
package monitoring
import (
"bufio"
"context"
"database/sql"
"fmt"
"os"
"runtime"
"strconv"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/database"
@@ -17,14 +13,14 @@ import (
// Metrics represents system metrics
type Metrics struct {
System SystemMetrics `json:"system"`
Storage StorageMetrics `json:"storage"`
SCST SCSTMetrics `json:"scst"`
Tape TapeMetrics `json:"tape"`
VTL VTLMetrics `json:"vtl"`
Tasks TaskMetrics `json:"tasks"`
API APIMetrics `json:"api"`
CollectedAt time.Time `json:"collected_at"`
System SystemMetrics `json:"system"`
Storage StorageMetrics `json:"storage"`
SCST SCSTMetrics `json:"scst"`
Tape TapeMetrics `json:"tape"`
VTL VTLMetrics `json:"vtl"`
Tasks TaskMetrics `json:"tasks"`
API APIMetrics `json:"api"`
CollectedAt time.Time `json:"collected_at"`
}
// SystemMetrics represents system-level metrics
@@ -41,11 +37,11 @@ type SystemMetrics struct {
// StorageMetrics represents storage metrics
type StorageMetrics struct {
TotalDisks int `json:"total_disks"`
TotalRepositories int `json:"total_repositories"`
TotalCapacityBytes int64 `json:"total_capacity_bytes"`
UsedCapacityBytes int64 `json:"used_capacity_bytes"`
AvailableBytes int64 `json:"available_bytes"`
TotalDisks int `json:"total_disks"`
TotalRepositories int `json:"total_repositories"`
TotalCapacityBytes int64 `json:"total_capacity_bytes"`
UsedCapacityBytes int64 `json:"used_capacity_bytes"`
AvailableBytes int64 `json:"available_bytes"`
UsagePercent float64 `json:"usage_percent"`
}
@@ -76,43 +72,28 @@ type VTLMetrics struct {
// TaskMetrics represents task execution metrics
type TaskMetrics struct {
TotalTasks int `json:"total_tasks"`
PendingTasks int `json:"pending_tasks"`
RunningTasks int `json:"running_tasks"`
CompletedTasks int `json:"completed_tasks"`
FailedTasks int `json:"failed_tasks"`
AvgDurationSec float64 `json:"avg_duration_seconds"`
TotalTasks int `json:"total_tasks"`
PendingTasks int `json:"pending_tasks"`
RunningTasks int `json:"running_tasks"`
CompletedTasks int `json:"completed_tasks"`
FailedTasks int `json:"failed_tasks"`
AvgDurationSec float64 `json:"avg_duration_seconds"`
}
// APIMetrics represents API metrics
type APIMetrics struct {
TotalRequests int64 `json:"total_requests"`
RequestsPerSec float64 `json:"requests_per_second"`
ErrorRate float64 `json:"error_rate"`
AvgLatencyMs float64 `json:"avg_latency_ms"`
ActiveConnections int `json:"active_connections"`
TotalRequests int64 `json:"total_requests"`
RequestsPerSec float64 `json:"requests_per_second"`
ErrorRate float64 `json:"error_rate"`
AvgLatencyMs float64 `json:"avg_latency_ms"`
ActiveConnections int `json:"active_connections"`
}
// MetricsService collects and provides system metrics
type MetricsService struct {
db *database.DB
logger *logger.Logger
startTime time.Time
lastCPU *cpuStats // For CPU usage calculation
lastCPUTime time.Time
}
// cpuStats represents CPU statistics from /proc/stat
type cpuStats struct {
user uint64
nice uint64
system uint64
idle uint64
iowait uint64
irq uint64
softirq uint64
steal uint64
guest uint64
db *database.DB
logger *logger.Logger
startTime time.Time
}
// NewMetricsService creates a new metrics service
@@ -134,8 +115,6 @@ func (s *MetricsService) CollectMetrics(ctx context.Context) (*Metrics, error) {
sysMetrics, err := s.collectSystemMetrics(ctx)
if err != nil {
s.logger.Error("Failed to collect system metrics", "error", err)
// Set default/zero values if collection fails
metrics.System = SystemMetrics{}
} else {
metrics.System = *sysMetrics
}
@@ -188,17 +167,21 @@ func (s *MetricsService) CollectMetrics(ctx context.Context) (*Metrics, error) {
// collectSystemMetrics collects system-level metrics
func (s *MetricsService) collectSystemMetrics(ctx context.Context) (*SystemMetrics, error) {
// Get system memory from /proc/meminfo
memoryTotal, memoryUsed, memoryPercent := s.getSystemMemory()
var m runtime.MemStats
runtime.ReadMemStats(&m)
// Get CPU usage from /proc/stat
cpuUsage := s.getCPUUsage()
// Get memory info
memoryUsed := int64(m.Alloc)
memoryTotal := int64(m.Sys)
memoryPercent := float64(memoryUsed) / float64(memoryTotal) * 100
// Get system uptime from /proc/uptime
uptime := s.getSystemUptime()
// Uptime
uptime := time.Since(s.startTime).Seconds()
// CPU and disk would require external tools or system calls
// For now, we'll use placeholders
metrics := &SystemMetrics{
CPUUsagePercent: cpuUsage,
CPUUsagePercent: 0.0, // Would need to read from /proc/stat
MemoryUsed: memoryUsed,
MemoryTotal: memoryTotal,
MemoryPercent: memoryPercent,
@@ -285,7 +268,7 @@ func (s *MetricsService) collectSCSTMetrics(ctx context.Context) (*SCSTMetrics,
TotalTargets: totalTargets,
TotalLUNs: totalLUNs,
TotalInitiators: totalInitiators,
ActiveTargets: activeTargets,
ActiveTargets: activeTargets,
}, nil
}
@@ -420,232 +403,3 @@ func (s *MetricsService) collectTaskMetrics(ctx context.Context) (*TaskMetrics,
}, nil
}
// getSystemUptime reads system uptime from /proc/uptime
// Returns uptime in seconds, or service uptime as fallback
func (s *MetricsService) getSystemUptime() float64 {
file, err := os.Open("/proc/uptime")
if err != nil {
// Fallback to service uptime if /proc/uptime is not available
s.logger.Warn("Failed to read /proc/uptime, using service uptime", "error", err)
return time.Since(s.startTime).Seconds()
}
defer file.Close()
scanner := bufio.NewScanner(file)
if !scanner.Scan() {
// Fallback to service uptime if file is empty
s.logger.Warn("Failed to read /proc/uptime content, using service uptime")
return time.Since(s.startTime).Seconds()
}
line := strings.TrimSpace(scanner.Text())
fields := strings.Fields(line)
if len(fields) == 0 {
// Fallback to service uptime if no data
s.logger.Warn("No data in /proc/uptime, using service uptime")
return time.Since(s.startTime).Seconds()
}
// First field is system uptime in seconds
uptimeSeconds, err := strconv.ParseFloat(fields[0], 64)
if err != nil {
// Fallback to service uptime if parsing fails
s.logger.Warn("Failed to parse /proc/uptime, using service uptime", "error", err)
return time.Since(s.startTime).Seconds()
}
return uptimeSeconds
}
// getSystemMemory reads system memory from /proc/meminfo
// Returns total, used (in bytes), and usage percentage
func (s *MetricsService) getSystemMemory() (int64, int64, float64) {
file, err := os.Open("/proc/meminfo")
if err != nil {
s.logger.Warn("Failed to read /proc/meminfo, using Go runtime memory", "error", err)
var m runtime.MemStats
runtime.ReadMemStats(&m)
memoryUsed := int64(m.Alloc)
memoryTotal := int64(m.Sys)
memoryPercent := float64(memoryUsed) / float64(memoryTotal) * 100
return memoryTotal, memoryUsed, memoryPercent
}
defer file.Close()
var memTotal, memAvailable, memFree, buffers, cached int64
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if line == "" {
continue
}
// Parse line like "MemTotal: 16375596 kB"
// or "MemTotal: 16375596" (some systems don't have unit)
colonIdx := strings.Index(line, ":")
if colonIdx == -1 {
continue
}
key := strings.TrimSpace(line[:colonIdx])
valuePart := strings.TrimSpace(line[colonIdx+1:])
// Split value part to get number (ignore unit like "kB")
fields := strings.Fields(valuePart)
if len(fields) == 0 {
continue
}
value, err := strconv.ParseInt(fields[0], 10, 64)
if err != nil {
continue
}
// Values in /proc/meminfo are in KB, convert to bytes
valueBytes := value * 1024
switch key {
case "MemTotal":
memTotal = valueBytes
case "MemAvailable":
memAvailable = valueBytes
case "MemFree":
memFree = valueBytes
case "Buffers":
buffers = valueBytes
case "Cached":
cached = valueBytes
}
}
if err := scanner.Err(); err != nil {
s.logger.Warn("Error scanning /proc/meminfo", "error", err)
}
if memTotal == 0 {
s.logger.Warn("Failed to get MemTotal from /proc/meminfo, using Go runtime memory", "memTotal", memTotal)
var m runtime.MemStats
runtime.ReadMemStats(&m)
memoryUsed := int64(m.Alloc)
memoryTotal := int64(m.Sys)
memoryPercent := float64(memoryUsed) / float64(memoryTotal) * 100
return memoryTotal, memoryUsed, memoryPercent
}
// Calculate used memory
// If MemAvailable exists (kernel 3.14+), use it for more accurate calculation
var memoryUsed int64
if memAvailable > 0 {
memoryUsed = memTotal - memAvailable
} else {
// Fallback: MemTotal - MemFree - Buffers - Cached
memoryUsed = memTotal - memFree - buffers - cached
if memoryUsed < 0 {
memoryUsed = memTotal - memFree
}
}
memoryPercent := float64(memoryUsed) / float64(memTotal) * 100
s.logger.Debug("System memory stats",
"memTotal", memTotal,
"memAvailable", memAvailable,
"memoryUsed", memoryUsed,
"memoryPercent", memoryPercent)
return memTotal, memoryUsed, memoryPercent
}
// getCPUUsage reads CPU usage from /proc/stat
// Requires two readings to calculate percentage
func (s *MetricsService) getCPUUsage() float64 {
currentCPU, err := s.readCPUStats()
if err != nil {
s.logger.Warn("Failed to read CPU stats", "error", err)
return 0.0
}
// If this is the first reading, store it and return 0
if s.lastCPU == nil {
s.lastCPU = currentCPU
s.lastCPUTime = time.Now()
return 0.0
}
// Calculate time difference
timeDiff := time.Since(s.lastCPUTime).Seconds()
if timeDiff < 0.1 {
// Too soon, return previous value or 0
return 0.0
}
// Calculate total CPU time
prevTotal := s.lastCPU.user + s.lastCPU.nice + s.lastCPU.system + s.lastCPU.idle +
s.lastCPU.iowait + s.lastCPU.irq + s.lastCPU.softirq + s.lastCPU.steal + s.lastCPU.guest
currTotal := currentCPU.user + currentCPU.nice + currentCPU.system + currentCPU.idle +
currentCPU.iowait + currentCPU.irq + currentCPU.softirq + currentCPU.steal + currentCPU.guest
// Calculate idle time
prevIdle := s.lastCPU.idle + s.lastCPU.iowait
currIdle := currentCPU.idle + currentCPU.iowait
// Calculate used time
totalDiff := currTotal - prevTotal
idleDiff := currIdle - prevIdle
if totalDiff == 0 {
return 0.0
}
// Calculate CPU usage percentage
usagePercent := 100.0 * (1.0 - float64(idleDiff)/float64(totalDiff))
// Update last CPU stats
s.lastCPU = currentCPU
s.lastCPUTime = time.Now()
return usagePercent
}
// readCPUStats reads CPU statistics from /proc/stat
func (s *MetricsService) readCPUStats() (*cpuStats, error) {
file, err := os.Open("/proc/stat")
if err != nil {
return nil, fmt.Errorf("failed to open /proc/stat: %w", err)
}
defer file.Close()
scanner := bufio.NewScanner(file)
if !scanner.Scan() {
return nil, fmt.Errorf("failed to read /proc/stat")
}
line := strings.TrimSpace(scanner.Text())
if !strings.HasPrefix(line, "cpu ") {
return nil, fmt.Errorf("invalid /proc/stat format")
}
fields := strings.Fields(line)
if len(fields) < 8 {
return nil, fmt.Errorf("insufficient CPU stats fields")
}
stats := &cpuStats{}
stats.user, _ = strconv.ParseUint(fields[1], 10, 64)
stats.nice, _ = strconv.ParseUint(fields[2], 10, 64)
stats.system, _ = strconv.ParseUint(fields[3], 10, 64)
stats.idle, _ = strconv.ParseUint(fields[4], 10, 64)
stats.iowait, _ = strconv.ParseUint(fields[5], 10, 64)
stats.irq, _ = strconv.ParseUint(fields[6], 10, 64)
stats.softirq, _ = strconv.ParseUint(fields[7], 10, 64)
if len(fields) > 8 {
stats.steal, _ = strconv.ParseUint(fields[8], 10, 64)
}
if len(fields) > 9 {
stats.guest, _ = strconv.ParseUint(fields[9], 10, 64)
}
return stats, nil
}

View File

@@ -1,285 +0,0 @@
package object_storage
import (
"net/http"
"time"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/gin-gonic/gin"
)
// Handler handles HTTP requests for object storage
type Handler struct {
service *Service
setupService *SetupService
logger *logger.Logger
}
// NewHandler creates a new object storage handler
func NewHandler(service *Service, db *database.DB, log *logger.Logger) *Handler {
return &Handler{
service: service,
setupService: NewSetupService(db, log),
logger: log,
}
}
// ListBuckets lists all buckets
func (h *Handler) ListBuckets(c *gin.Context) {
buckets, err := h.service.ListBuckets(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list buckets", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list buckets: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"buckets": buckets})
}
// GetBucket gets bucket information
func (h *Handler) GetBucket(c *gin.Context) {
bucketName := c.Param("name")
bucket, err := h.service.GetBucketStats(c.Request.Context(), bucketName)
if err != nil {
h.logger.Error("Failed to get bucket", "bucket", bucketName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get bucket: " + err.Error()})
return
}
c.JSON(http.StatusOK, bucket)
}
// CreateBucketRequest represents a request to create a bucket
type CreateBucketRequest struct {
Name string `json:"name" binding:"required"`
}
// CreateBucket creates a new bucket
func (h *Handler) CreateBucket(c *gin.Context) {
var req CreateBucketRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create bucket request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
if err := h.service.CreateBucket(c.Request.Context(), req.Name); err != nil {
h.logger.Error("Failed to create bucket", "bucket", req.Name, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create bucket: " + err.Error()})
return
}
c.JSON(http.StatusCreated, gin.H{"message": "bucket created successfully", "name": req.Name})
}
// DeleteBucket deletes a bucket
func (h *Handler) DeleteBucket(c *gin.Context) {
bucketName := c.Param("name")
if err := h.service.DeleteBucket(c.Request.Context(), bucketName); err != nil {
h.logger.Error("Failed to delete bucket", "bucket", bucketName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete bucket: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "bucket deleted successfully"})
}
// GetAvailableDatasets gets all available pools and datasets for object storage setup
func (h *Handler) GetAvailableDatasets(c *gin.Context) {
datasets, err := h.setupService.GetAvailableDatasets(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get available datasets", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get available datasets: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"pools": datasets})
}
// SetupObjectStorageRequest represents a request to setup object storage
type SetupObjectStorageRequest struct {
PoolName string `json:"pool_name" binding:"required"`
DatasetName string `json:"dataset_name" binding:"required"`
CreateNew bool `json:"create_new"`
}
// SetupObjectStorage configures object storage with a ZFS dataset
func (h *Handler) SetupObjectStorage(c *gin.Context) {
var req SetupObjectStorageRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid setup request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
setupReq := SetupRequest{
PoolName: req.PoolName,
DatasetName: req.DatasetName,
CreateNew: req.CreateNew,
}
result, err := h.setupService.SetupObjectStorage(c.Request.Context(), setupReq)
if err != nil {
h.logger.Error("Failed to setup object storage", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to setup object storage: " + err.Error()})
return
}
c.JSON(http.StatusOK, result)
}
// GetCurrentSetup gets the current object storage configuration
func (h *Handler) GetCurrentSetup(c *gin.Context) {
setup, err := h.setupService.GetCurrentSetup(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get current setup", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get current setup: " + err.Error()})
return
}
if setup == nil {
c.JSON(http.StatusOK, gin.H{"configured": false})
return
}
c.JSON(http.StatusOK, gin.H{"configured": true, "setup": setup})
}
// UpdateObjectStorage updates the object storage configuration
func (h *Handler) UpdateObjectStorage(c *gin.Context) {
var req SetupObjectStorageRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid update request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
setupReq := SetupRequest{
PoolName: req.PoolName,
DatasetName: req.DatasetName,
CreateNew: req.CreateNew,
}
result, err := h.setupService.UpdateObjectStorage(c.Request.Context(), setupReq)
if err != nil {
h.logger.Error("Failed to update object storage", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to update object storage: " + err.Error()})
return
}
c.JSON(http.StatusOK, result)
}
// ListUsers lists all IAM users
func (h *Handler) ListUsers(c *gin.Context) {
users, err := h.service.ListUsers(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list users", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list users: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"users": users})
}
// CreateUserRequest represents a request to create a user
type CreateUserRequest struct {
AccessKey string `json:"access_key" binding:"required"`
SecretKey string `json:"secret_key" binding:"required"`
}
// CreateUser creates a new IAM user
func (h *Handler) CreateUser(c *gin.Context) {
var req CreateUserRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create user request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
if err := h.service.CreateUser(c.Request.Context(), req.AccessKey, req.SecretKey); err != nil {
h.logger.Error("Failed to create user", "access_key", req.AccessKey, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create user: " + err.Error()})
return
}
c.JSON(http.StatusCreated, gin.H{"message": "user created successfully", "access_key": req.AccessKey})
}
// DeleteUser deletes an IAM user
func (h *Handler) DeleteUser(c *gin.Context) {
accessKey := c.Param("access_key")
if err := h.service.DeleteUser(c.Request.Context(), accessKey); err != nil {
h.logger.Error("Failed to delete user", "access_key", accessKey, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete user: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "user deleted successfully"})
}
// ListServiceAccounts lists all service accounts (access keys)
func (h *Handler) ListServiceAccounts(c *gin.Context) {
accounts, err := h.service.ListServiceAccounts(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list service accounts", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list service accounts: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"service_accounts": accounts})
}
// CreateServiceAccountRequest represents a request to create a service account
type CreateServiceAccountRequest struct {
ParentUser string `json:"parent_user" binding:"required"`
Policy string `json:"policy,omitempty"`
Expiration *string `json:"expiration,omitempty"` // ISO 8601 format
}
// CreateServiceAccount creates a new service account (access key)
func (h *Handler) CreateServiceAccount(c *gin.Context) {
var req CreateServiceAccountRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create service account request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
var expiration *time.Time
if req.Expiration != nil {
exp, err := time.Parse(time.RFC3339, *req.Expiration)
if err != nil {
h.logger.Error("Invalid expiration format", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid expiration format, use ISO 8601 (RFC3339)"})
return
}
expiration = &exp
}
account, err := h.service.CreateServiceAccount(c.Request.Context(), req.ParentUser, req.Policy, expiration)
if err != nil {
h.logger.Error("Failed to create service account", "parent_user", req.ParentUser, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create service account: " + err.Error()})
return
}
c.JSON(http.StatusCreated, account)
}
// DeleteServiceAccount deletes a service account
func (h *Handler) DeleteServiceAccount(c *gin.Context) {
accessKey := c.Param("access_key")
if err := h.service.DeleteServiceAccount(c.Request.Context(), accessKey); err != nil {
h.logger.Error("Failed to delete service account", "access_key", accessKey, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete service account: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "service account deleted successfully"})
}

View File

@@ -1,297 +0,0 @@
package object_storage
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
madmin "github.com/minio/madmin-go/v3"
)
// Service handles MinIO object storage operations
type Service struct {
client *minio.Client
adminClient *madmin.AdminClient
logger *logger.Logger
endpoint string
accessKey string
secretKey string
}
// NewService creates a new MinIO service
func NewService(endpoint, accessKey, secretKey string, log *logger.Logger) (*Service, error) {
// Create MinIO client
minioClient, err := minio.New(endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: false, // Set to true if using HTTPS
})
if err != nil {
return nil, fmt.Errorf("failed to create MinIO client: %w", err)
}
// Create MinIO Admin client
adminClient, err := madmin.New(endpoint, accessKey, secretKey, false)
if err != nil {
return nil, fmt.Errorf("failed to create MinIO admin client: %w", err)
}
return &Service{
client: minioClient,
adminClient: adminClient,
logger: log,
endpoint: endpoint,
accessKey: accessKey,
secretKey: secretKey,
}, nil
}
// Bucket represents a MinIO bucket
type Bucket struct {
Name string `json:"name"`
CreationDate time.Time `json:"creation_date"`
Size int64 `json:"size"` // Total size in bytes
Objects int64 `json:"objects"` // Number of objects
AccessPolicy string `json:"access_policy"` // private, public-read, public-read-write
}
// ListBuckets lists all buckets in MinIO
func (s *Service) ListBuckets(ctx context.Context) ([]*Bucket, error) {
buckets, err := s.client.ListBuckets(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list buckets: %w", err)
}
result := make([]*Bucket, 0, len(buckets))
for _, bucket := range buckets {
bucketInfo, err := s.getBucketInfo(ctx, bucket.Name)
if err != nil {
s.logger.Warn("Failed to get bucket info", "bucket", bucket.Name, "error", err)
// Continue with basic info
result = append(result, &Bucket{
Name: bucket.Name,
CreationDate: bucket.CreationDate,
Size: 0,
Objects: 0,
AccessPolicy: "private",
})
continue
}
result = append(result, bucketInfo)
}
return result, nil
}
// getBucketInfo gets detailed information about a bucket
func (s *Service) getBucketInfo(ctx context.Context, bucketName string) (*Bucket, error) {
// Get bucket creation date
buckets, err := s.client.ListBuckets(ctx)
if err != nil {
return nil, err
}
var creationDate time.Time
for _, b := range buckets {
if b.Name == bucketName {
creationDate = b.CreationDate
break
}
}
// Get bucket size and object count by listing objects
var size int64
var objects int64
// List objects in bucket to calculate size and count
objectCh := s.client.ListObjects(ctx, bucketName, minio.ListObjectsOptions{
Recursive: true,
})
for object := range objectCh {
if object.Err != nil {
s.logger.Warn("Error listing object", "bucket", bucketName, "error", object.Err)
continue
}
objects++
size += object.Size
}
return &Bucket{
Name: bucketName,
CreationDate: creationDate,
Size: size,
Objects: objects,
AccessPolicy: s.getBucketPolicy(ctx, bucketName),
}, nil
}
// getBucketPolicy gets the access policy for a bucket
func (s *Service) getBucketPolicy(ctx context.Context, bucketName string) string {
policy, err := s.client.GetBucketPolicy(ctx, bucketName)
if err != nil {
return "private"
}
// Parse policy JSON to determine access type
// For simplicity, check if policy allows public read
if policy != "" {
// Check if policy contains public read access
if strings.Contains(policy, "s3:GetObject") && strings.Contains(policy, "Principal") && strings.Contains(policy, "*") {
if strings.Contains(policy, "s3:PutObject") {
return "public-read-write"
}
return "public-read"
}
}
return "private"
}
// CreateBucket creates a new bucket
func (s *Service) CreateBucket(ctx context.Context, bucketName string) error {
err := s.client.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{})
if err != nil {
return fmt.Errorf("failed to create bucket: %w", err)
}
return nil
}
// DeleteBucket deletes a bucket
func (s *Service) DeleteBucket(ctx context.Context, bucketName string) error {
err := s.client.RemoveBucket(ctx, bucketName)
if err != nil {
return fmt.Errorf("failed to delete bucket: %w", err)
}
return nil
}
// GetBucketStats gets statistics for a bucket
func (s *Service) GetBucketStats(ctx context.Context, bucketName string) (*Bucket, error) {
return s.getBucketInfo(ctx, bucketName)
}
// User represents a MinIO IAM user
type User struct {
AccessKey string `json:"access_key"`
Status string `json:"status"` // "enabled" or "disabled"
CreatedAt time.Time `json:"created_at"`
}
// ListUsers lists all IAM users in MinIO
func (s *Service) ListUsers(ctx context.Context) ([]*User, error) {
users, err := s.adminClient.ListUsers(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list users: %w", err)
}
result := make([]*User, 0, len(users))
for accessKey, userInfo := range users {
status := "enabled"
if userInfo.Status == madmin.AccountDisabled {
status = "disabled"
}
// MinIO doesn't provide creation date, use current time
result = append(result, &User{
AccessKey: accessKey,
Status: status,
CreatedAt: time.Now(),
})
}
return result, nil
}
// CreateUser creates a new IAM user in MinIO
func (s *Service) CreateUser(ctx context.Context, accessKey, secretKey string) error {
err := s.adminClient.AddUser(ctx, accessKey, secretKey)
if err != nil {
return fmt.Errorf("failed to create user: %w", err)
}
return nil
}
// DeleteUser deletes an IAM user from MinIO
func (s *Service) DeleteUser(ctx context.Context, accessKey string) error {
err := s.adminClient.RemoveUser(ctx, accessKey)
if err != nil {
return fmt.Errorf("failed to delete user: %w", err)
}
return nil
}
// ServiceAccount represents a MinIO service account (access key)
type ServiceAccount struct {
AccessKey string `json:"access_key"`
SecretKey string `json:"secret_key,omitempty"` // Only returned on creation
ParentUser string `json:"parent_user"`
Expiration time.Time `json:"expiration,omitempty"`
CreatedAt time.Time `json:"created_at"`
}
// ListServiceAccounts lists all service accounts in MinIO
func (s *Service) ListServiceAccounts(ctx context.Context) ([]*ServiceAccount, error) {
accounts, err := s.adminClient.ListServiceAccounts(ctx, "")
if err != nil {
return nil, fmt.Errorf("failed to list service accounts: %w", err)
}
result := make([]*ServiceAccount, 0, len(accounts.Accounts))
for _, account := range accounts.Accounts {
var expiration time.Time
if account.Expiration != nil {
expiration = *account.Expiration
}
result = append(result, &ServiceAccount{
AccessKey: account.AccessKey,
ParentUser: account.ParentUser,
Expiration: expiration,
CreatedAt: time.Now(), // MinIO doesn't provide creation date
})
}
return result, nil
}
// CreateServiceAccount creates a new service account (access key) in MinIO
func (s *Service) CreateServiceAccount(ctx context.Context, parentUser string, policy string, expiration *time.Time) (*ServiceAccount, error) {
opts := madmin.AddServiceAccountReq{
TargetUser: parentUser,
}
if policy != "" {
opts.Policy = json.RawMessage(policy)
}
if expiration != nil {
opts.Expiration = expiration
}
creds, err := s.adminClient.AddServiceAccount(ctx, opts)
if err != nil {
return nil, fmt.Errorf("failed to create service account: %w", err)
}
return &ServiceAccount{
AccessKey: creds.AccessKey,
SecretKey: creds.SecretKey,
ParentUser: parentUser,
Expiration: creds.Expiration,
CreatedAt: time.Now(),
}, nil
}
// DeleteServiceAccount deletes a service account from MinIO
func (s *Service) DeleteServiceAccount(ctx context.Context, accessKey string) error {
err := s.adminClient.DeleteServiceAccount(ctx, accessKey)
if err != nil {
return fmt.Errorf("failed to delete service account: %w", err)
}
return nil
}

View File

@@ -1,511 +0,0 @@
package object_storage
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
)
// SetupService handles object storage setup operations
type SetupService struct {
db *database.DB
logger *logger.Logger
}
// NewSetupService creates a new setup service
func NewSetupService(db *database.DB, log *logger.Logger) *SetupService {
return &SetupService{
db: db,
logger: log,
}
}
// PoolDatasetInfo represents a pool with its datasets
type PoolDatasetInfo struct {
PoolID string `json:"pool_id"`
PoolName string `json:"pool_name"`
Datasets []DatasetInfo `json:"datasets"`
}
// DatasetInfo represents a dataset that can be used for object storage
type DatasetInfo struct {
ID string `json:"id"`
Name string `json:"name"`
FullName string `json:"full_name"` // pool/dataset
MountPoint string `json:"mount_point"`
Type string `json:"type"`
UsedBytes int64 `json:"used_bytes"`
AvailableBytes int64 `json:"available_bytes"`
}
// GetAvailableDatasets returns all pools with their datasets that can be used for object storage
func (s *SetupService) GetAvailableDatasets(ctx context.Context) ([]PoolDatasetInfo, error) {
// Get all pools
poolsQuery := `
SELECT id, name
FROM zfs_pools
WHERE is_active = true
ORDER BY name
`
rows, err := s.db.QueryContext(ctx, poolsQuery)
if err != nil {
return nil, fmt.Errorf("failed to query pools: %w", err)
}
defer rows.Close()
var pools []PoolDatasetInfo
for rows.Next() {
var pool PoolDatasetInfo
if err := rows.Scan(&pool.PoolID, &pool.PoolName); err != nil {
s.logger.Warn("Failed to scan pool", "error", err)
continue
}
// Get datasets for this pool
datasetsQuery := `
SELECT id, name, type, mount_point, used_bytes, available_bytes
FROM zfs_datasets
WHERE pool_name = $1 AND type = 'filesystem'
ORDER BY name
`
datasetRows, err := s.db.QueryContext(ctx, datasetsQuery, pool.PoolName)
if err != nil {
s.logger.Warn("Failed to query datasets", "pool", pool.PoolName, "error", err)
pool.Datasets = []DatasetInfo{}
pools = append(pools, pool)
continue
}
var datasets []DatasetInfo
for datasetRows.Next() {
var ds DatasetInfo
var mountPoint sql.NullString
if err := datasetRows.Scan(&ds.ID, &ds.Name, &ds.Type, &mountPoint, &ds.UsedBytes, &ds.AvailableBytes); err != nil {
s.logger.Warn("Failed to scan dataset", "error", err)
continue
}
ds.FullName = fmt.Sprintf("%s/%s", pool.PoolName, ds.Name)
if mountPoint.Valid {
ds.MountPoint = mountPoint.String
} else {
ds.MountPoint = ""
}
datasets = append(datasets, ds)
}
datasetRows.Close()
pool.Datasets = datasets
pools = append(pools, pool)
}
return pools, nil
}
// SetupRequest represents a request to setup object storage
type SetupRequest struct {
PoolName string `json:"pool_name" binding:"required"`
DatasetName string `json:"dataset_name" binding:"required"`
CreateNew bool `json:"create_new"` // If true, create new dataset instead of using existing
}
// SetupResponse represents the response after setup
type SetupResponse struct {
DatasetPath string `json:"dataset_path"`
MountPoint string `json:"mount_point"`
Message string `json:"message"`
}
// SetupObjectStorage configures MinIO to use a specific ZFS dataset
func (s *SetupService) SetupObjectStorage(ctx context.Context, req SetupRequest) (*SetupResponse, error) {
var datasetPath, mountPoint string
// Normalize dataset name - if it already contains pool name, use it as-is
var fullDatasetName string
if strings.HasPrefix(req.DatasetName, req.PoolName+"/") {
// Dataset name already includes pool name (e.g., "pool/dataset")
fullDatasetName = req.DatasetName
} else {
// Dataset name is just the name (e.g., "dataset"), combine with pool
fullDatasetName = fmt.Sprintf("%s/%s", req.PoolName, req.DatasetName)
}
if req.CreateNew {
// Create new dataset for object storage
// Check if dataset already exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err == nil {
return nil, fmt.Errorf("dataset %s already exists", fullDatasetName)
}
// Create dataset
createCmd := exec.CommandContext(ctx, "sudo", "zfs", "create", fullDatasetName)
if output, err := createCmd.CombinedOutput(); err != nil {
return nil, fmt.Errorf("failed to create dataset: %s - %w", string(output), err)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Created new dataset for object storage", "dataset", fullDatasetName, "mount_point", mountPoint)
} else {
// Use existing dataset
// fullDatasetName already set above
// Verify dataset exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err != nil {
return nil, fmt.Errorf("dataset %s does not exist", fullDatasetName)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Using existing dataset for object storage", "dataset", fullDatasetName, "mount_point", mountPoint)
}
// Ensure mount point directory exists
if mountPoint != "none" && mountPoint != "" {
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mount point directory: %w", err)
}
} else {
// If no mount point, use default path
mountPoint = filepath.Join("/opt/calypso/data/pool", req.PoolName, req.DatasetName)
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create default directory: %w", err)
}
}
// Update MinIO configuration to use the selected dataset
if err := s.updateMinIOConfig(ctx, mountPoint); err != nil {
s.logger.Warn("Failed to update MinIO configuration", "error", err)
// Continue anyway, configuration is saved to database
}
// Save configuration to database
_, err := s.db.ExecContext(ctx, `
INSERT INTO object_storage_config (dataset_path, mount_point, pool_name, dataset_name, created_at, updated_at)
VALUES ($1, $2, $3, $4, NOW(), NOW())
ON CONFLICT (id) DO UPDATE
SET dataset_path = $1, mount_point = $2, pool_name = $3, dataset_name = $4, updated_at = NOW()
`, datasetPath, mountPoint, req.PoolName, req.DatasetName)
if err != nil {
// If table doesn't exist, just log warning
s.logger.Warn("Failed to save configuration to database (table may not exist)", "error", err)
}
return &SetupResponse{
DatasetPath: datasetPath,
MountPoint: mountPoint,
Message: fmt.Sprintf("Object storage configured to use dataset %s at %s. MinIO service needs to be restarted to use the new dataset.", datasetPath, mountPoint),
}, nil
}
// GetCurrentSetup returns the current object storage configuration
func (s *SetupService) GetCurrentSetup(ctx context.Context) (*SetupResponse, error) {
// Check if table exists first
var tableExists bool
checkQuery := `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'object_storage_config'
)
`
err := s.db.QueryRowContext(ctx, checkQuery).Scan(&tableExists)
if err != nil {
s.logger.Warn("Failed to check if object_storage_config table exists", "error", err)
return nil, nil // Return nil if can't check
}
if !tableExists {
s.logger.Debug("object_storage_config table does not exist")
return nil, nil // No table, no configuration
}
query := `
SELECT dataset_path, mount_point, pool_name, dataset_name
FROM object_storage_config
ORDER BY updated_at DESC
LIMIT 1
`
var resp SetupResponse
var poolName, datasetName string
err = s.db.QueryRowContext(ctx, query).Scan(&resp.DatasetPath, &resp.MountPoint, &poolName, &datasetName)
if err == sql.ErrNoRows {
s.logger.Debug("No configuration found in database")
return nil, nil // No configuration found
}
if err != nil {
// Check if error is due to table not existing or permission denied
errStr := err.Error()
if strings.Contains(errStr, "does not exist") || strings.Contains(errStr, "permission denied") {
s.logger.Debug("Table does not exist or permission denied, returning nil", "error", errStr)
return nil, nil // Return nil instead of error
}
s.logger.Error("Failed to scan current setup", "error", err)
return nil, fmt.Errorf("failed to get current setup: %w", err)
}
s.logger.Debug("Found current setup", "dataset_path", resp.DatasetPath, "mount_point", resp.MountPoint, "pool", poolName, "dataset", datasetName)
// Use dataset_path directly since it already contains the full path
resp.Message = fmt.Sprintf("Using dataset %s at %s", resp.DatasetPath, resp.MountPoint)
return &resp, nil
}
// UpdateObjectStorage updates the object storage configuration to use a different dataset
// This will update the configuration but won't migrate existing data
func (s *SetupService) UpdateObjectStorage(ctx context.Context, req SetupRequest) (*SetupResponse, error) {
// First check if there's existing configuration
currentSetup, err := s.GetCurrentSetup(ctx)
if err != nil {
return nil, fmt.Errorf("failed to check current setup: %w", err)
}
if currentSetup == nil {
// No existing setup, just do normal setup
return s.SetupObjectStorage(ctx, req)
}
// There's existing setup, proceed with update
var datasetPath, mountPoint string
// Normalize dataset name - if it already contains pool name, use it as-is
var fullDatasetName string
if strings.HasPrefix(req.DatasetName, req.PoolName+"/") {
// Dataset name already includes pool name (e.g., "pool/dataset")
fullDatasetName = req.DatasetName
} else {
// Dataset name is just the name (e.g., "dataset"), combine with pool
fullDatasetName = fmt.Sprintf("%s/%s", req.PoolName, req.DatasetName)
}
if req.CreateNew {
// Create new dataset for object storage
// Check if dataset already exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err == nil {
return nil, fmt.Errorf("dataset %s already exists", fullDatasetName)
}
// Create dataset
createCmd := exec.CommandContext(ctx, "sudo", "zfs", "create", fullDatasetName)
if output, err := createCmd.CombinedOutput(); err != nil {
return nil, fmt.Errorf("failed to create dataset: %s - %w", string(output), err)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Created new dataset for object storage update", "dataset", fullDatasetName, "mount_point", mountPoint)
} else {
// Use existing dataset
// fullDatasetName already set above
// Verify dataset exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err != nil {
return nil, fmt.Errorf("dataset %s does not exist", fullDatasetName)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Using existing dataset for object storage update", "dataset", fullDatasetName, "mount_point", mountPoint)
}
// Ensure mount point directory exists
if mountPoint != "none" && mountPoint != "" {
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mount point directory: %w", err)
}
} else {
// If no mount point, use default path
mountPoint = filepath.Join("/opt/calypso/data/pool", req.PoolName, req.DatasetName)
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create default directory: %w", err)
}
}
// Update configuration in database
_, err = s.db.ExecContext(ctx, `
UPDATE object_storage_config
SET dataset_path = $1, mount_point = $2, pool_name = $3, dataset_name = $4, updated_at = NOW()
WHERE id = (SELECT id FROM object_storage_config ORDER BY updated_at DESC LIMIT 1)
`, datasetPath, mountPoint, req.PoolName, req.DatasetName)
if err != nil {
// If update fails, try insert
_, err = s.db.ExecContext(ctx, `
INSERT INTO object_storage_config (dataset_path, mount_point, pool_name, dataset_name, created_at, updated_at)
VALUES ($1, $2, $3, $4, NOW(), NOW())
ON CONFLICT (dataset_path) DO UPDATE
SET mount_point = $2, pool_name = $3, dataset_name = $4, updated_at = NOW()
`, datasetPath, mountPoint, req.PoolName, req.DatasetName)
if err != nil {
s.logger.Warn("Failed to update configuration in database", "error", err)
}
}
// Update MinIO configuration to use the selected dataset
if err := s.updateMinIOConfig(ctx, mountPoint); err != nil {
s.logger.Warn("Failed to update MinIO configuration", "error", err)
// Continue anyway, configuration is saved to database
} else {
// Restart MinIO service to apply new configuration
if err := s.restartMinIOService(ctx); err != nil {
s.logger.Warn("Failed to restart MinIO service", "error", err)
// Continue anyway, user can restart manually
}
}
return &SetupResponse{
DatasetPath: datasetPath,
MountPoint: mountPoint,
Message: fmt.Sprintf("Object storage updated to use dataset %s at %s. Note: Existing data in previous dataset (%s) is not migrated automatically. MinIO service has been restarted.", datasetPath, mountPoint, currentSetup.DatasetPath),
}, nil
}
// updateMinIOConfig updates MinIO configuration file to use dataset mount point directly
// Note: MinIO erasure coding requires direct directory paths, not symlinks
func (s *SetupService) updateMinIOConfig(ctx context.Context, datasetMountPoint string) error {
configFile := "/opt/calypso/conf/minio/minio.conf"
// Ensure dataset mount point directory exists and has correct ownership
if err := os.MkdirAll(datasetMountPoint, 0755); err != nil {
return fmt.Errorf("failed to create dataset mount point directory: %w", err)
}
// Set ownership to minio-user so MinIO can write to it
if err := exec.CommandContext(ctx, "sudo", "chown", "-R", "minio-user:minio-user", datasetMountPoint).Run(); err != nil {
s.logger.Warn("Failed to set ownership on dataset mount point", "path", datasetMountPoint, "error", err)
// Continue anyway, might already have correct ownership
}
// Set permissions
if err := exec.CommandContext(ctx, "sudo", "chmod", "755", datasetMountPoint).Run(); err != nil {
s.logger.Warn("Failed to set permissions on dataset mount point", "path", datasetMountPoint, "error", err)
}
s.logger.Info("Prepared dataset mount point for MinIO", "path", datasetMountPoint)
// Read current config file
configContent, err := os.ReadFile(configFile)
if err != nil {
// If file doesn't exist, create it
if os.IsNotExist(err) {
configContent = []byte(fmt.Sprintf("MINIO_ROOT_USER=admin\nMINIO_ROOT_PASSWORD=HqBX1IINqFynkWFa\nMINIO_VOLUMES=%s\n", datasetMountPoint))
} else {
return fmt.Errorf("failed to read MinIO config file: %w", err)
}
} else {
// Update MINIO_VOLUMES in config
lines := strings.Split(string(configContent), "\n")
updated := false
for i, line := range lines {
if strings.HasPrefix(strings.TrimSpace(line), "MINIO_VOLUMES=") {
lines[i] = fmt.Sprintf("MINIO_VOLUMES=%s", datasetMountPoint)
updated = true
break
}
}
if !updated {
// Add MINIO_VOLUMES if not found
lines = append(lines, fmt.Sprintf("MINIO_VOLUMES=%s", datasetMountPoint))
}
configContent = []byte(strings.Join(lines, "\n"))
}
// Write updated config using sudo
// Write temp file to a location we can write to
userTempFile := fmt.Sprintf("/tmp/minio.conf.%d.tmp", os.Getpid())
if err := os.WriteFile(userTempFile, configContent, 0644); err != nil {
return fmt.Errorf("failed to write temp config file: %w", err)
}
defer os.Remove(userTempFile) // Cleanup
// Copy temp file to config location with sudo
if err := exec.CommandContext(ctx, "sudo", "cp", userTempFile, configFile).Run(); err != nil {
return fmt.Errorf("failed to update config file: %w", err)
}
// Set proper ownership and permissions
if err := exec.CommandContext(ctx, "sudo", "chown", "minio-user:minio-user", configFile).Run(); err != nil {
s.logger.Warn("Failed to set config file ownership", "error", err)
}
if err := exec.CommandContext(ctx, "sudo", "chmod", "644", configFile).Run(); err != nil {
s.logger.Warn("Failed to set config file permissions", "error", err)
}
s.logger.Info("Updated MinIO configuration", "config_file", configFile, "volumes", datasetMountPoint)
return nil
}
// restartMinIOService restarts the MinIO service to apply new configuration
func (s *SetupService) restartMinIOService(ctx context.Context) error {
// Restart MinIO service using sudo
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "restart", "minio.service")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to restart MinIO service: %w", err)
}
// Wait a moment for service to start
time.Sleep(2 * time.Second)
// Verify service is running
checkCmd := exec.CommandContext(ctx, "sudo", "systemctl", "is-active", "minio.service")
output, err := checkCmd.Output()
if err != nil {
return fmt.Errorf("failed to check MinIO service status: %w", err)
}
status := strings.TrimSpace(string(output))
if status != "active" {
return fmt.Errorf("MinIO service is not active after restart, status: %s", status)
}
s.logger.Info("MinIO service restarted successfully")
return nil
}

View File

@@ -1,32 +1,29 @@
package scst
import (
"fmt"
"net/http"
"strings"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/atlasos/calypso/internal/tasks"
"github.com/gin-gonic/gin"
"github.com/go-playground/validator/v10"
)
// Handler handles SCST-related API requests
type Handler struct {
service *Service
service *Service
taskEngine *tasks.Engine
db *database.DB
logger *logger.Logger
db *database.DB
logger *logger.Logger
}
// NewHandler creates a new SCST handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
return &Handler{
service: NewService(db, log),
service: NewService(db, log),
taskEngine: tasks.NewEngine(db, log),
db: db,
logger: log,
db: db,
logger: log,
}
}
@@ -39,11 +36,6 @@ func (h *Handler) ListTargets(c *gin.Context) {
return
}
// Ensure we return an empty array instead of null
if targets == nil {
targets = []Target{}
}
c.JSON(http.StatusOK, gin.H{"targets": targets})
}
@@ -63,34 +55,21 @@ func (h *Handler) GetTarget(c *gin.Context) {
}
// Get LUNs
luns, err := h.service.GetTargetLUNs(c.Request.Context(), targetID)
if err != nil {
h.logger.Warn("Failed to get LUNs", "target_id", targetID, "error", err)
// Return empty array instead of nil
luns = []LUN{}
}
// Get initiator groups
groups, err2 := h.service.GetTargetInitiatorGroups(c.Request.Context(), targetID)
if err2 != nil {
h.logger.Warn("Failed to get initiator groups", "target_id", targetID, "error", err2)
groups = []InitiatorGroup{}
}
luns, _ := h.service.GetTargetLUNs(c.Request.Context(), targetID)
c.JSON(http.StatusOK, gin.H{
"target": target,
"luns": luns,
"initiator_groups": groups,
"target": target,
"luns": luns,
})
}
// CreateTargetRequest represents a target creation request
type CreateTargetRequest struct {
IQN string `json:"iqn" binding:"required"`
TargetType string `json:"target_type" binding:"required"`
Name string `json:"name" binding:"required"`
Description string `json:"description"`
SingleInitiatorOnly bool `json:"single_initiator_only"`
IQN string `json:"iqn" binding:"required"`
TargetType string `json:"target_type" binding:"required"`
Name string `json:"name" binding:"required"`
Description string `json:"description"`
SingleInitiatorOnly bool `json:"single_initiator_only"`
}
// CreateTarget creates a new SCST target
@@ -104,13 +83,13 @@ func (h *Handler) CreateTarget(c *gin.Context) {
userID, _ := c.Get("user_id")
target := &Target{
IQN: req.IQN,
TargetType: req.TargetType,
Name: req.Name,
Description: req.Description,
IsActive: true,
IQN: req.IQN,
TargetType: req.TargetType,
Name: req.Name,
Description: req.Description,
IsActive: true,
SingleInitiatorOnly: req.SingleInitiatorOnly || req.TargetType == "vtl" || req.TargetType == "physical_tape",
CreatedBy: userID.(string),
CreatedBy: userID.(string),
}
if err := h.service.CreateTarget(c.Request.Context(), target); err != nil {
@@ -119,19 +98,14 @@ func (h *Handler) CreateTarget(c *gin.Context) {
return
}
// Set alias to name for frontend compatibility (same as ListTargets)
target.Alias = target.Name
// LUNCount will be 0 for newly created target
target.LUNCount = 0
c.JSON(http.StatusCreated, target)
}
// AddLUNRequest represents a LUN addition request
type AddLUNRequest struct {
DeviceName string `json:"device_name" binding:"required"`
DevicePath string `json:"device_path" binding:"required"`
LUNNumber int `json:"lun_number"` // Note: cannot use binding:"required" for int as 0 is valid
DeviceName string `json:"device_name" binding:"required"`
DevicePath string `json:"device_path" binding:"required"`
LUNNumber int `json:"lun_number" binding:"required"`
HandlerType string `json:"handler_type" binding:"required"`
}
@@ -147,43 +121,7 @@ func (h *Handler) AddLUN(c *gin.Context) {
var req AddLUNRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Failed to bind AddLUN request", "error", err)
// Provide more detailed error message
if validationErr, ok := err.(validator.ValidationErrors); ok {
var errorMessages []string
for _, fieldErr := range validationErr {
errorMessages = append(errorMessages, fmt.Sprintf("%s is required", fieldErr.Field()))
}
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("validation failed: %s", strings.Join(errorMessages, ", "))})
} else {
// Extract error message without full struct name
errMsg := err.Error()
if idx := strings.Index(errMsg, "Key: '"); idx >= 0 {
// Extract field name from error message
fieldStart := idx + 6 // Length of "Key: '"
if fieldEnd := strings.Index(errMsg[fieldStart:], "'"); fieldEnd >= 0 {
fieldName := errMsg[fieldStart : fieldStart+fieldEnd]
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid or missing field: %s", fieldName)})
} else {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request format"})
}
} else {
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid request: %v", err)})
}
}
return
}
// Validate required fields (additional check in case binding doesn't catch it)
if req.DeviceName == "" || req.DevicePath == "" || req.HandlerType == "" {
h.logger.Error("Missing required fields in AddLUN request", "device_name", req.DeviceName, "device_path", req.DevicePath, "handler_type", req.HandlerType)
c.JSON(http.StatusBadRequest, gin.H{"error": "device_name, device_path, and handler_type are required"})
return
}
// Validate LUN number range
if req.LUNNumber < 0 || req.LUNNumber > 255 {
c.JSON(http.StatusBadRequest, gin.H{"error": "lun_number must be between 0 and 255"})
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
return
}
@@ -196,48 +134,6 @@ func (h *Handler) AddLUN(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"message": "LUN added successfully"})
}
// RemoveLUN removes a LUN from a target
func (h *Handler) RemoveLUN(c *gin.Context) {
targetID := c.Param("id")
lunID := c.Param("lunId")
// Get target
target, err := h.service.GetTarget(c.Request.Context(), targetID)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
return
}
// Get LUN to get the LUN number
var lunNumber int
err = h.db.QueryRowContext(c.Request.Context(),
"SELECT lun_number FROM scst_luns WHERE id = $1 AND target_id = $2",
lunID, targetID,
).Scan(&lunNumber)
if err != nil {
if strings.Contains(err.Error(), "no rows") {
// LUN already deleted from database - check if it still exists in SCST
// Try to get LUN number from URL or try common LUN numbers
// For now, return success since it's already deleted (idempotent)
h.logger.Info("LUN not found in database, may already be deleted", "lun_id", lunID, "target_id", targetID)
c.JSON(http.StatusOK, gin.H{"message": "LUN already removed or not found"})
return
}
h.logger.Error("Failed to get LUN", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get LUN"})
return
}
// Remove LUN
if err := h.service.RemoveLUN(c.Request.Context(), target.IQN, lunNumber); err != nil {
h.logger.Error("Failed to remove LUN", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "LUN removed successfully"})
}
// AddInitiatorRequest represents an initiator addition request
type AddInitiatorRequest struct {
InitiatorIQN string `json:"initiator_iqn" binding:"required"`
@@ -268,149 +164,6 @@ func (h *Handler) AddInitiator(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"message": "Initiator added successfully"})
}
// AddInitiatorToGroupRequest represents a request to add an initiator to a group
type AddInitiatorToGroupRequest struct {
InitiatorIQN string `json:"initiator_iqn" binding:"required"`
}
// AddInitiatorToGroup adds an initiator to a specific group
func (h *Handler) AddInitiatorToGroup(c *gin.Context) {
groupID := c.Param("id")
var req AddInitiatorToGroupRequest
if err := c.ShouldBindJSON(&req); err != nil {
validationErrors := make(map[string]string)
if ve, ok := err.(validator.ValidationErrors); ok {
for _, fe := range ve {
field := strings.ToLower(fe.Field())
validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
}
}
c.JSON(http.StatusBadRequest, gin.H{
"error": "invalid request",
"validation_errors": validationErrors,
})
return
}
err := h.service.AddInitiatorToGroup(c.Request.Context(), groupID, req.InitiatorIQN)
if err != nil {
if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "already exists") || strings.Contains(err.Error(), "single initiator only") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to add initiator to group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to add initiator to group"})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Initiator added to group successfully"})
}
// ListAllInitiators lists all initiators across all targets
func (h *Handler) ListAllInitiators(c *gin.Context) {
initiators, err := h.service.ListAllInitiators(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list initiators", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list initiators"})
return
}
if initiators == nil {
initiators = []InitiatorWithTarget{}
}
c.JSON(http.StatusOK, gin.H{"initiators": initiators})
}
// RemoveInitiator removes an initiator
func (h *Handler) RemoveInitiator(c *gin.Context) {
initiatorID := c.Param("id")
if err := h.service.RemoveInitiator(c.Request.Context(), initiatorID); err != nil {
if err.Error() == "initiator not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "initiator not found"})
return
}
h.logger.Error("Failed to remove initiator", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Initiator removed successfully"})
}
// GetInitiator retrieves an initiator by ID
func (h *Handler) GetInitiator(c *gin.Context) {
initiatorID := c.Param("id")
initiator, err := h.service.GetInitiator(c.Request.Context(), initiatorID)
if err != nil {
if err.Error() == "initiator not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "initiator not found"})
return
}
h.logger.Error("Failed to get initiator", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get initiator"})
return
}
c.JSON(http.StatusOK, initiator)
}
// ListExtents lists all device extents
func (h *Handler) ListExtents(c *gin.Context) {
extents, err := h.service.ListExtents(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list extents", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list extents"})
return
}
if extents == nil {
extents = []Extent{}
}
c.JSON(http.StatusOK, gin.H{"extents": extents})
}
// CreateExtentRequest represents a request to create an extent
type CreateExtentRequest struct {
DeviceName string `json:"device_name" binding:"required"`
DevicePath string `json:"device_path" binding:"required"`
HandlerType string `json:"handler_type" binding:"required"`
}
// CreateExtent creates a new device extent
func (h *Handler) CreateExtent(c *gin.Context) {
var req CreateExtentRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
return
}
if err := h.service.CreateExtent(c.Request.Context(), req.DeviceName, req.DevicePath, req.HandlerType); err != nil {
h.logger.Error("Failed to create extent", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, gin.H{"message": "Extent created successfully"})
}
// DeleteExtent deletes a device extent
func (h *Handler) DeleteExtent(c *gin.Context) {
deviceName := c.Param("device")
if err := h.service.DeleteExtent(c.Request.Context(), deviceName); err != nil {
h.logger.Error("Failed to delete extent", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Extent deleted successfully"})
}
// ApplyConfig applies SCST configuration
func (h *Handler) ApplyConfig(c *gin.Context) {
userID, _ := c.Get("user_id")
@@ -456,338 +209,3 @@ func (h *Handler) ListHandlers(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"handlers": handlers})
}
// ListPortals lists all iSCSI portals
func (h *Handler) ListPortals(c *gin.Context) {
portals, err := h.service.ListPortals(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list portals", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list portals"})
return
}
// Ensure we return an empty array instead of null
if portals == nil {
portals = []Portal{}
}
c.JSON(http.StatusOK, gin.H{"portals": portals})
}
// CreatePortal creates a new portal
func (h *Handler) CreatePortal(c *gin.Context) {
var portal Portal
if err := c.ShouldBindJSON(&portal); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
return
}
if err := h.service.CreatePortal(c.Request.Context(), &portal); err != nil {
h.logger.Error("Failed to create portal", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, portal)
}
// UpdatePortal updates a portal
func (h *Handler) UpdatePortal(c *gin.Context) {
id := c.Param("id")
var portal Portal
if err := c.ShouldBindJSON(&portal); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
return
}
if err := h.service.UpdatePortal(c.Request.Context(), id, &portal); err != nil {
if err.Error() == "portal not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "portal not found"})
return
}
h.logger.Error("Failed to update portal", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, portal)
}
// EnableTarget enables a target
func (h *Handler) EnableTarget(c *gin.Context) {
targetID := c.Param("id")
target, err := h.service.GetTarget(c.Request.Context(), targetID)
if err != nil {
if err.Error() == "target not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
return
}
h.logger.Error("Failed to get target", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get target"})
return
}
if err := h.service.EnableTarget(c.Request.Context(), target.IQN); err != nil {
h.logger.Error("Failed to enable target", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Target enabled successfully"})
}
// DisableTarget disables a target
func (h *Handler) DisableTarget(c *gin.Context) {
targetID := c.Param("id")
target, err := h.service.GetTarget(c.Request.Context(), targetID)
if err != nil {
if err.Error() == "target not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
return
}
h.logger.Error("Failed to get target", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get target"})
return
}
if err := h.service.DisableTarget(c.Request.Context(), target.IQN); err != nil {
h.logger.Error("Failed to disable target", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Target disabled successfully"})
}
// DeleteTarget deletes a target
func (h *Handler) DeleteTarget(c *gin.Context) {
targetID := c.Param("id")
if err := h.service.DeleteTarget(c.Request.Context(), targetID); err != nil {
if err.Error() == "target not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
return
}
h.logger.Error("Failed to delete target", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Target deleted successfully"})
}
// DeletePortal deletes a portal
func (h *Handler) DeletePortal(c *gin.Context) {
id := c.Param("id")
if err := h.service.DeletePortal(c.Request.Context(), id); err != nil {
if err.Error() == "portal not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "portal not found"})
return
}
h.logger.Error("Failed to delete portal", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Portal deleted successfully"})
}
// GetPortal retrieves a portal by ID
func (h *Handler) GetPortal(c *gin.Context) {
id := c.Param("id")
portal, err := h.service.GetPortal(c.Request.Context(), id)
if err != nil {
if err.Error() == "portal not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "portal not found"})
return
}
h.logger.Error("Failed to get portal", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get portal"})
return
}
c.JSON(http.StatusOK, portal)
}
// CreateInitiatorGroupRequest represents a request to create an initiator group
type CreateInitiatorGroupRequest struct {
TargetID string `json:"target_id" binding:"required"`
GroupName string `json:"group_name" binding:"required"`
}
// CreateInitiatorGroup creates a new initiator group
func (h *Handler) CreateInitiatorGroup(c *gin.Context) {
var req CreateInitiatorGroupRequest
if err := c.ShouldBindJSON(&req); err != nil {
validationErrors := make(map[string]string)
if ve, ok := err.(validator.ValidationErrors); ok {
for _, fe := range ve {
field := strings.ToLower(fe.Field())
validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
}
}
c.JSON(http.StatusBadRequest, gin.H{
"error": "invalid request",
"validation_errors": validationErrors,
})
return
}
group, err := h.service.CreateInitiatorGroup(c.Request.Context(), req.TargetID, req.GroupName)
if err != nil {
if strings.Contains(err.Error(), "already exists") || strings.Contains(err.Error(), "not found") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to create initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create initiator group"})
return
}
c.JSON(http.StatusOK, group)
}
// UpdateInitiatorGroupRequest represents a request to update an initiator group
type UpdateInitiatorGroupRequest struct {
GroupName string `json:"group_name" binding:"required"`
}
// UpdateInitiatorGroup updates an initiator group
func (h *Handler) UpdateInitiatorGroup(c *gin.Context) {
groupID := c.Param("id")
var req UpdateInitiatorGroupRequest
if err := c.ShouldBindJSON(&req); err != nil {
validationErrors := make(map[string]string)
if ve, ok := err.(validator.ValidationErrors); ok {
for _, fe := range ve {
field := strings.ToLower(fe.Field())
validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
}
}
c.JSON(http.StatusBadRequest, gin.H{
"error": "invalid request",
"validation_errors": validationErrors,
})
return
}
group, err := h.service.UpdateInitiatorGroup(c.Request.Context(), groupID, req.GroupName)
if err != nil {
if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "already exists") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to update initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to update initiator group"})
return
}
c.JSON(http.StatusOK, group)
}
// DeleteInitiatorGroup deletes an initiator group
func (h *Handler) DeleteInitiatorGroup(c *gin.Context) {
groupID := c.Param("id")
err := h.service.DeleteInitiatorGroup(c.Request.Context(), groupID)
if err != nil {
if strings.Contains(err.Error(), "not found") {
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
return
}
if strings.Contains(err.Error(), "cannot delete") || strings.Contains(err.Error(), "contains") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to delete initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete initiator group"})
return
}
c.JSON(http.StatusOK, gin.H{"message": "initiator group deleted successfully"})
}
// GetInitiatorGroup retrieves an initiator group by ID
func (h *Handler) GetInitiatorGroup(c *gin.Context) {
groupID := c.Param("id")
group, err := h.service.GetInitiatorGroup(c.Request.Context(), groupID)
if err != nil {
if strings.Contains(err.Error(), "not found") {
c.JSON(http.StatusNotFound, gin.H{"error": "initiator group not found"})
return
}
h.logger.Error("Failed to get initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get initiator group"})
return
}
c.JSON(http.StatusOK, group)
}
// ListAllInitiatorGroups lists all initiator groups
func (h *Handler) ListAllInitiatorGroups(c *gin.Context) {
groups, err := h.service.ListAllInitiatorGroups(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list initiator groups", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list initiator groups"})
return
}
if groups == nil {
groups = []InitiatorGroup{}
}
c.JSON(http.StatusOK, gin.H{"groups": groups})
}
// GetConfigFile reads the SCST configuration file content
func (h *Handler) GetConfigFile(c *gin.Context) {
configPath := c.DefaultQuery("path", "/etc/scst.conf")
content, err := h.service.ReadConfigFile(c.Request.Context(), configPath)
if err != nil {
h.logger.Error("Failed to read config file", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"content": content,
"path": configPath,
})
}
// UpdateConfigFile writes content to SCST configuration file
func (h *Handler) UpdateConfigFile(c *gin.Context) {
var req struct {
Content string `json:"content" binding:"required"`
Path string `json:"path"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
return
}
configPath := req.Path
if configPath == "" {
configPath = "/etc/scst.conf"
}
if err := h.service.WriteConfigFile(c.Request.Context(), configPath, req.Content); err != nil {
h.logger.Error("Failed to write config file", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"message": "Configuration file updated successfully",
"path": configPath,
})
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,147 +0,0 @@
package shares
import (
"net/http"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/gin-gonic/gin"
"github.com/go-playground/validator/v10"
)
// Handler handles Shares-related API requests
type Handler struct {
service *Service
logger *logger.Logger
}
// NewHandler creates a new Shares handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
return &Handler{
service: NewService(db, log),
logger: log,
}
}
// ListShares lists all shares
func (h *Handler) ListShares(c *gin.Context) {
shares, err := h.service.ListShares(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list shares", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list shares"})
return
}
// Ensure we return an empty array instead of null
if shares == nil {
shares = []*Share{}
}
c.JSON(http.StatusOK, gin.H{"shares": shares})
}
// GetShare retrieves a share by ID
func (h *Handler) GetShare(c *gin.Context) {
shareID := c.Param("id")
share, err := h.service.GetShare(c.Request.Context(), shareID)
if err != nil {
if err.Error() == "share not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
return
}
h.logger.Error("Failed to get share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get share"})
return
}
c.JSON(http.StatusOK, share)
}
// CreateShare creates a new share
func (h *Handler) CreateShare(c *gin.Context) {
var req CreateShareRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create share request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
// Validate request
validate := validator.New()
if err := validate.Struct(req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "validation failed: " + err.Error()})
return
}
// Get user ID from context (set by auth middleware)
userID, exists := c.Get("user_id")
if !exists {
c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
return
}
share, err := h.service.CreateShare(c.Request.Context(), &req, userID.(string))
if err != nil {
if err.Error() == "dataset not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "dataset not found"})
return
}
if err.Error() == "only filesystem datasets can be shared (not volumes)" {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
if err.Error() == "at least one protocol (NFS or SMB) must be enabled" {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to create share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, share)
}
// UpdateShare updates an existing share
func (h *Handler) UpdateShare(c *gin.Context) {
shareID := c.Param("id")
var req UpdateShareRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid update share request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
share, err := h.service.UpdateShare(c.Request.Context(), shareID, &req)
if err != nil {
if err.Error() == "share not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
return
}
h.logger.Error("Failed to update share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, share)
}
// DeleteShare deletes a share
func (h *Handler) DeleteShare(c *gin.Context) {
shareID := c.Param("id")
err := h.service.DeleteShare(c.Request.Context(), shareID)
if err != nil {
if err.Error() == "share not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
return
}
h.logger.Error("Failed to delete share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "share deleted successfully"})
}

View File

@@ -1,806 +0,0 @@
package shares
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/lib/pq"
)
// Service handles Shares (CIFS/NFS) operations
type Service struct {
db *database.DB
logger *logger.Logger
}
// NewService creates a new Shares service
func NewService(db *database.DB, log *logger.Logger) *Service {
return &Service{
db: db,
logger: log,
}
}
// Share represents a filesystem share (NFS/SMB)
type Share struct {
ID string `json:"id"`
DatasetID string `json:"dataset_id"`
DatasetName string `json:"dataset_name"`
MountPoint string `json:"mount_point"`
ShareType string `json:"share_type"` // 'nfs', 'smb', 'both'
NFSEnabled bool `json:"nfs_enabled"`
NFSOptions string `json:"nfs_options,omitempty"`
NFSClients []string `json:"nfs_clients,omitempty"`
SMBEnabled bool `json:"smb_enabled"`
SMBShareName string `json:"smb_share_name,omitempty"`
SMBPath string `json:"smb_path,omitempty"`
SMBComment string `json:"smb_comment,omitempty"`
SMBGuestOK bool `json:"smb_guest_ok"`
SMBReadOnly bool `json:"smb_read_only"`
SMBBrowseable bool `json:"smb_browseable"`
IsActive bool `json:"is_active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
}
// ListShares lists all shares
func (s *Service) ListShares(ctx context.Context) ([]*Share, error) {
query := `
SELECT
zs.id, zs.dataset_id, zd.name as dataset_name, zd.mount_point,
zs.share_type, zs.nfs_enabled, zs.nfs_options, zs.nfs_clients,
zs.smb_enabled, zs.smb_share_name, zs.smb_path, zs.smb_comment,
zs.smb_guest_ok, zs.smb_read_only, zs.smb_browseable,
zs.is_active, zs.created_at, zs.updated_at, zs.created_by
FROM zfs_shares zs
JOIN zfs_datasets zd ON zs.dataset_id = zd.id
ORDER BY zd.name
`
rows, err := s.db.QueryContext(ctx, query)
if err != nil {
if strings.Contains(err.Error(), "does not exist") {
s.logger.Warn("zfs_shares table does not exist, returning empty list")
return []*Share{}, nil
}
return nil, fmt.Errorf("failed to list shares: %w", err)
}
defer rows.Close()
var shares []*Share
for rows.Next() {
var share Share
var mountPoint sql.NullString
var nfsOptions sql.NullString
var smbShareName sql.NullString
var smbPath sql.NullString
var smbComment sql.NullString
var nfsClients []string
err := rows.Scan(
&share.ID, &share.DatasetID, &share.DatasetName, &mountPoint,
&share.ShareType, &share.NFSEnabled, &nfsOptions, pq.Array(&nfsClients),
&share.SMBEnabled, &smbShareName, &smbPath, &smbComment,
&share.SMBGuestOK, &share.SMBReadOnly, &share.SMBBrowseable,
&share.IsActive, &share.CreatedAt, &share.UpdatedAt, &share.CreatedBy,
)
if err != nil {
s.logger.Error("Failed to scan share row", "error", err)
continue
}
share.NFSClients = nfsClients
if mountPoint.Valid {
share.MountPoint = mountPoint.String
}
if nfsOptions.Valid {
share.NFSOptions = nfsOptions.String
}
if smbShareName.Valid {
share.SMBShareName = smbShareName.String
}
if smbPath.Valid {
share.SMBPath = smbPath.String
}
if smbComment.Valid {
share.SMBComment = smbComment.String
}
shares = append(shares, &share)
}
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("error iterating share rows: %w", err)
}
return shares, nil
}
// GetShare retrieves a share by ID
func (s *Service) GetShare(ctx context.Context, shareID string) (*Share, error) {
query := `
SELECT
zs.id, zs.dataset_id, zd.name as dataset_name, zd.mount_point,
zs.share_type, zs.nfs_enabled, zs.nfs_options, zs.nfs_clients,
zs.smb_enabled, zs.smb_share_name, zs.smb_path, zs.smb_comment,
zs.smb_guest_ok, zs.smb_read_only, zs.smb_browseable,
zs.is_active, zs.created_at, zs.updated_at, zs.created_by
FROM zfs_shares zs
JOIN zfs_datasets zd ON zs.dataset_id = zd.id
WHERE zs.id = $1
`
var share Share
var mountPoint sql.NullString
var nfsOptions sql.NullString
var smbShareName sql.NullString
var smbPath sql.NullString
var smbComment sql.NullString
var nfsClients []string
err := s.db.QueryRowContext(ctx, query, shareID).Scan(
&share.ID, &share.DatasetID, &share.DatasetName, &mountPoint,
&share.ShareType, &share.NFSEnabled, &nfsOptions, pq.Array(&nfsClients),
&share.SMBEnabled, &smbShareName, &smbPath, &smbComment,
&share.SMBGuestOK, &share.SMBReadOnly, &share.SMBBrowseable,
&share.IsActive, &share.CreatedAt, &share.UpdatedAt, &share.CreatedBy,
)
if err != nil {
if err == sql.ErrNoRows {
return nil, fmt.Errorf("share not found")
}
return nil, fmt.Errorf("failed to get share: %w", err)
}
share.NFSClients = nfsClients
if mountPoint.Valid {
share.MountPoint = mountPoint.String
}
if nfsOptions.Valid {
share.NFSOptions = nfsOptions.String
}
if smbShareName.Valid {
share.SMBShareName = smbShareName.String
}
if smbPath.Valid {
share.SMBPath = smbPath.String
}
if smbComment.Valid {
share.SMBComment = smbComment.String
}
return &share, nil
}
// CreateShareRequest represents a share creation request
type CreateShareRequest struct {
DatasetID string `json:"dataset_id" binding:"required"`
NFSEnabled bool `json:"nfs_enabled"`
NFSOptions string `json:"nfs_options"`
NFSClients []string `json:"nfs_clients"`
SMBEnabled bool `json:"smb_enabled"`
SMBShareName string `json:"smb_share_name"`
SMBPath string `json:"smb_path"`
SMBComment string `json:"smb_comment"`
SMBGuestOK bool `json:"smb_guest_ok"`
SMBReadOnly bool `json:"smb_read_only"`
SMBBrowseable bool `json:"smb_browseable"`
}
// CreateShare creates a new share
func (s *Service) CreateShare(ctx context.Context, req *CreateShareRequest, userID string) (*Share, error) {
// Validate dataset exists and is a filesystem (not volume)
// req.DatasetID can be either UUID or dataset name
var datasetID, datasetType, datasetName, mountPoint string
var mountPointNull sql.NullString
// Try to find by ID first (UUID)
err := s.db.QueryRowContext(ctx,
"SELECT id, type, name, mount_point FROM zfs_datasets WHERE id = $1",
req.DatasetID,
).Scan(&datasetID, &datasetType, &datasetName, &mountPointNull)
// If not found by ID, try by name
if err == sql.ErrNoRows {
err = s.db.QueryRowContext(ctx,
"SELECT id, type, name, mount_point FROM zfs_datasets WHERE name = $1",
req.DatasetID,
).Scan(&datasetID, &datasetType, &datasetName, &mountPointNull)
}
if err != nil {
if err == sql.ErrNoRows {
return nil, fmt.Errorf("dataset not found")
}
return nil, fmt.Errorf("failed to validate dataset: %w", err)
}
if mountPointNull.Valid {
mountPoint = mountPointNull.String
} else {
mountPoint = "none"
}
if datasetType != "filesystem" {
return nil, fmt.Errorf("only filesystem datasets can be shared (not volumes)")
}
// Determine share type
shareType := "none"
if req.NFSEnabled && req.SMBEnabled {
shareType = "both"
} else if req.NFSEnabled {
shareType = "nfs"
} else if req.SMBEnabled {
shareType = "smb"
} else {
return nil, fmt.Errorf("at least one protocol (NFS or SMB) must be enabled")
}
// Set default NFS options if not provided
nfsOptions := req.NFSOptions
if nfsOptions == "" {
nfsOptions = "rw,sync,no_subtree_check"
}
// Set default SMB share name if not provided
smbShareName := req.SMBShareName
if smbShareName == "" {
// Extract dataset name from full path (e.g., "pool/dataset" -> "dataset")
parts := strings.Split(datasetName, "/")
smbShareName = parts[len(parts)-1]
}
// Set SMB path (use mount_point if available, otherwise use dataset name)
smbPath := req.SMBPath
if smbPath == "" {
if mountPoint != "" && mountPoint != "none" {
smbPath = mountPoint
} else {
smbPath = fmt.Sprintf("/mnt/%s", strings.ReplaceAll(datasetName, "/", "_"))
}
}
// Insert into database
query := `
INSERT INTO zfs_shares (
dataset_id, share_type, nfs_enabled, nfs_options, nfs_clients,
smb_enabled, smb_share_name, smb_path, smb_comment,
smb_guest_ok, smb_read_only, smb_browseable, is_active, created_by
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
RETURNING id, created_at, updated_at
`
var shareID string
var createdAt, updatedAt time.Time
// Handle nfs_clients array - use empty array if nil
nfsClients := req.NFSClients
if nfsClients == nil {
nfsClients = []string{}
}
err = s.db.QueryRowContext(ctx, query,
datasetID, shareType, req.NFSEnabled, nfsOptions, pq.Array(nfsClients),
req.SMBEnabled, smbShareName, smbPath, req.SMBComment,
req.SMBGuestOK, req.SMBReadOnly, req.SMBBrowseable, true, userID,
).Scan(&shareID, &createdAt, &updatedAt)
if err != nil {
return nil, fmt.Errorf("failed to create share: %w", err)
}
// Apply NFS export if enabled
if req.NFSEnabled {
if err := s.applyNFSExport(ctx, mountPoint, nfsOptions, req.NFSClients); err != nil {
s.logger.Error("Failed to apply NFS export", "error", err, "share_id", shareID)
// Don't fail the creation, but log the error
}
}
// Apply SMB share if enabled
if req.SMBEnabled {
if err := s.applySMBShare(ctx, smbShareName, smbPath, req.SMBComment, req.SMBGuestOK, req.SMBReadOnly, req.SMBBrowseable); err != nil {
s.logger.Error("Failed to apply SMB share", "error", err, "share_id", shareID)
// Don't fail the creation, but log the error
}
}
// Return the created share
return s.GetShare(ctx, shareID)
}
// UpdateShareRequest represents a share update request
type UpdateShareRequest struct {
NFSEnabled *bool `json:"nfs_enabled"`
NFSOptions *string `json:"nfs_options"`
NFSClients *[]string `json:"nfs_clients"`
SMBEnabled *bool `json:"smb_enabled"`
SMBShareName *string `json:"smb_share_name"`
SMBComment *string `json:"smb_comment"`
SMBGuestOK *bool `json:"smb_guest_ok"`
SMBReadOnly *bool `json:"smb_read_only"`
SMBBrowseable *bool `json:"smb_browseable"`
IsActive *bool `json:"is_active"`
}
// UpdateShare updates an existing share
func (s *Service) UpdateShare(ctx context.Context, shareID string, req *UpdateShareRequest) (*Share, error) {
// Get current share
share, err := s.GetShare(ctx, shareID)
if err != nil {
return nil, err
}
// Build update query dynamically
updates := []string{}
args := []interface{}{}
argIndex := 1
if req.NFSEnabled != nil {
updates = append(updates, fmt.Sprintf("nfs_enabled = $%d", argIndex))
args = append(args, *req.NFSEnabled)
argIndex++
}
if req.NFSOptions != nil {
updates = append(updates, fmt.Sprintf("nfs_options = $%d", argIndex))
args = append(args, *req.NFSOptions)
argIndex++
}
if req.NFSClients != nil {
updates = append(updates, fmt.Sprintf("nfs_clients = $%d", argIndex))
args = append(args, pq.Array(*req.NFSClients))
argIndex++
}
if req.SMBEnabled != nil {
updates = append(updates, fmt.Sprintf("smb_enabled = $%d", argIndex))
args = append(args, *req.SMBEnabled)
argIndex++
}
if req.SMBShareName != nil {
updates = append(updates, fmt.Sprintf("smb_share_name = $%d", argIndex))
args = append(args, *req.SMBShareName)
argIndex++
}
if req.SMBComment != nil {
updates = append(updates, fmt.Sprintf("smb_comment = $%d", argIndex))
args = append(args, *req.SMBComment)
argIndex++
}
if req.SMBGuestOK != nil {
updates = append(updates, fmt.Sprintf("smb_guest_ok = $%d", argIndex))
args = append(args, *req.SMBGuestOK)
argIndex++
}
if req.SMBReadOnly != nil {
updates = append(updates, fmt.Sprintf("smb_read_only = $%d", argIndex))
args = append(args, *req.SMBReadOnly)
argIndex++
}
if req.SMBBrowseable != nil {
updates = append(updates, fmt.Sprintf("smb_browseable = $%d", argIndex))
args = append(args, *req.SMBBrowseable)
argIndex++
}
if req.IsActive != nil {
updates = append(updates, fmt.Sprintf("is_active = $%d", argIndex))
args = append(args, *req.IsActive)
argIndex++
}
if len(updates) == 0 {
return share, nil // No changes
}
// Update share_type based on enabled protocols
nfsEnabled := share.NFSEnabled
smbEnabled := share.SMBEnabled
if req.NFSEnabled != nil {
nfsEnabled = *req.NFSEnabled
}
if req.SMBEnabled != nil {
smbEnabled = *req.SMBEnabled
}
shareType := "none"
if nfsEnabled && smbEnabled {
shareType = "both"
} else if nfsEnabled {
shareType = "nfs"
} else if smbEnabled {
shareType = "smb"
}
updates = append(updates, fmt.Sprintf("share_type = $%d", argIndex))
args = append(args, shareType)
argIndex++
updates = append(updates, fmt.Sprintf("updated_at = NOW()"))
args = append(args, shareID)
query := fmt.Sprintf(`
UPDATE zfs_shares
SET %s
WHERE id = $%d
`, strings.Join(updates, ", "), argIndex)
_, err = s.db.ExecContext(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("failed to update share: %w", err)
}
// Re-apply NFS export if NFS is enabled
if nfsEnabled {
nfsOptions := share.NFSOptions
if req.NFSOptions != nil {
nfsOptions = *req.NFSOptions
}
nfsClients := share.NFSClients
if req.NFSClients != nil {
nfsClients = *req.NFSClients
}
if err := s.applyNFSExport(ctx, share.MountPoint, nfsOptions, nfsClients); err != nil {
s.logger.Error("Failed to apply NFS export", "error", err, "share_id", shareID)
}
} else {
// Remove NFS export if disabled
if err := s.removeNFSExport(ctx, share.MountPoint); err != nil {
s.logger.Error("Failed to remove NFS export", "error", err, "share_id", shareID)
}
}
// Re-apply SMB share if SMB is enabled
if smbEnabled {
smbShareName := share.SMBShareName
if req.SMBShareName != nil {
smbShareName = *req.SMBShareName
}
smbPath := share.SMBPath
smbComment := share.SMBComment
if req.SMBComment != nil {
smbComment = *req.SMBComment
}
smbGuestOK := share.SMBGuestOK
if req.SMBGuestOK != nil {
smbGuestOK = *req.SMBGuestOK
}
smbReadOnly := share.SMBReadOnly
if req.SMBReadOnly != nil {
smbReadOnly = *req.SMBReadOnly
}
smbBrowseable := share.SMBBrowseable
if req.SMBBrowseable != nil {
smbBrowseable = *req.SMBBrowseable
}
if err := s.applySMBShare(ctx, smbShareName, smbPath, smbComment, smbGuestOK, smbReadOnly, smbBrowseable); err != nil {
s.logger.Error("Failed to apply SMB share", "error", err, "share_id", shareID)
}
} else {
// Remove SMB share if disabled
if err := s.removeSMBShare(ctx, share.SMBShareName); err != nil {
s.logger.Error("Failed to remove SMB share", "error", err, "share_id", shareID)
}
}
return s.GetShare(ctx, shareID)
}
// DeleteShare deletes a share
func (s *Service) DeleteShare(ctx context.Context, shareID string) error {
// Get share to get mount point and share name
share, err := s.GetShare(ctx, shareID)
if err != nil {
return err
}
// Remove NFS export
if share.NFSEnabled {
if err := s.removeNFSExport(ctx, share.MountPoint); err != nil {
s.logger.Error("Failed to remove NFS export", "error", err, "share_id", shareID)
}
}
// Remove SMB share
if share.SMBEnabled {
if err := s.removeSMBShare(ctx, share.SMBShareName); err != nil {
s.logger.Error("Failed to remove SMB share", "error", err, "share_id", shareID)
}
}
// Delete from database
_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_shares WHERE id = $1", shareID)
if err != nil {
return fmt.Errorf("failed to delete share: %w", err)
}
return nil
}
// applyNFSExport adds or updates an NFS export in /etc/exports
func (s *Service) applyNFSExport(ctx context.Context, mountPoint, options string, clients []string) error {
if mountPoint == "" || mountPoint == "none" {
return fmt.Errorf("mount point is required for NFS export")
}
// Build client list (default to * if empty)
clientList := "*"
if len(clients) > 0 {
clientList = strings.Join(clients, " ")
}
// Build export line
exportLine := fmt.Sprintf("%s %s(%s)", mountPoint, clientList, options)
// Read current /etc/exports
exportsPath := "/etc/exports"
exportsContent, err := os.ReadFile(exportsPath)
if err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to read exports file: %w", err)
}
lines := strings.Split(string(exportsContent), "\n")
var newLines []string
found := false
// Check if this mount point already exists
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
newLines = append(newLines, line)
continue
}
// Check if this line is for our mount point
if strings.HasPrefix(line, mountPoint+" ") {
newLines = append(newLines, exportLine)
found = true
} else {
newLines = append(newLines, line)
}
}
// Add if not found
if !found {
newLines = append(newLines, exportLine)
}
// Write back to file
newContent := strings.Join(newLines, "\n") + "\n"
if err := os.WriteFile(exportsPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write exports file: %w", err)
}
// Apply exports
cmd := exec.CommandContext(ctx, "sudo", "exportfs", "-ra")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to apply exports: %s: %w", string(output), err)
}
s.logger.Info("NFS export applied", "mount_point", mountPoint, "clients", clientList)
return nil
}
// removeNFSExport removes an NFS export from /etc/exports
func (s *Service) removeNFSExport(ctx context.Context, mountPoint string) error {
if mountPoint == "" || mountPoint == "none" {
return nil // Nothing to remove
}
exportsPath := "/etc/exports"
exportsContent, err := os.ReadFile(exportsPath)
if err != nil {
if os.IsNotExist(err) {
return nil // File doesn't exist, nothing to remove
}
return fmt.Errorf("failed to read exports file: %w", err)
}
lines := strings.Split(string(exportsContent), "\n")
var newLines []string
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
newLines = append(newLines, line)
continue
}
// Skip lines for this mount point
if strings.HasPrefix(line, mountPoint+" ") {
continue
}
newLines = append(newLines, line)
}
// Write back to file
newContent := strings.Join(newLines, "\n")
if newContent != "" && !strings.HasSuffix(newContent, "\n") {
newContent += "\n"
}
if err := os.WriteFile(exportsPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write exports file: %w", err)
}
// Apply exports
cmd := exec.CommandContext(ctx, "sudo", "exportfs", "-ra")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to apply exports: %s: %w", string(output), err)
}
s.logger.Info("NFS export removed", "mount_point", mountPoint)
return nil
}
// applySMBShare adds or updates an SMB share in /etc/samba/smb.conf
func (s *Service) applySMBShare(ctx context.Context, shareName, path, comment string, guestOK, readOnly, browseable bool) error {
if shareName == "" {
return fmt.Errorf("SMB share name is required")
}
if path == "" {
return fmt.Errorf("SMB path is required")
}
smbConfPath := "/etc/samba/smb.conf"
smbContent, err := os.ReadFile(smbConfPath)
if err != nil {
return fmt.Errorf("failed to read smb.conf: %w", err)
}
// Parse and update smb.conf
lines := strings.Split(string(smbContent), "\n")
var newLines []string
inShare := false
shareStart := -1
for i, line := range lines {
trimmed := strings.TrimSpace(line)
// Check if we're entering our share section
if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
sectionName := trimmed[1 : len(trimmed)-1]
if sectionName == shareName {
inShare = true
shareStart = i
continue
} else if inShare {
// We've left our share section, insert the share config here
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
inShare = false
}
}
if inShare {
// Skip lines until we find the next section or end of file
continue
}
newLines = append(newLines, line)
}
// If we were still in the share at the end, add it
if inShare {
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
} else if shareStart == -1 {
// Share doesn't exist, add it at the end
newLines = append(newLines, "")
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
}
// Write back to file
newContent := strings.Join(newLines, "\n")
if err := os.WriteFile(smbConfPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write smb.conf: %w", err)
}
// Reload Samba
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "reload", "smbd")
if output, err := cmd.CombinedOutput(); err != nil {
// Try restart if reload fails
cmd = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "smbd")
if output2, err2 := cmd.CombinedOutput(); err2 != nil {
return fmt.Errorf("failed to reload/restart smbd: %s / %s: %w", string(output), string(output2), err2)
}
}
s.logger.Info("SMB share applied", "share_name", shareName, "path", path)
return nil
}
// buildSMBShareConfig builds the SMB share configuration block
func (s *Service) buildSMBShareConfig(shareName, path, comment string, guestOK, readOnly, browseable bool) string {
var config []string
config = append(config, fmt.Sprintf("[%s]", shareName))
if comment != "" {
config = append(config, fmt.Sprintf(" comment = %s", comment))
}
config = append(config, fmt.Sprintf(" path = %s", path))
if guestOK {
config = append(config, " guest ok = yes")
} else {
config = append(config, " guest ok = no")
}
if readOnly {
config = append(config, " read only = yes")
} else {
config = append(config, " read only = no")
}
if browseable {
config = append(config, " browseable = yes")
} else {
config = append(config, " browseable = no")
}
return strings.Join(config, "\n")
}
// removeSMBShare removes an SMB share from /etc/samba/smb.conf
func (s *Service) removeSMBShare(ctx context.Context, shareName string) error {
if shareName == "" {
return nil // Nothing to remove
}
smbConfPath := "/etc/samba/smb.conf"
smbContent, err := os.ReadFile(smbConfPath)
if err != nil {
if os.IsNotExist(err) {
return nil // File doesn't exist, nothing to remove
}
return fmt.Errorf("failed to read smb.conf: %w", err)
}
lines := strings.Split(string(smbContent), "\n")
var newLines []string
inShare := false
for _, line := range lines {
trimmed := strings.TrimSpace(line)
// Check if we're entering our share section
if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
sectionName := trimmed[1 : len(trimmed)-1]
if sectionName == shareName {
inShare = true
continue
} else if inShare {
// We've left our share section
inShare = false
}
}
if inShare {
// Skip lines in this share section
continue
}
newLines = append(newLines, line)
}
// Write back to file
newContent := strings.Join(newLines, "\n")
if err := os.WriteFile(smbConfPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write smb.conf: %w", err)
}
// Reload Samba
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "reload", "smbd")
if output, err := cmd.CombinedOutput(); err != nil {
// Try restart if reload fails
cmd = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "smbd")
if output2, err2 := cmd.CombinedOutput(); err2 != nil {
return fmt.Errorf("failed to reload/restart smbd: %s / %s: %w", string(output), string(output2), err2)
}
}
s.logger.Info("SMB share removed", "share_name", shareName)
return nil
}

View File

@@ -195,7 +195,7 @@ func (s *DiskService) getZFSPoolForDisk(ctx context.Context, devicePath string)
deviceName := strings.TrimPrefix(devicePath, "/dev/")
// Get all ZFS pools
cmd := exec.CommandContext(ctx, "sudo", "zpool", "list", "-H", "-o", "name")
cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name")
output, err := cmd.Output()
if err != nil {
return ""
@@ -208,7 +208,7 @@ func (s *DiskService) getZFSPoolForDisk(ctx context.Context, devicePath string)
}
// Check pool status for this device
statusCmd := exec.CommandContext(ctx, "sudo", "zpool", "status", poolName)
statusCmd := exec.CommandContext(ctx, "zpool", "status", poolName)
statusOutput, err := statusCmd.Output()
if err != nil {
continue

View File

@@ -304,13 +304,6 @@ func (h *Handler) DeleteZFSPool(c *gin.Context) {
return
}
// Invalidate cache for pools list
if h.cache != nil {
cacheKey := "http:/api/v1/storage/zfs/pools:"
h.cache.Delete(cacheKey)
h.logger.Debug("Cache invalidated for pools list", "key", cacheKey)
}
c.JSON(http.StatusOK, gin.H{"message": "ZFS pool deleted successfully"})
}

View File

@@ -7,7 +7,6 @@ import (
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
@@ -16,16 +15,6 @@ import (
"github.com/lib/pq"
)
// zfsCommand executes a ZFS command with sudo
func zfsCommand(ctx context.Context, args ...string) *exec.Cmd {
return exec.CommandContext(ctx, "sudo", append([]string{"zfs"}, args...)...)
}
// zpoolCommand executes a ZPOOL command with sudo
func zpoolCommand(ctx context.Context, args ...string) *exec.Cmd {
return exec.CommandContext(ctx, "sudo", append([]string{"zpool"}, args...)...)
}
// ZFSService handles ZFS pool management
type ZFSService struct {
db *database.DB
@@ -55,8 +44,7 @@ type ZFSPool struct {
AutoExpand bool `json:"auto_expand"`
ScrubInterval int `json:"scrub_interval"` // days
IsActive bool `json:"is_active"`
HealthStatus string `json:"health_status"` // online, degraded, faulted, offline
CompressRatio float64 `json:"compress_ratio"` // compression ratio (e.g., 1.45x)
HealthStatus string `json:"health_status"` // online, degraded, faulted, offline
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
@@ -125,10 +113,6 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
var args []string
args = append(args, "create", "-f") // -f to force creation
// Set default mountpoint to /opt/calypso/data/pool/<pool-name>
mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", name)
args = append(args, "-m", mountPoint)
// Note: compression is a filesystem property, not a pool property
// We'll set it after pool creation using zfs set
@@ -169,15 +153,9 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
args = append(args, disks...)
}
// Create mountpoint directory if it doesn't exist
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mountpoint directory %s: %w", mountPoint, err)
}
s.logger.Info("Created mountpoint directory", "path", mountPoint)
// Execute zpool create (with sudo for permissions)
s.logger.Info("Creating ZFS pool", "name", name, "raid_level", raidLevel, "disks", disks, "mountpoint", mountPoint, "args", args)
cmd := zpoolCommand(ctx, args...)
// Execute zpool create
s.logger.Info("Creating ZFS pool", "name", name, "raid_level", raidLevel, "disks", disks, "args", args)
cmd := exec.CommandContext(ctx, "zpool", args...)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -190,7 +168,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
// Set filesystem properties (compression, etc.) after pool creation
// ZFS creates a root filesystem with the same name as the pool
if compression != "" && compression != "off" {
cmd = zfsCommand(ctx, "set", fmt.Sprintf("compression=%s", compression), name)
cmd = exec.CommandContext(ctx, "zfs", "set", fmt.Sprintf("compression=%s", compression), name)
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to set compression property", "pool", name, "compression", compression, "error", string(output))
@@ -205,7 +183,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
if err != nil {
// Try to destroy the pool if we can't get info
s.logger.Warn("Failed to get pool info, attempting to destroy pool", "name", name, "error", err)
zpoolCommand(ctx, "destroy", "-f", name).Run()
exec.CommandContext(ctx, "zpool", "destroy", "-f", name).Run()
return nil, fmt.Errorf("failed to get pool info after creation: %w", err)
}
@@ -239,7 +217,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
if err != nil {
// Cleanup: destroy pool if database insert fails
s.logger.Error("Failed to save pool to database, destroying pool", "name", name, "error", err)
zpoolCommand(ctx, "destroy", "-f", name).Run()
exec.CommandContext(ctx, "zpool", "destroy", "-f", name).Run()
return nil, fmt.Errorf("failed to save pool to database: %w", err)
}
@@ -263,7 +241,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
// getPoolInfo retrieves information about a ZFS pool
func (s *ZFSService) getPoolInfo(ctx context.Context, poolName string) (*ZFSPool, error) {
// Get pool size and used space
cmd := zpoolCommand(ctx, "list", "-H", "-o", "name,size,allocated", poolName)
cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name,size,allocated", poolName)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -342,7 +320,7 @@ func parseZFSSize(sizeStr string) (int64, error) {
// getSpareDisks retrieves spare disks from zpool status
func (s *ZFSService) getSpareDisks(ctx context.Context, poolName string) ([]string, error) {
cmd := zpoolCommand(ctx, "status", poolName)
cmd := exec.CommandContext(ctx, "zpool", "status", poolName)
output, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("failed to get pool status: %w", err)
@@ -381,26 +359,6 @@ func (s *ZFSService) getSpareDisks(ctx context.Context, poolName string) ([]stri
return spareDisks, nil
}
// getCompressRatio gets the compression ratio from ZFS
func (s *ZFSService) getCompressRatio(ctx context.Context, poolName string) (float64, error) {
cmd := zfsCommand(ctx, "get", "-H", "-o", "value", "compressratio", poolName)
output, err := cmd.Output()
if err != nil {
return 1.0, err
}
ratioStr := strings.TrimSpace(string(output))
// Remove 'x' suffix if present (e.g., "1.45x" -> "1.45")
ratioStr = strings.TrimSuffix(ratioStr, "x")
ratio, err := strconv.ParseFloat(ratioStr, 64)
if err != nil {
return 1.0, err
}
return ratio, nil
}
// ListPools lists all ZFS pools
func (s *ZFSService) ListPools(ctx context.Context) ([]*ZFSPool, error) {
query := `
@@ -426,20 +384,16 @@ func (s *ZFSService) ListPools(ctx context.Context) ([]*ZFSPool, error) {
for rows.Next() {
var pool ZFSPool
var description sql.NullString
var createdBy sql.NullString
err := rows.Scan(
&pool.ID, &pool.Name, &description, &pool.RaidLevel, pq.Array(&pool.Disks),
&pool.SizeBytes, &pool.UsedBytes, &pool.Compression, &pool.Deduplication,
&pool.AutoExpand, &pool.ScrubInterval, &pool.IsActive, &pool.HealthStatus,
&pool.CreatedAt, &pool.UpdatedAt, &createdBy,
&pool.CreatedAt, &pool.UpdatedAt, &pool.CreatedBy,
)
if err != nil {
s.logger.Error("Failed to scan pool row", "error", err, "error_type", fmt.Sprintf("%T", err))
s.logger.Error("Failed to scan pool row", "error", err)
continue // Skip this pool instead of failing entire query
}
if createdBy.Valid {
pool.CreatedBy = createdBy.String
}
if description.Valid {
pool.Description = description.String
}
@@ -453,17 +407,8 @@ func (s *ZFSService) ListPools(ctx context.Context) ([]*ZFSPool, error) {
pool.SpareDisks = spareDisks
}
// Get compressratio from ZFS system
compressRatio, err := s.getCompressRatio(ctx, pool.Name)
if err != nil {
s.logger.Warn("Failed to get compressratio", "pool", pool.Name, "error", err)
pool.CompressRatio = 1.0 // Default to 1.0 if can't get ratio
} else {
pool.CompressRatio = compressRatio
}
pools = append(pools, &pool)
s.logger.Debug("Added pool to list", "pool_id", pool.ID, "name", pool.Name, "compressratio", pool.CompressRatio)
s.logger.Debug("Added pool to list", "pool_id", pool.ID, "name", pool.Name)
}
if err := rows.Err(); err != nil {
@@ -525,7 +470,7 @@ func (s *ZFSService) DeletePool(ctx context.Context, poolID string) error {
// Destroy ZFS pool with -f flag to force destroy (works for both empty and non-empty pools)
// The -f flag is needed to destroy pools even if they have datasets or are in use
s.logger.Info("Destroying ZFS pool", "pool", pool.Name)
cmd := zpoolCommand(ctx, "destroy", "-f", pool.Name)
cmd := exec.CommandContext(ctx, "zpool", "destroy", "-f", pool.Name)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -540,15 +485,6 @@ func (s *ZFSService) DeletePool(ctx context.Context, poolID string) error {
s.logger.Info("ZFS pool destroyed successfully", "pool", pool.Name)
}
// Remove mount point directory (default: /opt/calypso/data/pool/<pool-name>)
mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", pool.Name)
if err := os.RemoveAll(mountPoint); err != nil {
s.logger.Warn("Failed to remove mount point directory", "mountpoint", mountPoint, "error", err)
// Don't fail pool deletion if mount point removal fails
} else {
s.logger.Info("Removed mount point directory", "mountpoint", mountPoint)
}
// Mark disks as unused
for _, diskPath := range pool.Disks {
_, err = s.db.ExecContext(ctx,
@@ -583,7 +519,7 @@ func (s *ZFSService) AddSpareDisk(ctx context.Context, poolID string, diskPaths
}
// Verify pool exists in ZFS and check if disks are already spare
cmd := zpoolCommand(ctx, "status", pool.Name)
cmd := exec.CommandContext(ctx, "zpool", "status", pool.Name)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("pool %s does not exist in ZFS: %w", pool.Name, err)
@@ -608,7 +544,7 @@ func (s *ZFSService) AddSpareDisk(ctx context.Context, poolID string, diskPaths
// Execute zpool add
s.logger.Info("Adding spare disks to ZFS pool", "pool", pool.Name, "disks", diskPaths)
cmd = zpoolCommand(ctx, args...)
cmd = exec.CommandContext(ctx, "zpool", args...)
output, err = cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -643,7 +579,6 @@ func (s *ZFSService) AddSpareDisk(ctx context.Context, poolID string, diskPaths
// ZFSDataset represents a ZFS dataset
type ZFSDataset struct {
ID string `json:"id"`
Name string `json:"name"`
Pool string `json:"pool"`
Type string `json:"type"` // filesystem, volume, snapshot
@@ -662,7 +597,7 @@ type ZFSDataset struct {
func (s *ZFSService) ListDatasets(ctx context.Context, poolName string) ([]*ZFSDataset, error) {
// Get datasets from database
query := `
SELECT id, name, pool_name, type, mount_point,
SELECT name, pool_name, type, mount_point,
used_bytes, available_bytes, referenced_bytes,
compression, deduplication, quota, reservation,
created_at
@@ -688,7 +623,7 @@ func (s *ZFSService) ListDatasets(ctx context.Context, poolName string) ([]*ZFSD
var mountPoint sql.NullString
err := rows.Scan(
&ds.ID, &ds.Name, &ds.Pool, &ds.Type, &mountPoint,
&ds.Name, &ds.Pool, &ds.Type, &mountPoint,
&ds.UsedBytes, &ds.AvailableBytes, &ds.ReferencedBytes,
&ds.Compression, &ds.Deduplication, &ds.Quota, &ds.Reservation,
&ds.CreatedAt,
@@ -730,36 +665,10 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Construct full dataset name
fullName := poolName + "/" + req.Name
// Get pool mount point to validate dataset mount point is within pool directory
poolMountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", poolName)
var mountPath string
// For filesystem datasets, validate and set mount point
if req.Type == "filesystem" {
if req.MountPoint != "" {
// User provided mount point - validate it's within pool directory
mountPath = filepath.Clean(req.MountPoint)
// Check if mount point is within pool mount point directory
poolMountAbs, err := filepath.Abs(poolMountPoint)
if err != nil {
return nil, fmt.Errorf("failed to resolve pool mount point: %w", err)
}
mountPathAbs, err := filepath.Abs(mountPath)
if err != nil {
return nil, fmt.Errorf("failed to resolve mount point: %w", err)
}
// Check if mount path is within pool mount point directory
relPath, err := filepath.Rel(poolMountAbs, mountPathAbs)
if err != nil || strings.HasPrefix(relPath, "..") {
return nil, fmt.Errorf("mount point must be within pool directory: %s (pool mount: %s)", mountPath, poolMountPoint)
}
} else {
// No mount point provided - use default: /opt/calypso/data/pool/<pool-name>/<dataset-name>/
mountPath = filepath.Join(poolMountPoint, req.Name)
}
// For filesystem datasets, create mount directory if mount point is provided
if req.Type == "filesystem" && req.MountPoint != "" {
// Clean and validate mount point path
mountPath := filepath.Clean(req.MountPoint)
// Check if directory already exists
if info, err := os.Stat(mountPath); err == nil {
@@ -808,14 +717,14 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
args = append(args, "-o", fmt.Sprintf("compression=%s", req.Compression))
}
// Set mount point for filesystems (always set, either user-provided or default)
if req.Type == "filesystem" {
args = append(args, "-o", fmt.Sprintf("mountpoint=%s", mountPath))
// Set mount point if provided (only for filesystems, not volumes)
if req.Type == "filesystem" && req.MountPoint != "" {
args = append(args, "-o", fmt.Sprintf("mountpoint=%s", req.MountPoint))
}
// Execute zfs create
s.logger.Info("Creating ZFS dataset", "name", fullName, "type", req.Type)
cmd := zfsCommand(ctx, args...)
cmd := exec.CommandContext(ctx, "zfs", args...)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -825,7 +734,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Set quota if specified (for filesystems)
if req.Type == "filesystem" && req.Quota > 0 {
quotaCmd := zfsCommand(ctx, "set", fmt.Sprintf("quota=%d", req.Quota), fullName)
quotaCmd := exec.CommandContext(ctx, "zfs", "set", fmt.Sprintf("quota=%d", req.Quota), fullName)
if quotaOutput, err := quotaCmd.CombinedOutput(); err != nil {
s.logger.Warn("Failed to set quota", "dataset", fullName, "error", err, "output", string(quotaOutput))
}
@@ -833,7 +742,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Set reservation if specified
if req.Reservation > 0 {
resvCmd := zfsCommand(ctx, "set", fmt.Sprintf("reservation=%d", req.Reservation), fullName)
resvCmd := exec.CommandContext(ctx, "zfs", "set", fmt.Sprintf("reservation=%d", req.Reservation), fullName)
if resvOutput, err := resvCmd.CombinedOutput(); err != nil {
s.logger.Warn("Failed to set reservation", "dataset", fullName, "error", err, "output", string(resvOutput))
}
@@ -845,30 +754,30 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
if err != nil {
s.logger.Error("Failed to get pool ID", "pool", poolName, "error", err)
// Try to destroy the dataset if we can't save to database
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("failed to get pool ID: %w", err)
}
// Get dataset info from ZFS to save to database
cmd = zfsCommand(ctx, "list", "-H", "-o", "name,used,avail,refer,compress,dedup,quota,reservation,mountpoint", fullName)
cmd = exec.CommandContext(ctx, "zfs", "list", "-H", "-o", "name,used,avail,refer,compress,dedup,quota,reservation,mountpoint", fullName)
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to get dataset info", "name", fullName, "error", err)
// Try to destroy the dataset if we can't get info
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("failed to get dataset info: %w", err)
}
// Parse dataset info
lines := strings.TrimSpace(string(output))
if lines == "" {
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("dataset not found after creation")
}
fields := strings.Fields(lines)
if len(fields) < 9 {
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("invalid dataset info format")
}
@@ -883,7 +792,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Determine dataset type
datasetType := req.Type
typeCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "type", fullName)
typeCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "type", fullName)
if typeOutput, err := typeCmd.Output(); err == nil {
volType := strings.TrimSpace(string(typeOutput))
if volType == "volume" {
@@ -897,7 +806,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
quota := int64(-1)
if datasetType == "volume" {
// For volumes, get volsize
volsizeCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "volsize", fullName)
volsizeCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "volsize", fullName)
if volsizeOutput, err := volsizeCmd.Output(); err == nil {
volsizeStr := strings.TrimSpace(string(volsizeOutput))
if volsizeStr != "-" && volsizeStr != "none" {
@@ -927,7 +836,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Get creation time
createdAt := time.Now()
creationCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "creation", fullName)
creationCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "creation", fullName)
if creationOutput, err := creationCmd.Output(); err == nil {
creationStr := strings.TrimSpace(string(creationOutput))
if t, err := time.Parse("Mon Jan 2 15:04:05 2006", creationStr); err == nil {
@@ -959,7 +868,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
if err != nil {
s.logger.Error("Failed to save dataset to database", "name", fullName, "error", err)
// Try to destroy the dataset if we can't save to database
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("failed to save dataset to database: %w", err)
}
@@ -987,7 +896,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
func (s *ZFSService) DeleteDataset(ctx context.Context, datasetName string) error {
// Check if dataset exists and get its mount point before deletion
var mountPoint string
cmd := zfsCommand(ctx, "list", "-H", "-o", "name,mountpoint", datasetName)
cmd := exec.CommandContext(ctx, "zfs", "list", "-H", "-o", "name,mountpoint", datasetName)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("dataset %s does not exist: %w", datasetName, err)
@@ -1006,7 +915,7 @@ func (s *ZFSService) DeleteDataset(ctx context.Context, datasetName string) erro
// Get dataset type to determine if we should clean up mount directory
var datasetType string
typeCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "type", datasetName)
typeCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "type", datasetName)
typeOutput, err := typeCmd.Output()
if err == nil {
datasetType = strings.TrimSpace(string(typeOutput))
@@ -1029,7 +938,7 @@ func (s *ZFSService) DeleteDataset(ctx context.Context, datasetName string) erro
// Delete the dataset from ZFS (use -r for recursive to delete children)
s.logger.Info("Deleting ZFS dataset", "name", datasetName, "mountpoint", mountPoint)
cmd = zfsCommand(ctx, "destroy", "-r", datasetName)
cmd = exec.CommandContext(ctx, "zfs", "destroy", "-r", datasetName)
output, err = cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)

View File

@@ -2,7 +2,6 @@ package storage
import (
"context"
"fmt"
"os/exec"
"regexp"
"strconv"
@@ -99,17 +98,11 @@ type PoolInfo struct {
func (m *ZFSPoolMonitor) getSystemPools(ctx context.Context) (map[string]PoolInfo, error) {
pools := make(map[string]PoolInfo)
// Get pool list (with sudo for permissions)
cmd := exec.CommandContext(ctx, "sudo", "zpool", "list", "-H", "-o", "name,size,alloc,free,health")
output, err := cmd.CombinedOutput()
// Get pool list
cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name,size,alloc,free,health")
output, err := cmd.Output()
if err != nil {
// If no pools exist, zpool list returns exit code 1 but that's OK
// Check if output is empty (no pools) vs actual error
outputStr := strings.TrimSpace(string(output))
if outputStr == "" || strings.Contains(outputStr, "no pools available") {
return pools, nil // No pools, return empty map (not an error)
}
return nil, fmt.Errorf("zpool list failed: %w, output: %s", err, outputStr)
return nil, err
}
lines := strings.Split(strings.TrimSpace(string(output)), "\n")
@@ -225,7 +218,7 @@ func (m *ZFSPoolMonitor) updatePoolStatus(ctx context.Context, poolName string,
return nil
}
// markMissingPoolsOffline marks pools that exist in database but not in system as offline or deletes them
// markMissingPoolsOffline marks pools that exist in database but not in system as offline
func (m *ZFSPoolMonitor) markMissingPoolsOffline(ctx context.Context, systemPools map[string]PoolInfo) error {
// Get all pools from database
rows, err := m.zfsService.db.QueryContext(ctx, "SELECT id, name FROM zfs_pools WHERE is_active = true")
@@ -242,13 +235,17 @@ func (m *ZFSPoolMonitor) markMissingPoolsOffline(ctx context.Context, systemPool
// Check if pool exists in system
if _, exists := systemPools[poolName]; !exists {
// Pool doesn't exist in system - delete from database (pool was destroyed)
m.logger.Info("Pool not found in system, removing from database", "pool", poolName)
_, err = m.zfsService.db.ExecContext(ctx, "DELETE FROM zfs_pools WHERE id = $1", poolID)
// Pool doesn't exist in system, mark as offline
_, err = m.zfsService.db.ExecContext(ctx, `
UPDATE zfs_pools SET
health_status = 'offline',
updated_at = NOW()
WHERE id = $1
`, poolID)
if err != nil {
m.logger.Warn("Failed to delete missing pool from database", "pool", poolName, "error", err)
m.logger.Warn("Failed to mark pool as offline", "pool", poolName, "error", err)
} else {
m.logger.Info("Removed missing pool from database", "pool", poolName)
m.logger.Info("Marked pool as offline (not found in system)", "pool", poolName)
}
}
}

View File

@@ -3,7 +3,6 @@ package system
import (
"net/http"
"strconv"
"time"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/atlasos/calypso/internal/tasks"
@@ -116,179 +115,3 @@ func (h *Handler) GenerateSupportBundle(c *gin.Context) {
c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}
// ListNetworkInterfaces lists all network interfaces
func (h *Handler) ListNetworkInterfaces(c *gin.Context) {
interfaces, err := h.service.ListNetworkInterfaces(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list network interfaces", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list network interfaces"})
return
}
// Ensure we return an empty array instead of null
if interfaces == nil {
interfaces = []NetworkInterface{}
}
c.JSON(http.StatusOK, gin.H{"interfaces": interfaces})
}
// GetManagementIPAddress returns the management IP address
func (h *Handler) GetManagementIPAddress(c *gin.Context) {
ip, err := h.service.GetManagementIPAddress(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get management IP address", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get management IP address"})
return
}
c.JSON(http.StatusOK, gin.H{"ip_address": ip})
}
// SaveNTPSettings saves NTP configuration to the OS
func (h *Handler) SaveNTPSettings(c *gin.Context) {
var settings NTPSettings
if err := c.ShouldBindJSON(&settings); err != nil {
h.logger.Error("Invalid request body", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
return
}
// Validate timezone
if settings.Timezone == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "timezone is required"})
return
}
// Validate NTP servers
if len(settings.NTPServers) == 0 {
c.JSON(http.StatusBadRequest, gin.H{"error": "at least one NTP server is required"})
return
}
if err := h.service.SaveNTPSettings(c.Request.Context(), settings); err != nil {
h.logger.Error("Failed to save NTP settings", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "NTP settings saved successfully"})
}
// GetNTPSettings retrieves current NTP configuration
func (h *Handler) GetNTPSettings(c *gin.Context) {
settings, err := h.service.GetNTPSettings(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get NTP settings", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get NTP settings"})
return
}
c.JSON(http.StatusOK, gin.H{"settings": settings})
}
// UpdateNetworkInterface updates a network interface configuration
func (h *Handler) UpdateNetworkInterface(c *gin.Context) {
ifaceName := c.Param("name")
if ifaceName == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "interface name is required"})
return
}
var req struct {
IPAddress string `json:"ip_address" binding:"required"`
Subnet string `json:"subnet" binding:"required"`
Gateway string `json:"gateway,omitempty"`
DNS1 string `json:"dns1,omitempty"`
DNS2 string `json:"dns2,omitempty"`
Role string `json:"role,omitempty"`
}
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid request body", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
return
}
// Convert to service request
serviceReq := UpdateNetworkInterfaceRequest{
IPAddress: req.IPAddress,
Subnet: req.Subnet,
Gateway: req.Gateway,
DNS1: req.DNS1,
DNS2: req.DNS2,
Role: req.Role,
}
updatedIface, err := h.service.UpdateNetworkInterface(c.Request.Context(), ifaceName, serviceReq)
if err != nil {
h.logger.Error("Failed to update network interface", "interface", ifaceName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"interface": updatedIface})
}
// GetSystemLogs retrieves recent system logs
func (h *Handler) GetSystemLogs(c *gin.Context) {
limitStr := c.DefaultQuery("limit", "30")
limit, err := strconv.Atoi(limitStr)
if err != nil || limit <= 0 || limit > 100 {
limit = 30
}
logs, err := h.service.GetSystemLogs(c.Request.Context(), limit)
if err != nil {
h.logger.Error("Failed to get system logs", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get system logs"})
return
}
c.JSON(http.StatusOK, gin.H{"logs": logs})
}
// GetNetworkThroughput retrieves network throughput data from RRD
func (h *Handler) GetNetworkThroughput(c *gin.Context) {
// Default to last 5 minutes
durationStr := c.DefaultQuery("duration", "5m")
duration, err := time.ParseDuration(durationStr)
if err != nil {
duration = 5 * time.Minute
}
data, err := h.service.GetNetworkThroughput(c.Request.Context(), duration)
if err != nil {
h.logger.Error("Failed to get network throughput", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get network throughput"})
return
}
c.JSON(http.StatusOK, gin.H{"data": data})
}
// ExecuteCommand executes a shell command
func (h *Handler) ExecuteCommand(c *gin.Context) {
var req struct {
Command string `json:"command" binding:"required"`
Service string `json:"service,omitempty"` // Optional: system, scst, storage, backup, tape
}
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid request body", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "command is required"})
return
}
// Execute command based on service context
output, err := h.service.ExecuteCommand(c.Request.Context(), req.Command, req.Service)
if err != nil {
h.logger.Error("Failed to execute command", "error", err, "command", req.Command, "service", req.Service)
c.JSON(http.StatusInternalServerError, gin.H{
"error": err.Error(),
"output": output, // Include output even on error
})
return
}
c.JSON(http.StatusOK, gin.H{"output": output})
}

View File

@@ -1,292 +0,0 @@
package system
import (
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/logger"
)
// RRDService handles RRD database operations for network monitoring
type RRDService struct {
logger *logger.Logger
rrdDir string
interfaceName string
}
// NewRRDService creates a new RRD service
func NewRRDService(log *logger.Logger, rrdDir string, interfaceName string) *RRDService {
return &RRDService{
logger: log,
rrdDir: rrdDir,
interfaceName: interfaceName,
}
}
// NetworkStats represents network interface statistics
type NetworkStats struct {
Interface string `json:"interface"`
RxBytes uint64 `json:"rx_bytes"`
TxBytes uint64 `json:"tx_bytes"`
RxPackets uint64 `json:"rx_packets"`
TxPackets uint64 `json:"tx_packets"`
Timestamp time.Time `json:"timestamp"`
}
// GetNetworkStats reads network statistics from /proc/net/dev
func (r *RRDService) GetNetworkStats(ctx context.Context, interfaceName string) (*NetworkStats, error) {
data, err := os.ReadFile("/proc/net/dev")
if err != nil {
return nil, fmt.Errorf("failed to read /proc/net/dev: %w", err)
}
lines := strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if !strings.HasPrefix(line, interfaceName+":") {
continue
}
// Parse line: interface: rx_bytes rx_packets ... tx_bytes tx_packets ...
parts := strings.Fields(line)
if len(parts) < 17 {
continue
}
// Extract statistics
// Format: interface: rx_bytes rx_packets rx_errs rx_drop ... tx_bytes tx_packets ...
rxBytes, err := strconv.ParseUint(parts[1], 10, 64)
if err != nil {
continue
}
rxPackets, err := strconv.ParseUint(parts[2], 10, 64)
if err != nil {
continue
}
txBytes, err := strconv.ParseUint(parts[9], 10, 64)
if err != nil {
continue
}
txPackets, err := strconv.ParseUint(parts[10], 10, 64)
if err != nil {
continue
}
return &NetworkStats{
Interface: interfaceName,
RxBytes: rxBytes,
TxBytes: txBytes,
RxPackets: rxPackets,
TxPackets: txPackets,
Timestamp: time.Now(),
}, nil
}
return nil, fmt.Errorf("interface %s not found in /proc/net/dev", interfaceName)
}
// InitializeRRD creates RRD database if it doesn't exist
func (r *RRDService) InitializeRRD(ctx context.Context) error {
// Ensure RRD directory exists
if err := os.MkdirAll(r.rrdDir, 0755); err != nil {
return fmt.Errorf("failed to create RRD directory: %w", err)
}
rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", r.interfaceName))
// Check if RRD file already exists
if _, err := os.Stat(rrdFile); err == nil {
r.logger.Info("RRD file already exists", "file", rrdFile)
return nil
}
// Create RRD database
// Use COUNTER type to track cumulative bytes, RRD will calculate rate automatically
// DS:inbound:COUNTER:20:0:U - inbound cumulative bytes, 20s heartbeat
// DS:outbound:COUNTER:20:0:U - outbound cumulative bytes, 20s heartbeat
// RRA:AVERAGE:0.5:1:600 - 1 sample per step, 600 steps (100 minutes at 10s interval)
// RRA:AVERAGE:0.5:6:700 - 6 samples per step, 700 steps (11.6 hours at 1min interval)
// RRA:AVERAGE:0.5:60:730 - 60 samples per step, 730 steps (5 days at 1hour interval)
// RRA:MAX:0.5:1:600 - Max values for same intervals
// RRA:MAX:0.5:6:700
// RRA:MAX:0.5:60:730
cmd := exec.CommandContext(ctx, "rrdtool", "create", rrdFile,
"--step", "10", // 10 second step
"DS:inbound:COUNTER:20:0:U", // Inbound cumulative bytes, 20s heartbeat
"DS:outbound:COUNTER:20:0:U", // Outbound cumulative bytes, 20s heartbeat
"RRA:AVERAGE:0.5:1:600", // 10s resolution, 100 minutes
"RRA:AVERAGE:0.5:6:700", // 1min resolution, 11.6 hours
"RRA:AVERAGE:0.5:60:730", // 1hour resolution, 5 days
"RRA:MAX:0.5:1:600", // Max values
"RRA:MAX:0.5:6:700",
"RRA:MAX:0.5:60:730",
)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("failed to create RRD: %s: %w", string(output), err)
}
r.logger.Info("RRD database created", "file", rrdFile)
return nil
}
// UpdateRRD updates RRD database with new network statistics
func (r *RRDService) UpdateRRD(ctx context.Context, stats *NetworkStats) error {
rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", stats.Interface))
// Update with cumulative byte counts (COUNTER type)
// RRD will automatically calculate the rate (bytes per second)
cmd := exec.CommandContext(ctx, "rrdtool", "update", rrdFile,
fmt.Sprintf("%d:%d:%d", stats.Timestamp.Unix(), stats.RxBytes, stats.TxBytes),
)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("failed to update RRD: %s: %w", string(output), err)
}
return nil
}
// FetchRRDData fetches data from RRD database for graphing
func (r *RRDService) FetchRRDData(ctx context.Context, startTime time.Time, endTime time.Time, resolution string) ([]NetworkDataPoint, error) {
rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", r.interfaceName))
// Check if RRD file exists
if _, err := os.Stat(rrdFile); os.IsNotExist(err) {
return []NetworkDataPoint{}, nil
}
// Fetch data using rrdtool fetch
// Use AVERAGE consolidation with appropriate resolution
cmd := exec.CommandContext(ctx, "rrdtool", "fetch", rrdFile,
"AVERAGE",
"--start", fmt.Sprintf("%d", startTime.Unix()),
"--end", fmt.Sprintf("%d", endTime.Unix()),
"--resolution", resolution,
)
output, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("failed to fetch RRD data: %s: %w", string(output), err)
}
// Parse rrdtool fetch output
// Format:
// inbound outbound
// 1234567890: 1.2345678901e+06 2.3456789012e+06
points := []NetworkDataPoint{}
lines := strings.Split(string(output), "\n")
// Skip header lines
dataStart := false
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Check if this is the data section
if strings.Contains(line, "inbound") && strings.Contains(line, "outbound") {
dataStart = true
continue
}
if !dataStart {
continue
}
// Parse data line: timestamp: inbound_value outbound_value
parts := strings.Fields(line)
if len(parts) < 3 {
continue
}
// Parse timestamp
timestampStr := strings.TrimSuffix(parts[0], ":")
timestamp, err := strconv.ParseInt(timestampStr, 10, 64)
if err != nil {
continue
}
// Parse inbound (bytes per second from COUNTER, convert to Mbps)
inboundStr := parts[1]
inbound, err := strconv.ParseFloat(inboundStr, 64)
if err != nil || inbound < 0 {
// Skip NaN or negative values
continue
}
// Convert bytes per second to Mbps (bytes/s * 8 / 1000000)
inboundMbps := inbound * 8 / 1000000
// Parse outbound
outboundStr := parts[2]
outbound, err := strconv.ParseFloat(outboundStr, 64)
if err != nil || outbound < 0 {
// Skip NaN or negative values
continue
}
outboundMbps := outbound * 8 / 1000000
// Format time as MM:SS
t := time.Unix(timestamp, 0)
timeStr := fmt.Sprintf("%02d:%02d", t.Minute(), t.Second())
points = append(points, NetworkDataPoint{
Time: timeStr,
Inbound: inboundMbps,
Outbound: outboundMbps,
})
}
return points, nil
}
// NetworkDataPoint represents a single data point for graphing
type NetworkDataPoint struct {
Time string `json:"time"`
Inbound float64 `json:"inbound"` // Mbps
Outbound float64 `json:"outbound"` // Mbps
}
// StartCollector starts a background goroutine to periodically collect and update RRD
func (r *RRDService) StartCollector(ctx context.Context, interval time.Duration) error {
// Initialize RRD if needed
if err := r.InitializeRRD(ctx); err != nil {
return fmt.Errorf("failed to initialize RRD: %w", err)
}
go func() {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
// Get current stats
stats, err := r.GetNetworkStats(ctx, r.interfaceName)
if err != nil {
r.logger.Warn("Failed to get network stats", "error", err)
continue
}
// Update RRD with cumulative byte counts
// RRD COUNTER type will automatically calculate rate
if err := r.UpdateRRD(ctx, stats); err != nil {
r.logger.Warn("Failed to update RRD", "error", err)
}
}
}
}()
return nil
}

View File

@@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"os"
"os/exec"
"strings"
"time"
@@ -12,98 +11,18 @@ import (
"github.com/atlasos/calypso/internal/common/logger"
)
// NTPSettings represents NTP configuration
type NTPSettings struct {
Timezone string `json:"timezone"`
NTPServers []string `json:"ntp_servers"`
}
// Service handles system management operations
type Service struct {
logger *logger.Logger
rrdService *RRDService
}
// detectPrimaryInterface detects the primary network interface (first non-loopback with IP)
func detectPrimaryInterface(ctx context.Context) string {
// Try to get default route interface
cmd := exec.CommandContext(ctx, "ip", "route", "show", "default")
output, err := cmd.Output()
if err == nil {
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if strings.Contains(line, "dev ") {
parts := strings.Fields(line)
for i, part := range parts {
if part == "dev" && i+1 < len(parts) {
iface := parts[i+1]
if iface != "lo" {
return iface
}
}
}
}
}
}
// Fallback: get first non-loopback interface with IP
cmd = exec.CommandContext(ctx, "ip", "-4", "addr", "show")
output, err = cmd.Output()
if err == nil {
lines := strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
// Look for interface name line (e.g., "2: ens18: <BROADCAST...")
if len(line) > 0 && line[0] >= '0' && line[0] <= '9' && strings.Contains(line, ":") {
parts := strings.Fields(line)
if len(parts) >= 2 {
iface := strings.TrimSuffix(parts[1], ":")
if iface != "" && iface != "lo" {
// Check if this interface has an IP (next lines will have "inet")
// For simplicity, return first non-loopback interface
return iface
}
}
}
}
}
// Final fallback
return "eth0"
logger *logger.Logger
}
// NewService creates a new system service
func NewService(log *logger.Logger) *Service {
// Initialize RRD service for network monitoring
rrdDir := "/var/lib/calypso/rrd"
// Auto-detect primary interface
ctx := context.Background()
interfaceName := detectPrimaryInterface(ctx)
log.Info("Detected primary network interface", "interface", interfaceName)
rrdService := NewRRDService(log, rrdDir, interfaceName)
return &Service{
logger: log,
rrdService: rrdService,
logger: log,
}
}
// StartNetworkMonitoring starts the RRD collector for network monitoring
func (s *Service) StartNetworkMonitoring(ctx context.Context) error {
return s.rrdService.StartCollector(ctx, 10*time.Second)
}
// GetNetworkThroughput fetches network throughput data from RRD
func (s *Service) GetNetworkThroughput(ctx context.Context, duration time.Duration) ([]NetworkDataPoint, error) {
endTime := time.Now()
startTime := endTime.Add(-duration)
// Use 10 second resolution for recent data
return s.rrdService.FetchRRDData(ctx, startTime, endTime, "10")
}
// ServiceStatus represents a systemd service status
type ServiceStatus struct {
Name string `json:"name"`
@@ -116,37 +35,31 @@ type ServiceStatus struct {
// GetServiceStatus retrieves the status of a systemd service
func (s *Service) GetServiceStatus(ctx context.Context, serviceName string) (*ServiceStatus, error) {
status := &ServiceStatus{
Name: serviceName,
}
// Get each property individually to ensure correct parsing
properties := map[string]*string{
"ActiveState": &status.ActiveState,
"SubState": &status.SubState,
"LoadState": &status.LoadState,
"Description": &status.Description,
}
for prop, target := range properties {
cmd := exec.CommandContext(ctx, "systemctl", "show", serviceName, "--property", prop, "--value", "--no-pager")
output, err := cmd.Output()
if err != nil {
s.logger.Warn("Failed to get property", "service", serviceName, "property", prop, "error", err)
continue
}
*target = strings.TrimSpace(string(output))
}
// Get timestamp if available
cmd := exec.CommandContext(ctx, "systemctl", "show", serviceName, "--property", "ActiveEnterTimestamp", "--value", "--no-pager")
cmd := exec.CommandContext(ctx, "systemctl", "show", serviceName,
"--property=ActiveState,SubState,LoadState,Description,ActiveEnterTimestamp",
"--value", "--no-pager")
output, err := cmd.Output()
if err == nil {
timestamp := strings.TrimSpace(string(output))
if timestamp != "" {
if t, err := time.Parse("Mon 2006-01-02 15:04:05 MST", timestamp); err == nil {
status.Since = t
}
if err != nil {
return nil, fmt.Errorf("failed to get service status: %w", err)
}
lines := strings.Split(strings.TrimSpace(string(output)), "\n")
if len(lines) < 4 {
return nil, fmt.Errorf("invalid service status output")
}
status := &ServiceStatus{
Name: serviceName,
ActiveState: strings.TrimSpace(lines[0]),
SubState: strings.TrimSpace(lines[1]),
LoadState: strings.TrimSpace(lines[2]),
Description: strings.TrimSpace(lines[3]),
}
// Parse timestamp if available
if len(lines) > 4 && lines[4] != "" {
if t, err := time.Parse("Mon 2006-01-02 15:04:05 MST", strings.TrimSpace(lines[4])); err == nil {
status.Since = t
}
}
@@ -156,15 +69,10 @@ func (s *Service) GetServiceStatus(ctx context.Context, serviceName string) (*Se
// ListServices lists all Calypso-related services
func (s *Service) ListServices(ctx context.Context) ([]ServiceStatus, error) {
services := []string{
"ssh",
"sshd",
"smbd",
"iscsi-scst",
"nfs-server",
"nfs",
"mhvtl",
"calypso-api",
"scst",
"iscsi-scst",
"mhvtl",
"postgresql",
}
@@ -220,108 +128,6 @@ func (s *Service) GetJournalLogs(ctx context.Context, serviceName string, lines
return logs, nil
}
// SystemLogEntry represents a parsed system log entry
type SystemLogEntry struct {
Time string `json:"time"`
Level string `json:"level"`
Source string `json:"source"`
Message string `json:"message"`
}
// GetSystemLogs retrieves recent system logs from journalctl
func (s *Service) GetSystemLogs(ctx context.Context, limit int) ([]SystemLogEntry, error) {
if limit <= 0 || limit > 100 {
limit = 30 // Default to 30 logs
}
cmd := exec.CommandContext(ctx, "journalctl",
"-n", fmt.Sprintf("%d", limit),
"-o", "json",
"--no-pager",
"--since", "1 hour ago") // Only get logs from last hour
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get system logs: %w", err)
}
var logs []SystemLogEntry
linesOutput := strings.Split(strings.TrimSpace(string(output)), "\n")
for _, line := range linesOutput {
if line == "" {
continue
}
var logEntry map[string]interface{}
if err := json.Unmarshal([]byte(line), &logEntry); err != nil {
continue
}
// Parse timestamp (__REALTIME_TIMESTAMP is in microseconds)
var timeStr string
if timestamp, ok := logEntry["__REALTIME_TIMESTAMP"].(float64); ok {
// Convert microseconds to nanoseconds for time.Unix (1 microsecond = 1000 nanoseconds)
t := time.Unix(0, int64(timestamp)*1000)
timeStr = t.Format("15:04:05")
} else if timestamp, ok := logEntry["_SOURCE_REALTIME_TIMESTAMP"].(float64); ok {
t := time.Unix(0, int64(timestamp)*1000)
timeStr = t.Format("15:04:05")
} else {
timeStr = time.Now().Format("15:04:05")
}
// Parse log level (priority)
level := "INFO"
if priority, ok := logEntry["PRIORITY"].(float64); ok {
switch int(priority) {
case 0: // emerg
level = "EMERG"
case 1, 2, 3: // alert, crit, err
level = "ERROR"
case 4: // warning
level = "WARN"
case 5: // notice
level = "NOTICE"
case 6: // info
level = "INFO"
case 7: // debug
level = "DEBUG"
}
}
// Parse source (systemd unit or syslog identifier)
source := "system"
if unit, ok := logEntry["_SYSTEMD_UNIT"].(string); ok && unit != "" {
// Remove .service suffix if present
source = strings.TrimSuffix(unit, ".service")
} else if ident, ok := logEntry["SYSLOG_IDENTIFIER"].(string); ok && ident != "" {
source = ident
} else if comm, ok := logEntry["_COMM"].(string); ok && comm != "" {
source = comm
}
// Parse message
message := ""
if msg, ok := logEntry["MESSAGE"].(string); ok {
message = msg
}
if message != "" {
logs = append(logs, SystemLogEntry{
Time: timeStr,
Level: level,
Source: source,
Message: message,
})
}
}
// Reverse to get newest first
for i, j := 0, len(logs)-1; i < j; i, j = i+1, j-1 {
logs[i], logs[j] = logs[j], logs[i]
}
return logs, nil
}
// GenerateSupportBundle generates a diagnostic support bundle
func (s *Service) GenerateSupportBundle(ctx context.Context, outputPath string) error {
// Create bundle directory
@@ -369,679 +175,3 @@ func (s *Service) GenerateSupportBundle(ctx context.Context, outputPath string)
return nil
}
// NetworkInterface represents a network interface
type NetworkInterface struct {
Name string `json:"name"`
IPAddress string `json:"ip_address"`
Subnet string `json:"subnet"`
Status string `json:"status"` // "Connected" or "Down"
Speed string `json:"speed"` // e.g., "10 Gbps", "1 Gbps"
Role string `json:"role"` // "Management", "ISCSI", or empty
Gateway string `json:"gateway,omitempty"`
DNS1 string `json:"dns1,omitempty"`
DNS2 string `json:"dns2,omitempty"`
}
// ListNetworkInterfaces lists all network interfaces
func (s *Service) ListNetworkInterfaces(ctx context.Context) ([]NetworkInterface, error) {
// First, get all interface names and their states
cmd := exec.CommandContext(ctx, "ip", "link", "show")
output, err := cmd.Output()
if err != nil {
s.logger.Error("Failed to list interfaces", "error", err)
return nil, fmt.Errorf("failed to list interfaces: %w", err)
}
interfaceMap := make(map[string]*NetworkInterface)
lines := strings.Split(string(output), "\n")
s.logger.Debug("Parsing network interfaces", "output_lines", len(lines))
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Parse interface name and state
// Format: "2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000"
// Look for lines that start with a number followed by ":" (interface definition line)
// Simple check: line starts with digit, contains ":", and contains "state"
if len(line) > 0 && line[0] >= '0' && line[0] <= '9' && strings.Contains(line, ":") && strings.Contains(line, "state") {
parts := strings.Fields(line)
if len(parts) < 2 {
continue
}
// Extract interface name (e.g., "ens18:" or "lo:")
ifaceName := strings.TrimSuffix(parts[1], ":")
if ifaceName == "" || ifaceName == "lo" {
continue // Skip loopback
}
// Extract state - look for "state UP" or "state DOWN" in the line
state := "Down"
if strings.Contains(line, "state UP") {
state = "Connected"
} else if strings.Contains(line, "state DOWN") {
state = "Down"
}
s.logger.Info("Found interface", "name", ifaceName, "state", state)
interfaceMap[ifaceName] = &NetworkInterface{
Name: ifaceName,
Status: state,
Speed: "Unknown",
}
}
}
s.logger.Debug("Found interfaces from ip link", "count", len(interfaceMap))
// Get IP addresses for each interface
cmd = exec.CommandContext(ctx, "ip", "-4", "addr", "show")
output, err = cmd.Output()
if err != nil {
s.logger.Warn("Failed to get IP addresses", "error", err)
} else {
lines = strings.Split(string(output), "\n")
var currentIfaceName string
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Parse interface name (e.g., "2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP>")
if strings.Contains(line, ":") && !strings.Contains(line, "inet") && !strings.HasPrefix(line, "valid_lft") && !strings.HasPrefix(line, "altname") {
parts := strings.Fields(line)
if len(parts) >= 2 {
currentIfaceName = strings.TrimSuffix(parts[1], ":")
s.logger.Debug("Processing interface for IP", "name", currentIfaceName)
}
continue
}
// Parse IP address (e.g., "inet 10.10.14.16/24 brd 10.10.14.255 scope global ens18")
if strings.HasPrefix(line, "inet ") && currentIfaceName != "" && currentIfaceName != "lo" {
parts := strings.Fields(line)
if len(parts) >= 2 {
ipWithSubnet := parts[1] // e.g., "10.10.14.16/24"
ipParts := strings.Split(ipWithSubnet, "/")
if len(ipParts) == 2 {
ip := ipParts[0]
subnet := ipParts[1]
// Find or create interface
iface, exists := interfaceMap[currentIfaceName]
if !exists {
s.logger.Debug("Creating new interface entry", "name", currentIfaceName)
iface = &NetworkInterface{
Name: currentIfaceName,
Status: "Down",
Speed: "Unknown",
}
interfaceMap[currentIfaceName] = iface
}
iface.IPAddress = ip
iface.Subnet = subnet
s.logger.Debug("Set IP for interface", "name", currentIfaceName, "ip", ip, "subnet", subnet)
}
}
}
}
}
// Get default gateway for each interface
cmd = exec.CommandContext(ctx, "ip", "route", "show")
output, err = cmd.Output()
if err == nil {
lines = strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Parse default route: "default via 10.10.14.1 dev ens18"
if strings.HasPrefix(line, "default via ") {
parts := strings.Fields(line)
// Find "via" and "dev" in the parts
var gateway string
var ifaceName string
for i, part := range parts {
if part == "via" && i+1 < len(parts) {
gateway = parts[i+1]
}
if part == "dev" && i+1 < len(parts) {
ifaceName = parts[i+1]
}
}
if gateway != "" && ifaceName != "" {
if iface, exists := interfaceMap[ifaceName]; exists {
iface.Gateway = gateway
s.logger.Info("Set default gateway for interface", "name", ifaceName, "gateway", gateway)
}
}
} else if strings.Contains(line, " via ") && strings.Contains(line, " dev ") {
// Parse network route: "10.10.14.0/24 via 10.10.14.1 dev ens18"
// Or: "192.168.1.0/24 via 192.168.1.1 dev eth0"
parts := strings.Fields(line)
var gateway string
var ifaceName string
for i, part := range parts {
if part == "via" && i+1 < len(parts) {
gateway = parts[i+1]
}
if part == "dev" && i+1 < len(parts) {
ifaceName = parts[i+1]
}
}
// Only set gateway if it's not already set (prefer default route)
if gateway != "" && ifaceName != "" {
if iface, exists := interfaceMap[ifaceName]; exists {
if iface.Gateway == "" {
iface.Gateway = gateway
s.logger.Info("Set gateway from network route for interface", "name", ifaceName, "gateway", gateway)
}
}
}
}
}
} else {
s.logger.Warn("Failed to get routes", "error", err)
}
// Get DNS servers from systemd-resolved or /etc/resolv.conf
// Try systemd-resolved first
cmd = exec.CommandContext(ctx, "systemd-resolve", "--status")
output, err = cmd.Output()
dnsServers := []string{}
if err == nil {
// Parse DNS from systemd-resolve output
lines = strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "DNS Servers:") {
// Format: "DNS Servers: 8.8.8.8 8.8.4.4"
parts := strings.Fields(line)
if len(parts) >= 3 {
dnsServers = parts[2:]
}
break
}
}
} else {
// Fallback to /etc/resolv.conf
data, err := os.ReadFile("/etc/resolv.conf")
if err == nil {
lines = strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "nameserver ") {
dns := strings.TrimPrefix(line, "nameserver ")
dns = strings.TrimSpace(dns)
if dns != "" {
dnsServers = append(dnsServers, dns)
}
}
}
}
}
// Convert map to slice
var interfaces []NetworkInterface
s.logger.Debug("Converting interface map to slice", "map_size", len(interfaceMap))
for _, iface := range interfaceMap {
// Get speed for each interface using ethtool
if iface.Name != "" && iface.Name != "lo" {
cmd := exec.CommandContext(ctx, "ethtool", iface.Name)
output, err := cmd.Output()
if err == nil {
// Parse speed from ethtool output
ethtoolLines := strings.Split(string(output), "\n")
for _, ethtoolLine := range ethtoolLines {
if strings.Contains(ethtoolLine, "Speed:") {
parts := strings.Fields(ethtoolLine)
if len(parts) >= 2 {
iface.Speed = parts[1]
}
break
}
}
}
// Set DNS servers (use first two if available)
if len(dnsServers) > 0 {
iface.DNS1 = dnsServers[0]
}
if len(dnsServers) > 1 {
iface.DNS2 = dnsServers[1]
}
// Determine role based on interface name or IP (simple heuristic)
// You can enhance this with configuration file or database lookup
if strings.Contains(iface.Name, "eth") || strings.Contains(iface.Name, "ens") {
// Default to Management for first interface, ISCSI for others
if iface.Name == "eth0" || iface.Name == "ens18" {
iface.Role = "Management"
} else {
// Check if IP is in typical iSCSI range (10.x.x.x)
if strings.HasPrefix(iface.IPAddress, "10.") && iface.IPAddress != "" {
iface.Role = "ISCSI"
}
}
}
}
interfaces = append(interfaces, *iface)
}
// If no interfaces found, return empty slice
if len(interfaces) == 0 {
s.logger.Warn("No network interfaces found")
return []NetworkInterface{}, nil
}
s.logger.Info("Listed network interfaces", "count", len(interfaces))
return interfaces, nil
}
// GetManagementIPAddress returns the IP address of the management interface
func (s *Service) GetManagementIPAddress(ctx context.Context) (string, error) {
interfaces, err := s.ListNetworkInterfaces(ctx)
if err != nil {
return "", fmt.Errorf("failed to list network interfaces: %w", err)
}
// First, try to find interface with Role "Management"
for _, iface := range interfaces {
if iface.Role == "Management" && iface.IPAddress != "" && iface.Status == "Connected" {
s.logger.Info("Found management interface", "interface", iface.Name, "ip", iface.IPAddress)
return iface.IPAddress, nil
}
}
// Fallback: use interface with default route (primary interface)
for _, iface := range interfaces {
if iface.Gateway != "" && iface.IPAddress != "" && iface.Status == "Connected" {
s.logger.Info("Using primary interface as management", "interface", iface.Name, "ip", iface.IPAddress)
return iface.IPAddress, nil
}
}
// Final fallback: use first connected interface with IP
for _, iface := range interfaces {
if iface.IPAddress != "" && iface.Status == "Connected" && iface.Name != "lo" {
s.logger.Info("Using first connected interface as management", "interface", iface.Name, "ip", iface.IPAddress)
return iface.IPAddress, nil
}
}
return "", fmt.Errorf("no management interface found")
}
// UpdateNetworkInterfaceRequest represents the request to update a network interface
type UpdateNetworkInterfaceRequest struct {
IPAddress string `json:"ip_address"`
Subnet string `json:"subnet"`
Gateway string `json:"gateway,omitempty"`
DNS1 string `json:"dns1,omitempty"`
DNS2 string `json:"dns2,omitempty"`
Role string `json:"role,omitempty"`
}
// UpdateNetworkInterface updates network interface configuration
func (s *Service) UpdateNetworkInterface(ctx context.Context, ifaceName string, req UpdateNetworkInterfaceRequest) (*NetworkInterface, error) {
// Validate interface exists
cmd := exec.CommandContext(ctx, "ip", "link", "show", ifaceName)
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("interface %s not found: %w", ifaceName, err)
}
// Remove existing IP address if any
cmd = exec.CommandContext(ctx, "ip", "addr", "flush", "dev", ifaceName)
cmd.Run() // Ignore error, interface might not have IP
// Set new IP address and subnet
ipWithSubnet := fmt.Sprintf("%s/%s", req.IPAddress, req.Subnet)
cmd = exec.CommandContext(ctx, "ip", "addr", "add", ipWithSubnet, "dev", ifaceName)
output, err := cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to set IP address", "interface", ifaceName, "error", err, "output", string(output))
return nil, fmt.Errorf("failed to set IP address: %w", err)
}
// Remove existing default route if any
cmd = exec.CommandContext(ctx, "ip", "route", "del", "default")
cmd.Run() // Ignore error, might not exist
// Set gateway if provided
if req.Gateway != "" {
cmd = exec.CommandContext(ctx, "ip", "route", "add", "default", "via", req.Gateway, "dev", ifaceName)
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to set gateway", "interface", ifaceName, "error", err, "output", string(output))
return nil, fmt.Errorf("failed to set gateway: %w", err)
}
}
// Update DNS in systemd-resolved or /etc/resolv.conf
if req.DNS1 != "" || req.DNS2 != "" {
// Try using systemd-resolve first
cmd = exec.CommandContext(ctx, "systemd-resolve", "--status")
if cmd.Run() == nil {
// systemd-resolve is available, use it
dnsServers := []string{}
if req.DNS1 != "" {
dnsServers = append(dnsServers, req.DNS1)
}
if req.DNS2 != "" {
dnsServers = append(dnsServers, req.DNS2)
}
if len(dnsServers) > 0 {
// Use resolvectl to set DNS (newer systemd)
cmd = exec.CommandContext(ctx, "resolvectl", "dns", ifaceName, strings.Join(dnsServers, " "))
if cmd.Run() != nil {
// Fallback to systemd-resolve
cmd = exec.CommandContext(ctx, "systemd-resolve", "--interface", ifaceName, "--set-dns", strings.Join(dnsServers, " "))
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to set DNS via systemd-resolve", "error", err, "output", string(output))
}
}
}
} else {
// Fallback: update /etc/resolv.conf
resolvContent := "# Generated by Calypso\n"
if req.DNS1 != "" {
resolvContent += fmt.Sprintf("nameserver %s\n", req.DNS1)
}
if req.DNS2 != "" {
resolvContent += fmt.Sprintf("nameserver %s\n", req.DNS2)
}
tmpPath := "/tmp/resolv.conf." + fmt.Sprintf("%d", time.Now().Unix())
if err := os.WriteFile(tmpPath, []byte(resolvContent), 0644); err != nil {
s.logger.Warn("Failed to write temporary resolv.conf", "error", err)
} else {
cmd = exec.CommandContext(ctx, "sh", "-c", fmt.Sprintf("mv %s /etc/resolv.conf", tmpPath))
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to update /etc/resolv.conf", "error", err, "output", string(output))
}
}
}
}
// Bring interface up
cmd = exec.CommandContext(ctx, "ip", "link", "set", ifaceName, "up")
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to bring interface up", "interface", ifaceName, "error", err, "output", string(output))
}
// Return updated interface
updatedIface := &NetworkInterface{
Name: ifaceName,
IPAddress: req.IPAddress,
Subnet: req.Subnet,
Gateway: req.Gateway,
DNS1: req.DNS1,
DNS2: req.DNS2,
Role: req.Role,
Status: "Connected",
Speed: "Unknown", // Will be updated on next list
}
s.logger.Info("Updated network interface", "interface", ifaceName, "ip", req.IPAddress, "subnet", req.Subnet)
return updatedIface, nil
}
// SaveNTPSettings saves NTP configuration to the OS
func (s *Service) SaveNTPSettings(ctx context.Context, settings NTPSettings) error {
// Set timezone using timedatectl
if settings.Timezone != "" {
cmd := exec.CommandContext(ctx, "timedatectl", "set-timezone", settings.Timezone)
output, err := cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to set timezone", "timezone", settings.Timezone, "error", err, "output", string(output))
return fmt.Errorf("failed to set timezone: %w", err)
}
s.logger.Info("Timezone set", "timezone", settings.Timezone)
}
// Configure NTP servers in systemd-timesyncd
if len(settings.NTPServers) > 0 {
configPath := "/etc/systemd/timesyncd.conf"
// Build config content
configContent := "[Time]\n"
configContent += "NTP="
for i, server := range settings.NTPServers {
if i > 0 {
configContent += " "
}
configContent += server
}
configContent += "\n"
// Write to temporary file first, then move to final location (requires root)
tmpPath := "/tmp/timesyncd.conf." + fmt.Sprintf("%d", time.Now().Unix())
if err := os.WriteFile(tmpPath, []byte(configContent), 0644); err != nil {
s.logger.Error("Failed to write temporary NTP config", "error", err)
return fmt.Errorf("failed to write temporary NTP configuration: %w", err)
}
// Move to final location using sudo (requires root privileges)
cmd := exec.CommandContext(ctx, "sh", "-c", fmt.Sprintf("mv %s %s", tmpPath, configPath))
output, err := cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to move NTP config", "error", err, "output", string(output))
os.Remove(tmpPath) // Clean up temp file
return fmt.Errorf("failed to move NTP configuration: %w", err)
}
// Restart systemd-timesyncd to apply changes
cmd = exec.CommandContext(ctx, "systemctl", "restart", "systemd-timesyncd")
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to restart systemd-timesyncd", "error", err, "output", string(output))
return fmt.Errorf("failed to restart systemd-timesyncd: %w", err)
}
s.logger.Info("NTP servers configured", "servers", settings.NTPServers)
}
return nil
}
// GetNTPSettings retrieves current NTP configuration from the OS
func (s *Service) GetNTPSettings(ctx context.Context) (*NTPSettings, error) {
settings := &NTPSettings{
NTPServers: []string{},
}
// Get current timezone using timedatectl
cmd := exec.CommandContext(ctx, "timedatectl", "show", "--property=Timezone", "--value")
output, err := cmd.Output()
if err != nil {
s.logger.Warn("Failed to get timezone", "error", err)
settings.Timezone = "Etc/UTC" // Default fallback
} else {
settings.Timezone = strings.TrimSpace(string(output))
if settings.Timezone == "" {
settings.Timezone = "Etc/UTC"
}
}
// Read NTP servers from systemd-timesyncd config
configPath := "/etc/systemd/timesyncd.conf"
data, err := os.ReadFile(configPath)
if err != nil {
s.logger.Warn("Failed to read NTP config", "error", err)
// Default NTP servers if config file doesn't exist
settings.NTPServers = []string{"pool.ntp.org", "time.google.com"}
} else {
// Parse NTP servers from config file
lines := strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "NTP=") {
ntpLine := strings.TrimPrefix(line, "NTP=")
if ntpLine != "" {
servers := strings.Fields(ntpLine)
settings.NTPServers = servers
break
}
}
}
// If no NTP servers found in config, use defaults
if len(settings.NTPServers) == 0 {
settings.NTPServers = []string{"pool.ntp.org", "time.google.com"}
}
}
return settings, nil
}
// ExecuteCommand executes a shell command and returns the output
// service parameter is optional and can be: system, scst, storage, backup, tape
func (s *Service) ExecuteCommand(ctx context.Context, command string, service string) (string, error) {
// Sanitize command - basic security check
command = strings.TrimSpace(command)
if command == "" {
return "", fmt.Errorf("command cannot be empty")
}
// Block dangerous commands that could harm the system
dangerousCommands := []string{
"rm -rf /",
"dd if=",
":(){ :|:& };:",
"mkfs",
"fdisk",
"parted",
"format",
"> /dev/sd",
"mkfs.ext",
"mkfs.xfs",
"mkfs.btrfs",
"wipefs",
}
commandLower := strings.ToLower(command)
for _, dangerous := range dangerousCommands {
if strings.Contains(commandLower, dangerous) {
return "", fmt.Errorf("command blocked for security reasons")
}
}
// Service-specific command handling
switch service {
case "scst":
// Allow SCST admin commands
if strings.HasPrefix(command, "scstadmin") {
// SCST commands are safe
break
}
case "backup":
// Allow bconsole commands
if strings.HasPrefix(command, "bconsole") {
// Backup console commands are safe
break
}
case "storage":
// Allow ZFS and storage commands
if strings.HasPrefix(command, "zfs") || strings.HasPrefix(command, "zpool") || strings.HasPrefix(command, "lsblk") {
// Storage commands are safe
break
}
case "tape":
// Allow tape library commands
if strings.HasPrefix(command, "mtx") || strings.HasPrefix(command, "lsscsi") || strings.HasPrefix(command, "sg_") {
// Tape commands are safe
break
}
}
// Execute command with timeout (30 seconds)
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
// Check if command already has sudo (reuse commandLower from above)
hasSudo := strings.HasPrefix(commandLower, "sudo ")
// Determine if command needs sudo based on service and command type
needsSudo := false
if !hasSudo {
// Commands that typically need sudo
sudoCommands := []string{
"scstadmin",
"systemctl",
"zfs",
"zpool",
"mount",
"umount",
"ip link",
"ip addr",
"iptables",
"journalctl",
}
for _, sudoCmd := range sudoCommands {
if strings.HasPrefix(commandLower, sudoCmd) {
needsSudo = true
break
}
}
// Service-specific sudo requirements
switch service {
case "scst":
// All SCST admin commands need sudo
if strings.HasPrefix(commandLower, "scstadmin") {
needsSudo = true
}
case "storage":
// ZFS commands typically need sudo
if strings.HasPrefix(commandLower, "zfs") || strings.HasPrefix(commandLower, "zpool") {
needsSudo = true
}
case "system":
// System commands like systemctl need sudo
if strings.HasPrefix(commandLower, "systemctl") || strings.HasPrefix(commandLower, "journalctl") {
needsSudo = true
}
}
}
// Build command with or without sudo
var cmd *exec.Cmd
if needsSudo && !hasSudo {
// Use sudo for privileged commands (if not already present)
cmd = exec.CommandContext(ctx, "sudo", "sh", "-c", command)
} else {
// Regular command (or already has sudo)
cmd = exec.CommandContext(ctx, "sh", "-c", command)
}
cmd.Env = append(os.Environ(), "TERM=xterm-256color")
cmd.Env = append(os.Environ(), "TERM=xterm-256color")
output, err := cmd.CombinedOutput()
if err != nil {
// Return output even if there's an error (some commands return non-zero exit codes)
outputStr := string(output)
if len(outputStr) > 0 {
return outputStr, nil
}
return "", fmt.Errorf("command execution failed: %w", err)
}
return string(output), nil
}

View File

@@ -1,328 +0,0 @@
package system
import (
"encoding/json"
"io"
"net/http"
"os"
"os/exec"
"os/user"
"sync"
"syscall"
"time"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/creack/pty"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
)
const (
// WebSocket timeouts
writeWait = 10 * time.Second
pongWait = 60 * time.Second
pingPeriod = (pongWait * 9) / 10
)
var upgrader = websocket.Upgrader{
ReadBufferSize: 4096,
WriteBufferSize: 4096,
CheckOrigin: func(r *http.Request) bool {
// Allow all origins - in production, validate against allowed domains
return true
},
}
// TerminalSession manages a single terminal session
type TerminalSession struct {
conn *websocket.Conn
pty *os.File
cmd *exec.Cmd
logger *logger.Logger
mu sync.RWMutex
closed bool
username string
done chan struct{}
}
// HandleTerminalWebSocket handles WebSocket connection for terminal
func HandleTerminalWebSocket(c *gin.Context, log *logger.Logger) {
// Verify authentication
userID, exists := c.Get("user_id")
if !exists {
log.Warn("Terminal WebSocket: unauthorized access", "ip", c.ClientIP())
c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
return
}
username, _ := c.Get("username")
if username == nil {
username = userID
}
log.Info("Terminal WebSocket: connection attempt", "username", username, "ip", c.ClientIP())
// Upgrade connection
conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
log.Error("Terminal WebSocket: upgrade failed", "error", err)
return
}
log.Info("Terminal WebSocket: connection upgraded", "username", username)
// Create session
session := &TerminalSession{
conn: conn,
logger: log,
username: username.(string),
done: make(chan struct{}),
}
// Start terminal
if err := session.startPTY(); err != nil {
log.Error("Terminal WebSocket: failed to start PTY", "error", err, "username", username)
session.sendError(err.Error())
session.close()
return
}
// Handle messages and PTY output
go session.handleRead()
go session.handleWrite()
}
// startPTY starts the PTY session
func (s *TerminalSession) startPTY() error {
// Get user info
currentUser, err := user.Lookup(s.username)
if err != nil {
// Fallback to current user
currentUser, err = user.Current()
if err != nil {
return err
}
}
// Determine shell
shell := os.Getenv("SHELL")
if shell == "" {
shell = "/bin/bash"
}
// Create command
s.cmd = exec.Command(shell)
s.cmd.Env = append(os.Environ(),
"TERM=xterm-256color",
"HOME="+currentUser.HomeDir,
"USER="+currentUser.Username,
"USERNAME="+currentUser.Username,
)
s.cmd.Dir = currentUser.HomeDir
// Start PTY
ptyFile, err := pty.Start(s.cmd)
if err != nil {
return err
}
s.pty = ptyFile
// Set initial size
pty.Setsize(ptyFile, &pty.Winsize{
Rows: 24,
Cols: 80,
})
return nil
}
// handleRead handles incoming WebSocket messages
func (s *TerminalSession) handleRead() {
defer s.close()
// Set read deadline and pong handler
s.conn.SetReadDeadline(time.Now().Add(pongWait))
s.conn.SetPongHandler(func(string) error {
s.conn.SetReadDeadline(time.Now().Add(pongWait))
return nil
})
for {
select {
case <-s.done:
return
default:
messageType, data, err := s.conn.ReadMessage()
if err != nil {
if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway, websocket.CloseAbnormalClosure) {
s.logger.Error("Terminal WebSocket: read error", "error", err)
}
return
}
// Handle binary messages (raw input)
if messageType == websocket.BinaryMessage {
s.writeToPTY(data)
continue
}
// Handle text messages (JSON commands)
if messageType == websocket.TextMessage {
var msg map[string]interface{}
if err := json.Unmarshal(data, &msg); err != nil {
continue
}
switch msg["type"] {
case "input":
if data, ok := msg["data"].(string); ok {
s.writeToPTY([]byte(data))
}
case "resize":
if cols, ok1 := msg["cols"].(float64); ok1 {
if rows, ok2 := msg["rows"].(float64); ok2 {
s.resizePTY(uint16(cols), uint16(rows))
}
}
case "ping":
s.writeWS(websocket.TextMessage, []byte(`{"type":"pong"}`))
}
}
}
}
}
// handleWrite handles PTY output to WebSocket
func (s *TerminalSession) handleWrite() {
defer s.close()
ticker := time.NewTicker(pingPeriod)
defer ticker.Stop()
// Read from PTY and write to WebSocket
buffer := make([]byte, 4096)
for {
select {
case <-s.done:
return
case <-ticker.C:
// Send ping
if err := s.writeWS(websocket.PingMessage, nil); err != nil {
return
}
default:
// Read from PTY
if s.pty != nil {
n, err := s.pty.Read(buffer)
if err != nil {
if err != io.EOF {
s.logger.Error("Terminal WebSocket: PTY read error", "error", err)
}
return
}
if n > 0 {
// Write binary data to WebSocket
if err := s.writeWS(websocket.BinaryMessage, buffer[:n]); err != nil {
return
}
}
}
}
}
}
// writeToPTY writes data to PTY
func (s *TerminalSession) writeToPTY(data []byte) {
s.mu.RLock()
closed := s.closed
pty := s.pty
s.mu.RUnlock()
if closed || pty == nil {
return
}
if _, err := pty.Write(data); err != nil {
s.logger.Error("Terminal WebSocket: PTY write error", "error", err)
}
}
// resizePTY resizes the PTY
func (s *TerminalSession) resizePTY(cols, rows uint16) {
s.mu.RLock()
closed := s.closed
ptyFile := s.pty
s.mu.RUnlock()
if closed || ptyFile == nil {
return
}
// Use pty.Setsize from package, not method from variable
pty.Setsize(ptyFile, &pty.Winsize{
Cols: cols,
Rows: rows,
})
}
// writeWS writes message to WebSocket
func (s *TerminalSession) writeWS(messageType int, data []byte) error {
s.mu.RLock()
closed := s.closed
conn := s.conn
s.mu.RUnlock()
if closed || conn == nil {
return io.ErrClosedPipe
}
conn.SetWriteDeadline(time.Now().Add(writeWait))
return conn.WriteMessage(messageType, data)
}
// sendError sends error message
func (s *TerminalSession) sendError(errMsg string) {
msg := map[string]interface{}{
"type": "error",
"error": errMsg,
}
data, _ := json.Marshal(msg)
s.writeWS(websocket.TextMessage, data)
}
// close closes the terminal session
func (s *TerminalSession) close() {
s.mu.Lock()
defer s.mu.Unlock()
if s.closed {
return
}
s.closed = true
close(s.done)
// Close PTY
if s.pty != nil {
s.pty.Close()
}
// Kill process
if s.cmd != nil && s.cmd.Process != nil {
s.cmd.Process.Signal(syscall.SIGTERM)
time.Sleep(100 * time.Millisecond)
if s.cmd.ProcessState == nil || !s.cmd.ProcessState.Exited() {
s.cmd.Process.Kill()
}
}
// Close WebSocket
if s.conn != nil {
s.conn.Close()
}
s.logger.Info("Terminal WebSocket: session closed", "username", s.username)
}

View File

@@ -1,7 +1,6 @@
package tape_vtl
import (
"fmt"
"net/http"
"github.com/atlasos/calypso/internal/common/database"
@@ -30,7 +29,6 @@ func NewHandler(db *database.DB, log *logger.Logger) *Handler {
// ListLibraries lists all virtual tape libraries
func (h *Handler) ListLibraries(c *gin.Context) {
h.logger.Info("ListLibraries called")
libraries, err := h.service.ListLibraries(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list libraries", "error", err)
@@ -38,36 +36,7 @@ func (h *Handler) ListLibraries(c *gin.Context) {
return
}
h.logger.Info("ListLibraries result", "count", len(libraries), "is_nil", libraries == nil)
// Ensure we return an empty array instead of null
if libraries == nil {
h.logger.Warn("Libraries is nil, converting to empty array")
libraries = []VirtualTapeLibrary{}
}
h.logger.Info("Returning libraries", "count", len(libraries), "libraries", libraries)
// Ensure we always return an array, never null
if libraries == nil {
libraries = []VirtualTapeLibrary{}
}
// Force empty array if nil (double check)
if libraries == nil {
h.logger.Warn("Libraries is still nil in handler, forcing empty array")
libraries = []VirtualTapeLibrary{}
}
// Use explicit JSON marshalling to ensure empty array, not null
response := map[string]interface{}{
"libraries": libraries,
}
h.logger.Info("Response payload", "count", len(libraries), "response_type", fmt.Sprintf("%T", libraries))
// Use JSON marshalling that handles empty slices correctly
c.JSON(http.StatusOK, response)
c.JSON(http.StatusOK, gin.H{"libraries": libraries})
}
// GetLibrary retrieves a library by ID
@@ -100,11 +69,11 @@ func (h *Handler) GetLibrary(c *gin.Context) {
// CreateLibraryRequest represents a library creation request
type CreateLibraryRequest struct {
Name string `json:"name" binding:"required"`
Description string `json:"description"`
Name string `json:"name" binding:"required"`
Description string `json:"description"`
BackingStorePath string `json:"backing_store_path" binding:"required"`
SlotCount int `json:"slot_count" binding:"required"`
DriveCount int `json:"drive_count" binding:"required"`
SlotCount int `json:"slot_count" binding:"required"`
DriveCount int `json:"drive_count" binding:"required"`
}
// CreateLibrary creates a new virtual tape library
@@ -192,10 +161,10 @@ func (h *Handler) GetLibraryTapes(c *gin.Context) {
// CreateTapeRequest represents a tape creation request
type CreateTapeRequest struct {
Barcode string `json:"barcode" binding:"required"`
SlotNumber int `json:"slot_number" binding:"required"`
TapeType string `json:"tape_type" binding:"required"`
SizeGB int64 `json:"size_gb" binding:"required"`
Barcode string `json:"barcode" binding:"required"`
SlotNumber int `json:"slot_number" binding:"required"`
TapeType string `json:"tape_type" binding:"required"`
SizeGB int64 `json:"size_gb" binding:"required"`
}
// CreateTape creates a new virtual tape
@@ -249,9 +218,9 @@ func (h *Handler) LoadTape(c *gin.Context) {
// Create async task
taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
tasks.TaskTypeLoadUnload, userID.(string), map[string]interface{}{
"operation": "load_tape",
"library_id": libraryID,
"slot_number": req.SlotNumber,
"operation": "load_tape",
"library_id": libraryID,
"slot_number": req.SlotNumber,
"drive_number": req.DriveNumber,
})
if err != nil {
@@ -299,9 +268,9 @@ func (h *Handler) UnloadTape(c *gin.Context) {
// Create async task
taskID, err := h.taskEngine.CreateTask(c.Request.Context(),
tasks.TaskTypeLoadUnload, userID.(string), map[string]interface{}{
"operation": "unload_tape",
"library_id": libraryID,
"slot_number": req.SlotNumber,
"operation": "unload_tape",
"library_id": libraryID,
"slot_number": req.SlotNumber,
"drive_number": req.DriveNumber,
})
if err != nil {
@@ -326,3 +295,4 @@ func (h *Handler) UnloadTape(c *gin.Context) {
c.JSON(http.StatusAccepted, gin.H{"task_id": taskID})
}

View File

@@ -67,7 +67,7 @@ func (m *MHVTLMonitor) Stop() {
// syncMHVTL parses mhvtl configuration and syncs to database
func (m *MHVTLMonitor) syncMHVTL(ctx context.Context) {
m.logger.Info("Running MHVTL configuration sync")
m.logger.Debug("Running MHVTL configuration sync")
deviceConfPath := filepath.Join(m.configPath, "device.conf")
if _, err := os.Stat(deviceConfPath); os.IsNotExist(err) {
@@ -84,11 +84,6 @@ func (m *MHVTLMonitor) syncMHVTL(ctx context.Context) {
m.logger.Info("Parsed MHVTL configuration", "libraries", len(libraries), "drives", len(drives))
// Log parsed drives for debugging
for _, drive := range drives {
m.logger.Debug("Parsed drive", "drive_id", drive.DriveID, "library_id", drive.LibraryID, "slot", drive.Slot)
}
// Sync libraries to database
for _, lib := range libraries {
if err := m.syncLibrary(ctx, lib); err != nil {
@@ -99,9 +94,7 @@ func (m *MHVTLMonitor) syncMHVTL(ctx context.Context) {
// Sync drives to database
for _, drive := range drives {
if err := m.syncDrive(ctx, drive); err != nil {
m.logger.Error("Failed to sync drive", "drive_id", drive.DriveID, "library_id", drive.LibraryID, "slot", drive.Slot, "error", err)
} else {
m.logger.Debug("Synced drive", "drive_id", drive.DriveID, "library_id", drive.LibraryID, "slot", drive.Slot)
m.logger.Error("Failed to sync drive", "drive_id", drive.DriveID, "error", err)
}
}
@@ -113,7 +106,7 @@ func (m *MHVTLMonitor) syncMHVTL(ctx context.Context) {
}
}
m.logger.Info("MHVTL configuration sync completed")
m.logger.Debug("MHVTL configuration sync completed")
}
// LibraryInfo represents a library from device.conf
@@ -196,7 +189,6 @@ func (m *MHVTLMonitor) parseDeviceConf(ctx context.Context, path string) ([]Libr
Target: matches[3],
LUN: matches[4],
}
// Library ID and Slot might be on the same line or next line
if matches := libraryIDRegex.FindStringSubmatch(line); matches != nil {
libID, _ := strconv.Atoi(matches[1])
slot, _ := strconv.Atoi(matches[2])
@@ -206,63 +198,34 @@ func (m *MHVTLMonitor) parseDeviceConf(ctx context.Context, path string) ([]Libr
continue
}
// Parse library fields (only if we're in a library section and not in a drive section)
if currentLibrary != nil && currentDrive == nil {
// Handle both "Vendor identification:" and " Vendor identification:" (with leading space)
if strings.Contains(line, "Vendor identification:") {
parts := strings.Split(line, "Vendor identification:")
if len(parts) > 1 {
currentLibrary.Vendor = strings.TrimSpace(parts[1])
m.logger.Debug("Parsed vendor", "vendor", currentLibrary.Vendor, "library_id", currentLibrary.LibraryID)
}
} else if strings.Contains(line, "Product identification:") {
parts := strings.Split(line, "Product identification:")
if len(parts) > 1 {
currentLibrary.Product = strings.TrimSpace(parts[1])
m.logger.Info("Parsed library product", "product", currentLibrary.Product, "library_id", currentLibrary.LibraryID)
}
} else if strings.Contains(line, "Unit serial number:") {
parts := strings.Split(line, "Unit serial number:")
if len(parts) > 1 {
currentLibrary.SerialNumber = strings.TrimSpace(parts[1])
}
} else if strings.Contains(line, "Home directory:") {
parts := strings.Split(line, "Home directory:")
if len(parts) > 1 {
currentLibrary.HomeDirectory = strings.TrimSpace(parts[1])
}
// Parse library fields
if currentLibrary != nil {
if strings.HasPrefix(line, "Vendor identification:") {
currentLibrary.Vendor = strings.TrimSpace(strings.TrimPrefix(line, "Vendor identification:"))
} else if strings.HasPrefix(line, "Product identification:") {
currentLibrary.Product = strings.TrimSpace(strings.TrimPrefix(line, "Product identification:"))
} else if strings.HasPrefix(line, "Unit serial number:") {
currentLibrary.SerialNumber = strings.TrimSpace(strings.TrimPrefix(line, "Unit serial number:"))
} else if strings.HasPrefix(line, "Home directory:") {
currentLibrary.HomeDirectory = strings.TrimSpace(strings.TrimPrefix(line, "Home directory:"))
}
}
// Parse drive fields
if currentDrive != nil {
// Check for Library ID and Slot first (can be on separate line)
if strings.Contains(line, "Library ID:") && strings.Contains(line, "Slot:") {
if strings.HasPrefix(line, "Vendor identification:") {
currentDrive.Vendor = strings.TrimSpace(strings.TrimPrefix(line, "Vendor identification:"))
} else if strings.HasPrefix(line, "Product identification:") {
currentDrive.Product = strings.TrimSpace(strings.TrimPrefix(line, "Product identification:"))
} else if strings.HasPrefix(line, "Unit serial number:") {
currentDrive.SerialNumber = strings.TrimSpace(strings.TrimPrefix(line, "Unit serial number:"))
} else if strings.HasPrefix(line, "Library ID:") && strings.Contains(line, "Slot:") {
matches := libraryIDRegex.FindStringSubmatch(line)
if matches != nil {
libID, _ := strconv.Atoi(matches[1])
slot, _ := strconv.Atoi(matches[2])
currentDrive.LibraryID = libID
currentDrive.Slot = slot
m.logger.Debug("Parsed drive Library ID and Slot", "drive_id", currentDrive.DriveID, "library_id", libID, "slot", slot)
continue
}
}
// Handle both "Vendor identification:" and " Vendor identification:" (with leading space)
if strings.Contains(line, "Vendor identification:") {
parts := strings.Split(line, "Vendor identification:")
if len(parts) > 1 {
currentDrive.Vendor = strings.TrimSpace(parts[1])
}
} else if strings.Contains(line, "Product identification:") {
parts := strings.Split(line, "Product identification:")
if len(parts) > 1 {
currentDrive.Product = strings.TrimSpace(parts[1])
}
} else if strings.Contains(line, "Unit serial number:") {
parts := strings.Split(line, "Unit serial number:")
if len(parts) > 1 {
currentDrive.SerialNumber = strings.TrimSpace(parts[1])
}
}
}
@@ -292,17 +255,9 @@ func (m *MHVTLMonitor) syncLibrary(ctx context.Context, libInfo LibraryInfo) err
libInfo.LibraryID,
).Scan(&existingID)
m.logger.Debug("Syncing library", "library_id", libInfo.LibraryID, "vendor", libInfo.Vendor, "product", libInfo.Product)
// Use product identification for library name (without library ID)
libraryName := fmt.Sprintf("VTL-%d", libInfo.LibraryID)
if libInfo.Product != "" {
// Use only product name, without library ID
libraryName = libInfo.Product
m.logger.Info("Using product for library name", "product", libInfo.Product, "library_id", libInfo.LibraryID, "name", libraryName)
} else if libInfo.Vendor != "" {
libraryName = libInfo.Vendor
m.logger.Info("Using vendor for library name (product not available)", "vendor", libInfo.Vendor, "library_id", libInfo.LibraryID)
libraryName = fmt.Sprintf("%s-%d", libInfo.Product, libInfo.LibraryID)
}
if err == sql.ErrNoRows {
@@ -320,41 +275,23 @@ func (m *MHVTLMonitor) syncLibrary(ctx context.Context, libInfo LibraryInfo) err
_, err = m.service.db.ExecContext(ctx, `
INSERT INTO virtual_tape_libraries (
name, description, mhvtl_library_id, backing_store_path,
vendor, slot_count, drive_count, is_active
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
slot_count, drive_count, is_active
) VALUES ($1, $2, $3, $4, $5, $6, $7)
`, libraryName, fmt.Sprintf("MHVTL Library %d (%s)", libInfo.LibraryID, libInfo.Product),
libInfo.LibraryID, backingStorePath, libInfo.Vendor, slotCount, driveCount, true)
libInfo.LibraryID, backingStorePath, slotCount, driveCount, true)
if err != nil {
return fmt.Errorf("failed to insert library: %w", err)
}
m.logger.Info("Created virtual library from MHVTL", "library_id", libInfo.LibraryID, "name", libraryName)
} else if err == nil {
// Update existing library - also update name if product is available
updateName := libraryName
// If product exists and current name doesn't match, update it
if libInfo.Product != "" {
var currentName string
err := m.service.db.QueryRowContext(ctx,
"SELECT name FROM virtual_tape_libraries WHERE id = $1", existingID,
).Scan(&currentName)
if err == nil {
// Use only product name, without library ID
expectedName := libInfo.Product
if currentName != expectedName {
updateName = expectedName
m.logger.Info("Updating library name", "old", currentName, "new", updateName, "product", libInfo.Product)
}
}
}
m.logger.Info("Updating existing library", "library_id", libInfo.LibraryID, "product", libInfo.Product, "vendor", libInfo.Vendor, "old_name", libraryName, "new_name", updateName)
// Update existing library
_, err = m.service.db.ExecContext(ctx, `
UPDATE virtual_tape_libraries SET
name = $1, description = $2, backing_store_path = $3,
vendor = $4, is_active = $5, updated_at = NOW()
WHERE id = $6
`, updateName, fmt.Sprintf("MHVTL Library %d (%s)", libInfo.LibraryID, libInfo.Product),
libInfo.HomeDirectory, libInfo.Vendor, true, existingID)
is_active = $4, updated_at = NOW()
WHERE id = $5
`, libraryName, fmt.Sprintf("MHVTL Library %d (%s)", libInfo.LibraryID, libInfo.Product),
libInfo.HomeDirectory, true, existingID)
if err != nil {
return fmt.Errorf("failed to update library: %w", err)
}

View File

@@ -28,47 +28,46 @@ func NewService(db *database.DB, log *logger.Logger) *Service {
// VirtualTapeLibrary represents a virtual tape library
type VirtualTapeLibrary struct {
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
MHVTLibraryID int `json:"mhvtl_library_id"`
BackingStorePath string `json:"backing_store_path"`
Vendor string `json:"vendor,omitempty"`
SlotCount int `json:"slot_count"`
DriveCount int `json:"drive_count"`
IsActive bool `json:"is_active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
ID string `json:"id"`
Name string `json:"name"`
Description string `json:"description"`
MHVTLibraryID int `json:"mhvtl_library_id"`
BackingStorePath string `json:"backing_store_path"`
SlotCount int `json:"slot_count"`
DriveCount int `json:"drive_count"`
IsActive bool `json:"is_active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
}
// VirtualTapeDrive represents a virtual tape drive
type VirtualTapeDrive struct {
ID string `json:"id"`
LibraryID string `json:"library_id"`
DriveNumber int `json:"drive_number"`
DevicePath *string `json:"device_path,omitempty"`
StablePath *string `json:"stable_path,omitempty"`
Status string `json:"status"`
CurrentTapeID string `json:"current_tape_id,omitempty"`
IsActive bool `json:"is_active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
ID string `json:"id"`
LibraryID string `json:"library_id"`
DriveNumber int `json:"drive_number"`
DevicePath *string `json:"device_path,omitempty"`
StablePath *string `json:"stable_path,omitempty"`
Status string `json:"status"`
CurrentTapeID string `json:"current_tape_id,omitempty"`
IsActive bool `json:"is_active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
// VirtualTape represents a virtual tape
type VirtualTape struct {
ID string `json:"id"`
LibraryID string `json:"library_id"`
Barcode string `json:"barcode"`
SlotNumber int `json:"slot_number"`
ImageFilePath string `json:"image_file_path"`
SizeBytes int64 `json:"size_bytes"`
UsedBytes int64 `json:"used_bytes"`
TapeType string `json:"tape_type"`
Status string `json:"status"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
ID string `json:"id"`
LibraryID string `json:"library_id"`
Barcode string `json:"barcode"`
SlotNumber int `json:"slot_number"`
ImageFilePath string `json:"image_file_path"`
SizeBytes int64 `json:"size_bytes"`
UsedBytes int64 `json:"used_bytes"`
TapeType string `json:"tape_type"`
Status string `json:"status"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
// CreateLibrary creates a new virtual tape library
@@ -136,14 +135,14 @@ func (s *Service) CreateLibrary(ctx context.Context, name, description, backingS
for i := 1; i <= slotCount; i++ {
barcode := fmt.Sprintf("V%05d", i)
tape := VirtualTape{
LibraryID: lib.ID,
Barcode: barcode,
SlotNumber: i,
LibraryID: lib.ID,
Barcode: barcode,
SlotNumber: i,
ImageFilePath: filepath.Join(tapesPath, fmt.Sprintf("%s.img", barcode)),
SizeBytes: 800 * 1024 * 1024 * 1024, // 800 GB default (LTO-8)
UsedBytes: 0,
TapeType: "LTO-8",
Status: "idle",
SizeBytes: 800 * 1024 * 1024 * 1024, // 800 GB default (LTO-8)
UsedBytes: 0,
TapeType: "LTO-8",
Status: "idle",
}
if err := s.createTape(ctx, &tape); err != nil {
s.logger.Error("Failed to create tape", "slot", i, "error", err)
@@ -224,83 +223,49 @@ func (s *Service) createTape(ctx context.Context, tape *VirtualTape) error {
func (s *Service) ListLibraries(ctx context.Context) ([]VirtualTapeLibrary, error) {
query := `
SELECT id, name, description, mhvtl_library_id, backing_store_path,
COALESCE(vendor, '') as vendor,
slot_count, drive_count, is_active, created_at, updated_at, created_by
FROM virtual_tape_libraries
ORDER BY name
`
s.logger.Info("Executing query to list libraries")
rows, err := s.db.QueryContext(ctx, query)
if err != nil {
s.logger.Error("Failed to query libraries", "error", err)
return nil, fmt.Errorf("failed to list libraries: %w", err)
}
s.logger.Info("Query executed successfully, got rows")
defer rows.Close()
libraries := make([]VirtualTapeLibrary, 0) // Initialize as empty slice, not nil
s.logger.Info("Starting to scan library rows", "query", query)
rowCount := 0
var libraries []VirtualTapeLibrary
for rows.Next() {
rowCount++
var lib VirtualTapeLibrary
var description sql.NullString
var createdBy sql.NullString
err := rows.Scan(
&lib.ID, &lib.Name, &description, &lib.MHVTLibraryID, &lib.BackingStorePath,
&lib.Vendor,
&lib.ID, &lib.Name, &lib.Description, &lib.MHVTLibraryID, &lib.BackingStorePath,
&lib.SlotCount, &lib.DriveCount, &lib.IsActive,
&lib.CreatedAt, &lib.UpdatedAt, &createdBy,
&lib.CreatedAt, &lib.UpdatedAt, &lib.CreatedBy,
)
if err != nil {
s.logger.Error("Failed to scan library", "error", err, "row", rowCount)
s.logger.Error("Failed to scan library", "error", err)
continue
}
if description.Valid {
lib.Description = description.String
}
if createdBy.Valid {
lib.CreatedBy = createdBy.String
}
libraries = append(libraries, lib)
s.logger.Info("Added library to list", "library_id", lib.ID, "name", lib.Name, "mhvtl_id", lib.MHVTLibraryID)
}
s.logger.Info("Finished scanning library rows", "total_rows", rowCount, "libraries_added", len(libraries))
if err := rows.Err(); err != nil {
s.logger.Error("Error iterating library rows", "error", err)
return nil, fmt.Errorf("error iterating library rows: %w", err)
}
s.logger.Info("Listed virtual tape libraries", "count", len(libraries), "is_nil", libraries == nil)
// Ensure we return an empty slice, not nil
if libraries == nil {
s.logger.Warn("Libraries is nil in service, converting to empty array")
libraries = []VirtualTapeLibrary{}
}
s.logger.Info("Returning from service", "count", len(libraries), "is_nil", libraries == nil)
return libraries, nil
return libraries, rows.Err()
}
// GetLibrary retrieves a library by ID
func (s *Service) GetLibrary(ctx context.Context, id string) (*VirtualTapeLibrary, error) {
query := `
SELECT id, name, description, mhvtl_library_id, backing_store_path,
COALESCE(vendor, '') as vendor,
slot_count, drive_count, is_active, created_at, updated_at, created_by
FROM virtual_tape_libraries
WHERE id = $1
`
var lib VirtualTapeLibrary
var description sql.NullString
var createdBy sql.NullString
err := s.db.QueryRowContext(ctx, query, id).Scan(
&lib.ID, &lib.Name, &description, &lib.MHVTLibraryID, &lib.BackingStorePath,
&lib.Vendor,
&lib.ID, &lib.Name, &lib.Description, &lib.MHVTLibraryID, &lib.BackingStorePath,
&lib.SlotCount, &lib.DriveCount, &lib.IsActive,
&lib.CreatedAt, &lib.UpdatedAt, &createdBy,
&lib.CreatedAt, &lib.UpdatedAt, &lib.CreatedBy,
)
if err != nil {
if err == sql.ErrNoRows {
@@ -309,13 +274,6 @@ func (s *Service) GetLibrary(ctx context.Context, id string) (*VirtualTapeLibrar
return nil, fmt.Errorf("failed to get library: %w", err)
}
if description.Valid {
lib.Description = description.String
}
if createdBy.Valid {
lib.CreatedBy = createdBy.String
}
return &lib, nil
}
@@ -542,3 +500,4 @@ func (s *Service) DeleteLibrary(ctx context.Context, id string) error {
s.logger.Info("Virtual tape library deleted", "id", id, "name", lib.Name)
return nil
}

View File

@@ -1 +0,0 @@
/opt/calypso/conf/bacula

View File

@@ -1,25 +0,0 @@
[Unit]
Description=Calypso Stack Log Aggregator
Documentation=https://github.com/atlasos/calypso
After=network.target
Wants=calypso-api.service calypso-frontend.service
[Service]
Type=simple
# Run as root to access journald and write to /var/syslog
# Format: timestamp [service] message
ExecStart=/bin/bash -c '/usr/bin/journalctl -u calypso-api.service -u calypso-frontend.service -f --no-pager -o short-iso >> /var/syslog/calypso.log 2>&1'
Restart=always
RestartSec=5
# Security hardening
NoNewPrivileges=false
PrivateTmp=true
ReadWritePaths=/var/syslog
# Resource limits
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Some files were not shown because too many files have changed in this diff Show More