2 Commits

177 changed files with 701 additions and 30314 deletions


@@ -1,146 +0,0 @@
# Calypso Application Build Complete
**Date:** 2025-01-09
**Workdir:** `/opt/calypso`
**Config:** `/opt/calypso/conf`
**Status:** **BUILD SUCCESS**
## Build Summary
### ✅ Backend (Go Application)
- **Binary:** `/opt/calypso/bin/calypso-api`
- **Size:** 12 MB
- **Type:** ELF 64-bit LSB executable, statically linked
- **Build Flags:**
- Version: 1.0.0
- Build Time: $(date -u +%Y-%m-%dT%H:%M:%SZ)
- Git Commit: $(git rev-parse --short HEAD)
- Stripped: Yes (optimized for production)
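The `-ldflags` values above are injected at link time via `-X` flags. A minimal sketch of the receiving side, assuming a `main` package with link-time variables (the actual variable names and package path in `calypso-api` may differ):
```go
package main

import "fmt"

// Placeholders overridden at link time, e.g.:
//   go build -ldflags "-w -s \
//     -X main.version=1.0.0 \
//     -X main.buildTime=$(date -u +%Y-%m-%dT%H:%M:%SZ) \
//     -X main.gitCommit=$(git rev-parse --short HEAD)"
var (
	version   = "dev"
	buildTime = "unknown"
	gitCommit = "unknown"
)

func main() {
	fmt.Printf("calypso-api %s (built %s, commit %s)\n", version, buildTime, gitCommit)
}
```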
### ✅ Frontend (React + Vite)
- **Build Output:** `/opt/calypso/web/`
- **Build Size:**
- index.html: 0.67 kB
- CSS: 58.25 kB (gzip: 10.30 kB)
- JS: 1,235.25 kB (gzip: 299.52 kB)
- **Build Time:** ~10.46s
- **Status:** Production build complete
## Directory Structure
```
/opt/calypso/
├── bin/
│   └── calypso-api          # Backend binary (12 MB)
├── web/                     # Frontend static files
│   ├── index.html
│   ├── assets/
│   └── logo.png
├── conf/                    # Configuration files
│   ├── config.yaml          # Main config
│   ├── secrets.env          # Secrets (600 permissions)
│   ├── bacula/              # Bacula configs
│   ├── clamav/              # ClamAV configs
│   ├── nfs/                 # NFS configs
│   ├── scst/                # SCST configs
│   ├── vtl/                 # VTL configs
│   └── zfs/                 # ZFS configs
├── data/                    # Data directory
│   ├── storage/
│   └── vtl/
└── releases/
    └── 1.0.0/               # Versioned release
        ├── bin/
        │   └── calypso-api  # Versioned binary
        └── web/             # Versioned frontend
```
## Files Created
### Backend
- `/opt/calypso/bin/calypso-api` - Main backend binary
- `/opt/calypso/releases/1.0.0/bin/calypso-api` - Versioned binary
### Frontend
- `/opt/calypso/web/` - Production frontend build
- `/opt/calypso/releases/1.0.0/web/` - Versioned frontend
### Configuration
- `/opt/calypso/conf/config.yaml` - Main configuration
- `/opt/calypso/conf/secrets.env` - Secrets (600 permissions)
## Ownership & Permissions
- **Owner:** `calypso:calypso` (for application files)
- **Owner:** `root:root` (for secrets.env)
- **Permissions:**
- Binaries: `755` (executable)
- Config: `644` (readable)
- Secrets: `600` (owner only)
## Build Tools Used
- **Go:** 1.22.2 (installed via apt)
- **Node.js:** v23.11.1
- **npm:** 11.7.0
- **Build Command:**
```bash
# Backend
CGO_ENABLED=0 GOOS=linux go build -ldflags "-w -s" -a -installsuffix cgo -o /opt/calypso/bin/calypso-api ./cmd/calypso-api
# Frontend
cd frontend && npm run build
```
## Verification
✅ **Backend Binary:**
- File exists and is executable
- Statically linked (no external dependencies)
- Stripped (optimized size)
✅ **Frontend Build:**
- All assets built successfully
- Production optimized
- Ready for static file serving
✅ **Configuration:**
- Config files in place
- Secrets file secured (600 permissions)
- All component configs present
## Next Steps
1. ✅ Application built and ready
2. ⏭️ Configure systemd service to use `/opt/calypso/bin/calypso-api`
3. ⏭️ Setup reverse proxy (Caddy/Nginx) for frontend
4. ⏭️ Test application startup
5. ⏭️ Run database migrations (auto on first start)
## Configuration Notes
- **Config Location:** `/opt/calypso/conf/config.yaml`
- **Secrets Location:** `/opt/calypso/conf/secrets.env`
- **Database:** Will use credentials from secrets.env
- **Workdir:** `/opt/calypso` (as specified)
## Production Readiness
**Backend:**
- Statically linked binary (no runtime dependencies)
- Stripped and optimized
- Version information embedded
**Frontend:**
- Production build with minification
- Assets optimized
- Ready for CDN/static hosting
**Configuration:**
- Secure secrets management
- Organized config structure
- All component configs in place
---
**Build Status:** **COMPLETE**
**Ready for Deployment:** **YES**


@@ -1,540 +0,0 @@
# Calypso Appliance Component Review
**Review Date:** 2025-01-09
**Installation Directory:** `/opt/calypso`
**System:** Ubuntu 24.04 LTS
## Executive Summary
A comprehensive review of all major components of the Calypso appliance:
- **ZFS** - Primary storage layer
- **SCST** - iSCSI target framework
- **NFS** - Network File System sharing
- **SMB** - Samba/CIFS file sharing
- **ClamAV** - Antivirus scanning
- **MHVTL** - Virtual Tape Library
- **Bacula** - Backup software integration
**Overall Status:** All components are installed and running properly.
---
## 1. ZFS (Zettabyte File System)
### Status: ✅ **FULLY IMPLEMENTED**
### Implementation Locations
- **Backend Service:** `backend/internal/storage/zfs.go`
- **Handler:** `backend/internal/storage/handler.go`
- **Database Schema:** `backend/internal/common/database/migrations/002_storage_and_tape_schema.sql`
- **Frontend:** `frontend/src/pages/Storage.tsx`
- **API Client:** `frontend/src/api/storage.ts`
### Implemented Features
1. **Pool Management**
- Create pools with any of the standard RAID levels (stripe, mirror, raidz, raidz2, raidz3)
- List pools with health status
- Delete pools (with validation)
- Add spare disks
- Pool health monitoring (online, degraded, faulted, offline)
2. **Dataset Management**
- Create filesystem and volume datasets
- Set compression (off, lz4, zstd, gzip)
- Set quota and reservation
- Mount point management
- List datasets per pool
3. **ARC Statistics**
- Cache hit/miss statistics
- Memory usage tracking
- Performance metrics
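On Linux, ARC statistics are exposed through `/proc/spl/kstat/zfs/arcstats`. A minimal sketch of one way to collect them (not necessarily how `zfs.go` does it), assuming the usual two header lines followed by `name type data` rows:
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readARCStats parses /proc/spl/kstat/zfs/arcstats into a name -> value map.
func readARCStats() (map[string]uint64, error) {
	f, err := os.Open("/proc/spl/kstat/zfs/arcstats")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	stats := make(map[string]uint64)
	sc := bufio.NewScanner(f)
	for line := 0; sc.Scan(); line++ {
		if line < 2 { // skip the kstat header and the column-name row
			continue
		}
		fields := strings.Fields(sc.Text())
		if len(fields) != 3 {
			continue
		}
		if v, err := strconv.ParseUint(fields[2], 10, 64); err == nil {
			stats[fields[0]] = v
		}
	}
	return stats, sc.Err()
}

func main() {
	s, err := readARCStats()
	if err != nil {
		panic(err)
	}
	fmt.Println("ARC hits:", s["hits"], "misses:", s["misses"])
}
```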
### Configuration
- **Config Directory:** `/opt/calypso/conf/zfs/`
- **Service:** `zfs-zed.service` (ZFS Event Daemon) - ✅ Running
### API Endpoints
```
GET /api/v1/storage/zfs/pools
POST /api/v1/storage/zfs/pools
GET /api/v1/storage/zfs/pools/:id
DELETE /api/v1/storage/zfs/pools/:id
POST /api/v1/storage/zfs/pools/:id/spare
GET /api/v1/storage/zfs/pools/:id/datasets
POST /api/v1/storage/zfs/pools/:id/datasets
DELETE /api/v1/storage/zfs/pools/:id/datasets/:name
GET /api/v1/storage/zfs/arc/stats
```
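For reference, a minimal sketch of how a pool-create request could be mapped to `zpool` arguments for the RAID levels listed above; the function name and argument layout are illustrative, not the exact code in `zfs.go`:
```go
package main

import (
	"fmt"
	"os/exec"
)

// buildZpoolCreateArgs turns a pool name, RAID level, and disk list into
// zpool create arguments. A plain "stripe" needs no vdev keyword; the
// other levels prefix their member disks with the vdev type.
func buildZpoolCreateArgs(name, raidLevel string, disks []string) []string {
	args := []string{"create", "-f", name}
	switch raidLevel {
	case "mirror", "raidz", "raidz2", "raidz3":
		args = append(args, raidLevel)
	}
	return append(args, disks...)
}

func main() {
	args := buildZpoolCreateArgs("tank", "raidz2",
		[]string{"/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"})
	fmt.Println("would run:", exec.Command("zpool", args...).String())
}
```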
### Notes
- ✅ Complete implementation with solid error handling
- ✅ Support for all standard ZFS RAID levels
- ✅ Database persistence for tracking pools and datasets
- ✅ Integration with the task engine for async operations
---
## 2. SCST (Generic SCSI Target Subsystem)
### Status: ✅ **FULLY IMPLEMENTED**
### Implementation Locations
- **Backend Service:** `backend/internal/scst/service.go` (1135+ lines)
- **Handler:** `backend/internal/scst/handler.go` (794+ lines)
- **Database Schema:** `backend/internal/common/database/migrations/003_add_scst_schema.sql`
- **Frontend:** `frontend/src/pages/ISCSITargets.tsx`
- **API Client:** `frontend/src/api/scst.ts`
### Implemented Features
1. **Target Management**
- Create iSCSI targets with IQNs
- Enable/disable targets
- Delete targets
- Target types: disk, vtl, physical_tape
- Single-initiator policy for tape targets
2. **LUN Management**
- Add/remove LUNs on targets
- Automatic LUN numbering
- Handler types: vdisk_fileio, vdisk_blockio, tape, sg
- Device path mapping
3. **Initiator Management**
- Create initiator groups
- Add/remove initiators in groups
- ACL management per target
- CHAP authentication support
4. **Extent Management**
- Create/delete extents (backend devices)
- Handler selection (vdisk, tape, sg)
- Device path configuration
5. **Portal Management**
- Create/update/delete iSCSI portals
- IP address and port configuration
- Network interface binding
6. **Configuration Management**
- Apply SCST configuration
- Get/update config file
- List available handlers
### Configuration
- **Config Directory:** `/opt/calypso/conf/scst/`
- **Config File:** `/opt/calypso/conf/scst/scst.conf`
- **Service:** `iscsi-scstd.service` - ✅ Running (port 3260)
### API Endpoints
```
GET /api/v1/scst/targets
POST /api/v1/scst/targets
GET /api/v1/scst/targets/:id
POST /api/v1/scst/targets/:id/enable
POST /api/v1/scst/targets/:id/disable
DELETE /api/v1/scst/targets/:id
POST /api/v1/scst/targets/:id/luns
DELETE /api/v1/scst/targets/:id/luns/:lunId
GET /api/v1/scst/extents
POST /api/v1/scst/extents
DELETE /api/v1/scst/extents/:device
GET /api/v1/scst/initiators
GET /api/v1/scst/initiator-groups
POST /api/v1/scst/initiator-groups
GET /api/v1/scst/portals
POST /api/v1/scst/portals
POST /api/v1/scst/config/apply
GET /api/v1/scst/handlers
```
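A minimal sketch of how a target and its LUN mappings could be rendered into an `scst.conf` stanza; the layout follows common scstadmin config syntax, and the real service may order or nest attributes differently:
```go
package main

import (
	"fmt"
	"strings"
)

// renderISCSITarget renders one iSCSI target with its LUN -> device
// mappings as an scst.conf TARGET_DRIVER stanza (illustrative only).
func renderISCSITarget(iqn string, luns map[int]string) string {
	var b strings.Builder
	b.WriteString("TARGET_DRIVER iscsi {\n\tenabled 1\n")
	fmt.Fprintf(&b, "\tTARGET %s {\n", iqn)
	for lun, device := range luns {
		fmt.Fprintf(&b, "\t\tLUN %d %s\n", lun, device)
	}
	b.WriteString("\t\tenabled 1\n\t}\n}\n")
	return b.String()
}

func main() {
	fmt.Print(renderISCSITarget("iqn.2025-01.local.calypso:disk1",
		map[int]string{0: "disk1"}))
}
```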
### Notes
- ✅ Very thorough implementation with solid error handling
- ✅ Support for disk, VTL, and physical tape targets
- ✅ Automatic config file management
- ✅ Real-time target status monitoring
- ✅ Frontend auto-refreshes every 3 seconds
---
## 3. NFS (Network File System)
### Status: ✅ **FULLY IMPLEMENTED**
### Implementation Locations
- **Backend Service:** `backend/internal/shares/service.go`
- **Handler:** `backend/internal/shares/handler.go`
- **Database Schema:** `backend/internal/common/database/migrations/006_add_zfs_shares_and_iscsi.sql`
- **Frontend:** `frontend/src/pages/Shares.tsx`
- **API Client:** `frontend/src/api/shares.ts`
### Implemented Features
1. **Share Management**
- Create shares with NFS enabled
- Update share configuration
- Delete shares
- List all shares
2. **NFS Configuration**
- NFS options (rw, sync, no_subtree_check, etc.)
- Client access control (IP addresses/networks)
- Export management via `/etc/exports`
3. **Integration with ZFS**
- Shares are created from ZFS datasets
- Mount point derived automatically from the dataset
- Path validation
### Configuration
- **Config Directory:** `/opt/calypso/conf/nfs/`
- **Exports File:** `/etc/exports` (managed by Calypso)
- **Services:**
- `nfs-server.service` - ✅ Running
- `nfs-mountd.service` - ✅ Running
- `nfs-idmapd.service` - ✅ Running
### API Endpoints
```
GET /api/v1/shares
POST /api/v1/shares
GET /api/v1/shares/:id
PUT /api/v1/shares/:id
DELETE /api/v1/shares/:id
```
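A minimal sketch of how a share could be rendered into an `/etc/exports` entry; the option set mirrors the defaults mentioned above and is illustrative, not the service's exact output:
```go
package main

import (
	"fmt"
	"strings"
)

// exportsLine renders one export: the path followed by one
// "client(options)" entry per allowed client.
func exportsLine(path string, clients, opts []string) string {
	optStr := strings.Join(opts, ",")
	entries := make([]string, 0, len(clients))
	for _, c := range clients {
		entries = append(entries, fmt.Sprintf("%s(%s)", c, optStr))
	}
	return path + " " + strings.Join(entries, " ")
}

func main() {
	fmt.Println(exportsLine("/opt/calypso/data/pool/tank/share1",
		[]string{"10.10.14.0/24"},
		[]string{"rw", "sync", "no_subtree_check"}))
	// Output: /opt/calypso/data/pool/tank/share1 10.10.14.0/24(rw,sync,no_subtree_check)
}
```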
### Notes
- ✅ Automatic `/etc/exports` management
- ✅ Support for NFS v3 and v4
- ✅ Client access control via IPs/networks
- ✅ Integration with ZFS datasets
---
## 4. SMB (Samba/CIFS)
### Status: ✅ **FULLY IMPLEMENTED**
### Implementation Locations
- **Backend Service:** `backend/internal/shares/service.go` (shared with NFS)
- **Handler:** `backend/internal/shares/handler.go`
- **Database Schema:** `backend/internal/common/database/migrations/006_add_zfs_shares_and_iscsi.sql`
- **Frontend:** `frontend/src/pages/Shares.tsx`
- **API Client:** `frontend/src/api/shares.ts`
### Implemented Features
1. **SMB Share Management**
- Create shares with SMB enabled
- Update share configuration
- Delete shares
- Support for "both" (NFS + SMB) shares
2. **SMB Configuration**
- Share name customization
- Share path configuration
- Comment/description
- Guest access control
- Read-only option
- Browseable option
3. **Samba Integration**
- Automatic `/etc/samba/smb.conf` management
- Share section generation
- Service restart after changes
### Configuration
- **Config Directory:** `/opt/calypso/conf/samba/` (documentation)
- **Samba Config:** `/etc/samba/smb.conf` (managed by Calypso)
- **Service:** `smbd.service` - ✅ Running
### API Endpoints
```
GET /api/v1/shares
POST /api/v1/shares
GET /api/v1/shares/:id
PUT /api/v1/shares/:id
DELETE /api/v1/shares/:id
```
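A minimal sketch of the kind of `smb.conf` share section the service could generate; the parameters are standard Samba options, but the exact set Calypso writes may differ:
```go
package main

import (
	"fmt"
	"strings"
)

// smbShareSection renders one [share] section with the options the
// feature list above mentions: comment, guest access, read-only, browseable.
func smbShareSection(name, path, comment string, guestOK, readOnly, browseable bool) string {
	onOff := func(v bool) string {
		if v {
			return "yes"
		}
		return "no"
	}
	var b strings.Builder
	fmt.Fprintf(&b, "[%s]\n", name)
	fmt.Fprintf(&b, "   path = %s\n", path)
	fmt.Fprintf(&b, "   comment = %s\n", comment)
	fmt.Fprintf(&b, "   guest ok = %s\n", onOff(guestOK))
	fmt.Fprintf(&b, "   read only = %s\n", onOff(readOnly))
	fmt.Fprintf(&b, "   browseable = %s\n", onOff(browseable))
	return b.String()
}

func main() {
	fmt.Print(smbShareSection("share1", "/opt/calypso/data/pool/tank/share1",
		"Calypso managed share", false, false, true))
}
```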
### Notes
- ✅ Automatic Samba config management
- ✅ Support for guest access and read-only mode
- ✅ Integration with ZFS datasets
- ✅ Can be combined with NFS (share type: "both")
---
## 5. ClamAV (Antivirus)
### Status: ⚠️ **INSTALLED BUT NOT INTEGRATED**
### Implementation Locations
- **Installer Scripts:**
- `installer/alpha/scripts/dependencies.sh` (install_antivirus)
- `installer/alpha/scripts/configure-services.sh` (configure_clamav)
- **Documentation:** `docs/alpha/components/clamav/ClamAV-Installation-Guide.md`
### Implemented Features
1. **Installation**
- ✅ ClamAV daemon installation
- ✅ FreshClam (virus definition updater)
- ✅ ClamAV unofficial signatures
2. **Configuration**
- ✅ Quarantine directory: `/srv/calypso/quarantine`
- ✅ Config directory: `/opt/calypso/conf/clamav/`
- ✅ Systemd service override untuk custom config path
### Configuration
- **Config Directory:** `/opt/calypso/conf/clamav/`
- **Config Files:**
- `clamd.conf` - ClamAV daemon config
- `freshclam.conf` - Virus definition updater config
- **Quarantine:** `/srv/calypso/quarantine`
- **Services:**
- `clamav-daemon.service` - ✅ Running
- `clamav-freshclam.service` - ✅ Running
### API Integration
**NOT YET IMPLEMENTED** - There is no backend service and no API endpoints for:
- File scanning
- Quarantine management
- Scan scheduling
- Scan reports
### Notes
- ⚠️ ClamAV is installed and running, but **not yet integrated** with the Calypso API
- ⚠️ No API endpoints for scanning files on shares
- ⚠️ No UI for managing scans or quarantine
- 💡 **Recommendation:** Implement a "Share Shield" feature (see the sketch after this list) for:
- On-access scanning for SMB shares
- Scheduled scans for NFS shares
- Quarantine management UI
- Scan reports and alerts
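A minimal sketch of what such an integration could look like: asking the running clamd daemon to scan a path over its plain-text socket protocol. The socket path is the Ubuntu default and an assumption about this deployment; clamd must be able to read the scanned path:
```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

// scanPath sends "nSCAN <path>" to clamd and reads the one-line reply,
// e.g. "/path: OK" or "/path: Eicar-Test-Signature FOUND".
func scanPath(path string) (clean bool, report string, err error) {
	conn, err := net.DialTimeout("unix", "/var/run/clamav/clamd.ctl", 5*time.Second)
	if err != nil {
		return false, "", err
	}
	defer conn.Close()

	// The "n" prefix selects newline-terminated replies.
	if _, err := fmt.Fprintf(conn, "nSCAN %s\n", path); err != nil {
		return false, "", err
	}
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return false, "", err
	}
	reply = strings.TrimSpace(reply)
	return strings.HasSuffix(reply, "OK"), reply, nil
}

func main() {
	clean, report, err := scanPath("/srv/calypso/quarantine/sample")
	fmt.Println(clean, report, err)
}
```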
---
## 6. MHVTL (Virtual Tape Library)
### Status: ✅ **FULLY IMPLEMENTED**
### Implementation Locations
- **Backend Service:** `backend/internal/tape_vtl/service.go`
- **Handler:** `backend/internal/tape_vtl/handler.go`
- **MHVTL Monitor:** `backend/internal/tape_vtl/mhvtl_monitor.go`
- **Database Schema:** `backend/internal/common/database/migrations/007_add_vtl_schema.sql`
- **Frontend:** `frontend/src/pages/VTLDetail.tsx`, `frontend/src/pages/TapeLibraries.tsx`
- **API Client:** `frontend/src/api/tape.ts`
### Implemented Features
1. **Library Management**
- Create virtual tape libraries
- List libraries
- Get library details with drives and tapes
- Delete libraries (with safety checks)
- Automatic MHVTL library ID assignment
2. **Tape Management**
- Create virtual tapes with barcodes
- Slot assignment
- Tape size configuration
- Tape status tracking (idle, in_drive, exported)
- Tape image file management
3. **Drive Management**
- Automatic drive creation when a library is created
- Drive status tracking (idle, ready, error)
- Current tape tracking per drive
- Device path management
4. **Operations**
- Load a tape from slot to drive (async)
- Unload a tape from drive to slot (async)
- Database state synchronization
5. **MHVTL Integration**
- Automatic MHVTL config generation
- MHVTL monitor service (syncs every 5 minutes)
- Device path discovery
- Library ID management
### Configuration
- **Config Directory:** `/opt/calypso/conf/vtl/`
- **Config Files:**
- `mhvtl.conf` - MHVTL main config
- `device.conf` - Device configuration
- **Backing Store:** `/srv/calypso/vtl/` (per library)
- **MHVTL Config:** `/etc/mhvtl/` (monitored by Calypso)
### API Endpoints
```
GET /api/v1/tape/vtl/libraries
POST /api/v1/tape/vtl/libraries
GET /api/v1/tape/vtl/libraries/:id
DELETE /api/v1/tape/vtl/libraries/:id
GET /api/v1/tape/vtl/libraries/:id/drives
GET /api/v1/tape/vtl/libraries/:id/tapes
POST /api/v1/tape/vtl/libraries/:id/tapes
POST /api/v1/tape/vtl/libraries/:id/load
POST /api/v1/tape/vtl/libraries/:id/unload
```
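A minimal sketch of what the async load operation could look like around `mtx`; the changer device path is hypothetical, and the exact invocation in `tape_vtl/service.go` may differ:
```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// loadTape moves a tape from a storage slot into a drive using the
// standard mtx invocation: mtx -f <changer> load <slot> <drive>.
func loadTape(ctx context.Context, changerDev string, slot, drive int) error {
	cmd := exec.CommandContext(ctx, "mtx", "-f", changerDev,
		"load", fmt.Sprint(slot), fmt.Sprint(drive))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("mtx load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := loadTape(ctx, "/dev/sg3", 1, 0); err != nil {
		fmt.Println(err)
	}
}
```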
### Notes
- ✅ Very thorough implementation with MHVTL integration
- ✅ Automatic backing store directory creation
- ✅ MHVTL monitor service for state synchronization
- ✅ Async task support for load/unload operations
- ✅ Complete frontend UI with real-time updates
---
## 7. Bacula (Backup Software)
### Status: ✅ **FULLY IMPLEMENTED**
### Implementation Locations
- **Backend Service:** `backend/internal/backup/service.go`
- **Handler:** `backend/internal/backup/handler.go`
- **Database Integration:** Direct PostgreSQL connection to the Bacula database
- **Frontend:** `frontend/src/pages/Backup.tsx` (implied)
- **API Client:** `frontend/src/api/backup.ts`
### Implemented Features
1. **Job Management**
- List backup jobs with filters (status, type, client, name)
- Get job details
- Create jobs
- Pagination support
2. **Client Management**
- List Bacula clients
- Client status tracking
3. **Storage Management**
- List storage pools
- Create/delete storage pools
- List storage volumes
- Create/update/delete volumes
- List storage daemons
4. **Media Management**
- List media (tapes/volumes)
- Media status tracking
5. **Bconsole Integration**
- Execute bconsole commands
- Direct Bacula Director communication
6. **Dashboard Statistics**
- Job statistics
- Storage statistics
- System health metrics
### Configuration
- **Config Directory:** `/opt/calypso/conf/bacula/`
- **Config Files:**
- `bacula-dir.conf` - Director configuration
- `bacula-sd.conf` - Storage Daemon configuration
- `bacula-fd.conf` - File Daemon configuration
- `scripts/mtx-changer.conf` - Changer script config
- **Database:** PostgreSQL database `bacula` (default) or `bareos`
- **Services:**
- `bacula-director.service` - ✅ Running
- `bacula-sd.service` - ✅ Running
- `bacula-fd.service` - ✅ Running
### API Endpoints
```
GET /api/v1/backup/dashboard/stats
GET /api/v1/backup/jobs
GET /api/v1/backup/jobs/:id
POST /api/v1/backup/jobs
GET /api/v1/backup/clients
GET /api/v1/backup/storage/pools
POST /api/v1/backup/storage/pools
DELETE /api/v1/backup/storage/pools/:id
GET /api/v1/backup/storage/volumes
POST /api/v1/backup/storage/volumes
PUT /api/v1/backup/storage/volumes/:id
DELETE /api/v1/backup/storage/volumes/:id
GET /api/v1/backup/media
GET /api/v1/backup/storage/daemons
POST /api/v1/backup/console/execute
```
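A minimal sketch of the bconsole fallback path: pipe a console command to `bconsole`'s stdin and capture the output. The `bconsole.conf` path is an assumption based on this appliance's config layout:
```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runBconsole feeds one console command (plus "quit") to bconsole and
// returns everything it printed.
func runBconsole(ctx context.Context, command string) (string, error) {
	cmd := exec.CommandContext(ctx, "bconsole",
		"-c", "/opt/calypso/conf/bacula/bconsole.conf") // assumed path
	cmd.Stdin = strings.NewReader(command + "\nquit\n")
	var out bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &out
	err := cmd.Run()
	return out.String(), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	out, err := runBconsole(ctx, "status director")
	fmt.Println(out, err)
}
```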
### Notes
- ✅ Direct database connection for optimal performance
- ✅ Falls back to bconsole if the database is unavailable
- ✅ Support for both Bacula and Bareos
- ✅ Integration with Calypso storage (ZFS datasets)
- ✅ Comprehensive job and storage management
---
## Summary & Recommendations
### Component Status
| Component | Status | API Integration | UI Integration | Notes |
|-----------|--------|-----------------|----------------|-------|
| **ZFS** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **SCST** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **NFS** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **SMB** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **ClamAV** | ⚠️ Partial | ❌ None | ❌ None | Installed but not integrated |
| **MHVTL** | ✅ Complete | ✅ Full | ✅ Full | Production ready |
| **Bacula** | ✅ Complete | ✅ Full | ⚠️ Partial | API ready, UI may need enhancement |
### Priority Recommendations
1. **HIGH PRIORITY: ClamAV Integration**
- Implement a backend service for file scanning
- API endpoints for scan management
- UI for quarantine management
- On-access scanning for SMB shares
- Scheduled scans for NFS shares
2. **MEDIUM PRIORITY: Bacula UI Enhancement**
- Review and enhance the frontend for Bacula management
- Job scheduling UI
- Restore operations UI
3. **LOW PRIORITY: Monitoring & Alerts**
- Enhanced monitoring for all components
- Alert rules for ClamAV scans
- Performance metrics collection
### Configuration Directory Structure
```
/opt/calypso/
├── conf/
│   ├── bacula/    ✅ Configured
│   ├── clamav/    ✅ Configured (but not integrated)
│   ├── nfs/       ✅ Configured
│   ├── scst/      ✅ Configured
│   ├── vtl/       ✅ Configured
│   └── zfs/       ✅ Configured
└── data/
    ├── storage/   ✅ Created
    └── vtl/       ✅ Created
```
### Service Status
All core services are running properly:
- `zfs-zed.service` - Running
- `iscsi-scstd.service` - Running
- `nfs-server.service` - Running
- `smbd.service` - Running
- `clamav-daemon.service` - Running
- `clamav-freshclam.service` - Running
- `bacula-director.service` - Running
- `bacula-sd.service` - Running
- `bacula-fd.service` - Running
---
## Conclusion
The Calypso appliance has a very complete implementation of all its major components. Only ClamAV still needs API and UI integration; every other component is production-ready, with full features, solid error handling, and robust integrations.
**Overall Status: 95% Complete**


@@ -1,79 +0,0 @@
# Database Check Report
**Date:** 2025-01-09
**System:** Ubuntu 24.04 LTS
## PostgreSQL Check Results
### ✅ Database Users that EXIST:
1. **bacula** - User for the Bacula backup software
- Status: ✅ **EXISTS**
- Attributes: (no special attributes)
### ❌ Database Users that DO NOT EXIST:
1. **calypso** - User for the Calypso application
- Status: ❌ **DOES NOT EXIST**
- Expected: user for the Calypso API backend
### ✅ Databases that EXIST:
1. **bacula**
- Owner: `bacula`
- Encoding: SQL_ASCII
- Status: ✅ **EXISTS**
### ❌ Databases that DO NOT EXIST:
1. **calypso**
- Expected Owner: `calypso`
- Expected Encoding: UTF8
- Status: ❌ **DOES NOT EXIST**
---
## Summary
| Item | Status | Notes |
|------|--------|-------|
| User `bacula` | ✅ EXISTS | Ready for Bacula |
| Database `bacula` | ✅ EXISTS | Ready for Bacula |
| User `calypso` | ❌ **DOES NOT EXIST** | **MUST BE CREATED** |
| Database `calypso` | ❌ **DOES NOT EXIST** | **MUST BE CREATED** |
---
## Action Required
The Calypso application requires:
1. **User PostgreSQL:** `calypso`
2. **Database PostgreSQL:** `calypso`
### Steps to Create the Calypso Database:
```bash
# 1. Create user calypso
sudo -u postgres psql -c "CREATE USER calypso WITH PASSWORD 'your_secure_password';"
# 2. Create database calypso
sudo -u postgres psql -c "CREATE DATABASE calypso OWNER calypso;"
# 3. Grant privileges
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE calypso TO calypso;"
# 4. Verify
sudo -u postgres psql -c "\du" | grep calypso
sudo -u postgres psql -c "\l" | grep calypso
```
### Or use the installer script:
```bash
# Run the database installer script
cd /src/calypso/installer/alpha/scripts
sudo bash database.sh
```
---
## Notes
- The Bacula database is installed correctly ✅
- The Calypso database has not been created yet; most likely the installer has not been run, or something failed during installation
- Once the database is created, migrations will run automatically the first time the Calypso API starts


@@ -1,88 +0,0 @@
# Database Setup Complete
**Date:** 2025-01-09
**Status:** **SUCCESS**
## What Was Created
### ✅ PostgreSQL User: `calypso`
- Status: ✅ **CREATED**
- Password: `calypso_secure_2025` (stored in the script; must be changed for production)
### ✅ Database: `calypso`
- Owner: `calypso`
- Encoding: UTF8
- Status: ✅ **CREATED**
### ✅ Database Access: `bacula`
- User `calypso` has **READ ACCESS** to the `bacula` database
- Privileges:
- ✅ CONNECT on the `bacula` database
- ✅ USAGE on the `public` schema
- ✅ SELECT on all tables (32 tables)
- ✅ Default privileges for new tables
## Verification
### Existing Users:
```
bacula |
calypso |
```
### Existing Databases:
```
bacula | bacula | SQL_ASCII | ... | calypso=c/bacula
calypso | calypso | UTF8 | ... | calypso=CTc/calypso
```
### Access Test:
- ✅ User `calypso` can connect to the `calypso` database
- ✅ User `calypso` can connect to the `bacula` database
- ✅ User `calypso` can SELECT from tables in the `bacula` database (32 tables accessible)
## Configuration for the Calypso API
Update `/etc/calypso/config.yaml` or set environment variables:
```bash
export CALYPSO_DB_PASSWORD="calypso_secure_2025"
export CALYPSO_DB_USER="calypso"
export CALYPSO_DB_NAME="calypso"
```
Or in the config file:
```yaml
database:
  host: "localhost"
  port: 5432
  user: "calypso"
  password: "calypso_secure_2025" # Or via the CALYPSO_DB_PASSWORD env var
  database: "calypso"
  ssl_mode: "disable"
```
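A minimal sketch of how the backend could build its connection from these settings, assuming `lib/pq` (which the backend already uses elsewhere, e.g. `pq.Array`) and the password coming from the environment rather than being hard-coded:
```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/lib/pq" // PostgreSQL driver
)

func main() {
	dsn := fmt.Sprintf(
		"host=localhost port=5432 user=calypso password=%s dbname=calypso sslmode=disable",
		os.Getenv("CALYPSO_DB_PASSWORD"),
	)
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil { // verify the credentials actually work
		log.Fatal(err)
	}
	log.Println("connected to the calypso database")
}
```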
## Important Notes
⚠️ **Security Note:**
- The password `calypso_secure_2025` is a default password
- It **MUST be changed** for production environments
- Use a strong password generator
- Store the password in `/etc/calypso/secrets.env` or in environment variables
## Next Steps
1. ✅ The `calypso` database is ready for migrations
2. ✅ The Calypso API can connect to its own database
3. ✅ The Calypso API can read data from the Bacula database
4. ⏭️ Run the Calypso API to trigger auto-migration
5. ⏭️ Update the password to a production-grade one
## Bacula Database Access
User `calypso` can now:
- ✅ Read all tables in the `bacula` database
- ✅ Query job history, clients, storage pools, volumes, media
- ✅ Monitor backup operations
- **CANNOT** write/modify data in the `bacula` database (read-only access)
This matches the Calypso requirement: monitor and report on Bacula operations without being able to change Bacula's configuration.


@@ -1,121 +0,0 @@
# Dataset Mountpoint Validation
## Issue
The user requested validation that mount points for datasets and volumes must live inside the directory of the pool they belong to.
## Solution
Added validation to ensure that dataset mount points live inside the pool mount point directory (`/opt/calypso/data/pool/<pool-name>/`).
## Changes Made
### Updated `backend/internal/storage/zfs.go`
**File**: `backend/internal/storage/zfs.go` (line 728-814)
**Key Changes:**
1. **Mount Point Validation**
- Validate that a user-provided mount point lies inside the pool directory
- Use `filepath.Rel()` to ensure the mount point cannot escape the pool directory
2. **Default Mount Point**
- If no mount point is provided, default to `/opt/calypso/data/pool/<pool-name>/<dataset-name>/`
- Ensures every dataset mount point stays inside the pool directory
3. **Mount Point Always Set**
- For filesystem datasets, the mount point is always set (user-provided or default)
- No longer conditional on `req.MountPoint != ""`
**Before:**
```go
if req.Type == "filesystem" && req.MountPoint != "" {
mountPath := filepath.Clean(req.MountPoint)
// ... create directory ...
}
// Later:
if req.Type == "filesystem" && req.MountPoint != "" {
args = append(args, "-o", fmt.Sprintf("mountpoint=%s", req.MountPoint))
}
```
**After:**
```go
poolMountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", poolName)
var mountPath string
if req.Type == "filesystem" {
if req.MountPoint != "" {
// Validate mount point is within pool directory
mountPath = filepath.Clean(req.MountPoint)
// ... validation logic ...
} else {
// Use default mount point
mountPath = filepath.Join(poolMountPoint, req.Name)
}
// ... create directory ...
}
// Later:
if req.Type == "filesystem" {
args = append(args, "-o", fmt.Sprintf("mountpoint=%s", mountPath))
}
```
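A minimal sketch of the validation logic elided above, using `filepath.Rel()` as described; the function and variable names are illustrative:
```go
package storage

import (
	"fmt"
	"path/filepath"
	"strings"
)

// validateMountPoint returns the cleaned mount path if it resolves to a
// location inside the pool directory; an empty request falls back to the
// default <pool-mount>/<dataset-name> location.
func validateMountPoint(poolMountPoint, datasetName, requested string) (string, error) {
	if requested == "" {
		return filepath.Join(poolMountPoint, datasetName), nil
	}
	mountPath := filepath.Clean(requested)
	rel, err := filepath.Rel(poolMountPoint, mountPath)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("mount point must be within pool directory: %s (pool mount: %s)",
			mountPath, poolMountPoint)
	}
	return mountPath, nil
}
```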
## Mount Point Structure
### Pool Mount Point
```
/opt/calypso/data/pool/<pool-name>/
```
### Dataset Mount Point (Default)
```
/opt/calypso/data/pool/<pool-name>/<dataset-name>/
```
### Dataset Mount Point (Custom - must be within pool)
```
/opt/calypso/data/pool/<pool-name>/<custom-path>/
```
## Validation Rules
1. **User-provided mount point**:
- Must be within `/opt/calypso/data/pool/<pool-name>/`
- Cannot use `..` to escape pool directory
- Must be a valid directory path
2. **Default mount point**:
- Automatically set to `/opt/calypso/data/pool/<pool-name>/<dataset-name>/`
- Always within pool directory
3. **Volumes**:
- Volumes cannot have mount points (already validated in handler)
## Error Messages
- `mount point must be within pool directory: <path> (pool mount: <pool-mount>)` - if the mount point is outside the pool directory
- `mount point path exists but is not a directory: <path>` - if the path exists but is not a directory
- `failed to create mount directory <path>` - if directory creation fails
## Testing
1. **Create dataset without mount point**:
- Should use default: `/opt/calypso/data/pool/<pool-name>/<dataset-name>/`
2. **Create dataset with valid mount point**:
- Mount point: `/opt/calypso/data/pool/<pool-name>/custom-path/`
- Should succeed
3. **Create dataset with invalid mount point**:
- Mount point: `/opt/calypso/data/other-path/`
- Should fail with validation error
4. **Create volume**:
- Should not set mount point (volumes don't have mount points)
## Status
**COMPLETED** - Mount point validation for datasets is in place
## Date
2026-01-09


@@ -1,103 +0,0 @@
# Default User Credentials for the Calypso Appliance
**Date:** 2025-01-09
**Status:** **READY**
## 🔐 Default Admin User
### Credentials
- **Username:** `admin`
- **Password:** `admin123`
- **Email:** `admin@calypso.local`
- **Role:** `admin` (Full system access)
## 📋 User Information
- **Full Name:** Administrator
- **Status:** Active
- **Permissions:** All permissions (admin role)
- **Access Level:** Full system access and configuration
## 🚀 How to Log In
### Via Frontend Portal
1. Open a browser and go to: **http://localhost/** or **http://10.10.14.18/**
2. Go to the login page (you will be redirected automatically if not logged in)
3. Enter the credentials:
- **Username:** `admin`
- **Password:** `admin123`
4. Click "Sign In"
### Via API
```bash
curl -X POST http://localhost/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin123"}'
```
## ⚠️ Security Notes
### For Development/Testing
- ✅ The password `admin123` may be used
- ✅ The user has been created with the admin role
- ✅ The password is hashed with Argon2id (secure); see the sketch after this list
### For Production
- ⚠️ You **MUST** change the default password after first login
- ⚠️ Use a strong password (at least 12 characters, mixing letters, digits, and symbols)
- ⚠️ Consider disabling the default user and creating a new one
- ⚠️ Enable 2FA if available
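For illustration, a minimal sketch of how an Argon2id hash can be produced with `golang.org/x/crypto/argon2`; the cost parameters below are common defaults and an assumption, not necessarily what the Calypso backend uses:
```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/argon2"
)

// hashPassword derives an Argon2id key from the password and a random
// salt, then encodes both in the conventional $argon2id$ format.
func hashPassword(password string) (string, error) {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return "", err
	}
	// time=1, memory=64 MiB, threads=4, 32-byte key (assumed parameters)
	key := argon2.IDKey([]byte(password), salt, 1, 64*1024, 4, 32)
	return fmt.Sprintf("$argon2id$v=19$m=65536,t=1,p=4$%s$%s",
		base64.RawStdEncoding.EncodeToString(salt),
		base64.RawStdEncoding.EncodeToString(key)), nil
}

func main() {
	h, _ := hashPassword("admin123")
	fmt.Println(h)
}
```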
## 🔧 Creating/Updating the Admin User
### If the User Does Not Exist Yet
```bash
cd /src/calypso
bash scripts/setup-test-user.sh
```
This script will:
- Create user `admin` with password `admin123`
- Assign the `admin` role
- Set the email to `admin@calypso.local`
### Updating the Password (if needed)
```bash
cd /src/calypso
bash scripts/update-admin-password.sh
```
## ✅ Verifying the User
### Check the User in the Database
```bash
sudo -u postgres psql -d calypso -c "SELECT username, email, is_active FROM users WHERE username = 'admin';"
```
### Check the Role Assignment
```bash
sudo -u postgres psql -d calypso -c "SELECT u.username, r.name as role FROM users u JOIN user_roles ur ON u.id = ur.user_id JOIN roles r ON ur.role_id = r.id WHERE u.username = 'admin';"
```
### Test Login
```bash
curl -X POST http://localhost/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin123"}' | jq .
```
## 📝 Summary
**Default Credentials:**
- Username: `admin`
- Password: `admin123`
- Role: `admin` (Full access)
**Access URLs:**
- Frontend: http://localhost/ or http://10.10.14.18/
- API: http://localhost/api/v1/
**Status:** ✅ The user has been created and is ready to use
---
**⚠️ REMEMBER:** Change the default password for production environments!


@@ -1,225 +0,0 @@
# Frontend Access Setup Complete
**Date:** 2025-01-09
**Reverse Proxy:** Nginx
**Status:** **CONFIGURED & RUNNING**
## Configuration Summary
### Nginx Configuration
- **Config File:** `/etc/nginx/sites-available/calypso`
- **Enabled:** `/etc/nginx/sites-enabled/calypso`
- **Port:** 80 (HTTP)
- **Root Directory:** `/opt/calypso/web`
- **API Backend:** `http://localhost:8080`
### Service Status
- **Nginx:** Running
- **Calypso API:** Running on port 8080
- **Frontend Files:** Served from `/opt/calypso/web`
## Access URLs
### Local Access
- **Frontend:** http://localhost/
- **API:** http://localhost/api/v1/health
- **Login Page:** http://localhost/login
### Network Access
- **Frontend:** http://<server-ip>/
- **API:** http://<server-ip>/api/v1/health
## Nginx Configuration Details
### Static Files Serving
```nginx
root /opt/calypso/web;
index index.html;

location / {
    try_files $uri $uri/ /index.html;
}
```
### API Proxy
```nginx
location /api {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```
### WebSocket Support
```nginx
location /ws {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400s;
    proxy_send_timeout 86400s;
}
```
### Terminal WebSocket
```nginx
location /api/v1/system/terminal/ws {
    proxy_pass http://localhost:8080;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400s;
    proxy_send_timeout 86400s;
}
```
## Features Enabled
**Static File Serving**
- Frontend files served from `/opt/calypso/web`
- SPA routing support (try_files fallback to index.html)
- Static asset caching (1 year)
**API Proxy**
- All `/api/*` requests proxied to backend
- Proper headers forwarding
- Timeout configuration
**WebSocket Support**
- `/ws` endpoint for monitoring events
- `/api/v1/system/terminal/ws` for terminal console
- Long timeout for persistent connections
**Security Headers**
- X-Frame-Options: SAMEORIGIN
- X-Content-Type-Options: nosniff
- X-XSS-Protection: 1; mode=block
**Performance**
- Gzip compression enabled
- Static asset caching
- Optimized timeouts
## Service Management
### Nginx Commands
```bash
# Start/Stop/Restart
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
# Reload configuration (without downtime)
sudo systemctl reload nginx
# Check status
sudo systemctl status nginx
# Test configuration
sudo nginx -t
```
### View Logs
```bash
# Access logs
sudo tail -f /var/log/nginx/calypso-access.log
# Error logs
sudo tail -f /var/log/nginx/calypso-error.log
# All Nginx logs
sudo journalctl -u nginx -f
```
## Testing
### Test Frontend
```bash
# Check if frontend is accessible
curl http://localhost/
# Check if index.html is served
curl http://localhost/index.html
```
### Test API Proxy
```bash
# Health check
curl http://localhost/api/v1/health
# Should return JSON response
```
### Test WebSocket
```bash
# Test WebSocket connection (requires wscat or similar)
wscat -c ws://localhost/ws
```
## Troubleshooting
### Frontend Not Loading
1. Check Nginx status: `sudo systemctl status nginx`
2. Check Nginx config: `sudo nginx -t`
3. Check file permissions: `ls -la /opt/calypso/web/`
4. Check Nginx error logs: `sudo tail -f /var/log/nginx/calypso-error.log`
### API Calls Failing
1. Check backend is running: `sudo systemctl status calypso-api`
2. Test backend directly: `curl http://localhost:8080/api/v1/health`
3. Check Nginx proxy logs: `sudo tail -f /var/log/nginx/calypso-access.log`
### WebSocket Not Working
1. Check WebSocket headers in browser DevTools
2. Verify backend WebSocket endpoint is working
3. Check Nginx WebSocket configuration
4. Verify proxy_set_header Upgrade and Connection are set
### Permission Issues
1. Check file ownership: `ls -la /opt/calypso/web/`
2. Check Nginx user: `grep user /etc/nginx/nginx.conf`
3. Ensure files are readable: `sudo chmod -R 755 /opt/calypso/web`
## Firewall Configuration
If firewall is enabled, allow HTTP traffic:
```bash
# UFW
sudo ufw allow 80/tcp
sudo ufw allow 'Nginx Full'
# firewalld
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
```
## Next Steps
1. ✅ Frontend accessible via Nginx
2. ⏭️ Setup SSL/TLS (HTTPS) - Recommended for production
3. ⏭️ Configure domain name (if applicable)
4. ⏭️ Setup monitoring/alerting
5. ⏭️ Configure backup strategy
## SSL/TLS Setup (Optional)
For production, setup HTTPS:
```bash
# Install Certbot
sudo apt-get install certbot python3-certbot-nginx
# Get certificate (replace with your domain)
sudo certbot --nginx -d your-domain.com
# Auto-renewal is configured automatically
```
---
**Status:** **FRONTEND ACCESSIBLE**
**URL:** http://localhost/ (or http://<server-ip>/)
**API:** http://localhost/api/v1/health


@@ -1,236 +0,0 @@
# MinIO Installation Recommendation for Calypso Appliance
## Executive Summary
**Recommendation: Native Installation**
For the Calypso appliance, a **native installation** of MinIO is a better fit than Docker because of:
1. Consistency with the other components (all native)
2. Better performance (no container overhead)
3. Easier integration with ZFS and systemd
4. Alignment with the appliance philosophy (minimal dependencies)
---
## Calypso Architecture Analysis
### Components Already Installed (All Native)
| Component | Installation Method | Service Management |
|----------|-------------------|-------------------|
| **ZFS** | Native (kernel modules) | systemd (zfs-zed.service) |
| **SCST** | Native (kernel modules) | systemd (scst.service) |
| **NFS** | Native (nfs-kernel-server) | systemd (nfs-server.service) |
| **SMB** | Native (Samba) | systemd (smbd.service, nmbd.service) |
| **ClamAV** | Native (clamav-daemon) | systemd (clamav-daemon.service) |
| **MHVTL** | Native (kernel modules) | systemd (mhvtl.target) |
| **Bacula** | Native (bacula packages) | systemd (bacula-*.service) |
| **PostgreSQL** | Native (postgresql-16) | systemd (postgresql.service) |
| **Calypso API** | Native (Go binary) | systemd (calypso-api.service) |
**Conclusion:** All components use native installation and are managed through systemd.
---
## Comparison: Native vs Docker
### Native Installation ✅ **RECOMMENDED**
**Pros:**
- ✅ **Consistency**: every other component is native, so MinIO is native too
- ✅ **Performance**: no container overhead, direct access to ZFS
- ✅ **Integration**: easier to use ZFS datasets as the storage backend
- ✅ **Monitoring**: logs go straight to journald, metrics are easy to reach
- ✅ **Resources**: more efficient (no Docker daemon required)
- ✅ **Security**: fits the appliance security model (systemd security hardening)
- ✅ **Management**: managed through systemd like every other component
- ✅ **Dependencies**: the MinIO binary is standalone, no Docker runtime required
**Cons:**
- ⚠️ Updates: require downloading a new binary and restarting the service
- ⚠️ Dependencies: the MinIO binary must be managed by hand
**Mitigation:**
- Updates can be automated with a script
- The MinIO binary can live in `/opt/calypso/bin/` like the other components
### Docker Installation ❌ **NOT RECOMMENDED**
**Pros:**
- ✅ Isolasi yang lebih baik
- ✅ Update lebih mudah (pull image baru)
- ✅ Tidak perlu manage dependencies
**Cons:**
-**Inkonsistensi**: Semua komponen lain native, Docker akan jadi exception
-**Overhead**: Docker daemon memakan resource (~50-100MB RAM)
-**Kompleksitas**: Tambahan layer management (Docker + systemd)
-**Integrasi**: Lebih sulit integrasi dengan ZFS (perlu volume mapping)
-**Performance**: Overhead container, terutama untuk I/O intensive workload
-**Security**: Tambahan attack surface (Docker daemon)
-**Monitoring**: Logs perlu di-forward dari container ke journald
-**Dependencies**: Perlu install Docker (tidak sesuai filosofi minimal dependencies)
---
## Implementation Recommendation
### Native Installation Setup
#### 1. Binary Location
```
/opt/calypso/bin/minio
```
#### 2. Configuration Location
```
/opt/calypso/conf/minio/
├── config.json
└── minio.env
```
#### 3. Data Location (ZFS Dataset)
```
/opt/calypso/data/pool/<pool-name>/object/
```
#### 4. Systemd Service
```ini
[Unit]
Description=MinIO Object Storage
After=network.target zfs.target
Wants=zfs.target
[Service]
Type=simple
User=calypso
Group=calypso
WorkingDirectory=/opt/calypso
ExecStart=/opt/calypso/bin/minio server /opt/calypso/data/pool/%i/object --config-dir /opt/calypso/conf/minio
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=minio
# Security
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/calypso/data /opt/calypso/conf/minio /var/log/calypso
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
```
#### 5. Integration with ZFS
- The MinIO storage backend uses a ZFS dataset
- The dataset is created in an existing pool
- Mount point: `/opt/calypso/data/pool/<pool-name>/object/`
- Leverages ZFS features: compression, snapshots, replication
---
## Recommended Architecture
```
┌─────────────────────────────────────┐
│ Calypso Appliance │
├─────────────────────────────────────┤
│ │
│ ┌──────────────────────────────┐ │
│ │ Calypso API (Go) │ │
│ │ Port: 8080 │ │
│ └───────────┬──────────────────┘ │
│ │ │
│ ┌───────────▼──────────────────┐ │
│ │ MinIO (Native Binary) │ │
│ │ Port: 9000, 9001 │ │
│ │ Storage: ZFS Dataset │ │
│ └───────────┬──────────────────┘ │
│ │ │
│ ┌───────────▼──────────────────┐ │
│ │ ZFS Pool │ │
│ │ Dataset: object/ │ │
│ └──────────────────────────────┘ │
│ │
└─────────────────────────────────────┘
```
---
## Installation Steps (Native)
### 1. Download MinIO Binary
```bash
# Download latest MinIO binary
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
sudo mv minio /opt/calypso/bin/
sudo chown calypso:calypso /opt/calypso/bin/minio
```
### 2. Create ZFS Dataset for Object Storage
```bash
# Create dataset in existing pool
sudo zfs create <pool-name>/object
sudo zfs set mountpoint=/opt/calypso/data/pool/<pool-name>/object <pool-name>/object
sudo chown -R calypso:calypso /opt/calypso/data/pool/<pool-name>/object
```
### 3. Create Configuration Directory
```bash
sudo mkdir -p /opt/calypso/conf/minio
sudo chown calypso:calypso /opt/calypso/conf/minio
```
### 4. Create Systemd Service
```bash
sudo cp /src/calypso/deploy/systemd/minio.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable minio.service
sudo systemctl start minio.service
```
### 5. Integration with the Calypso API
- The backend API manages MinIO through the MinIO Admin API or the Go SDK (see the sketch below)
- Configuration is stored in the Calypso database
- UI to manage buckets, policies, and users
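A minimal sketch of the Go SDK path, assuming `github.com/minio/minio-go/v7`; the credentials are placeholders that would come from `/opt/calypso/conf/minio/minio.env` in a real deployment:
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("CHANGE_ME_ACCESS", "CHANGE_ME_SECRET", ""),
		Secure: false, // plain HTTP inside the appliance
	})
	if err != nil {
		log.Fatal(err)
	}
	buckets, err := client.ListBuckets(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range buckets {
		fmt.Println(b.Name, b.CreationDate)
	}
}
```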
---
## Conclusion
A **native installation** is the best choice for the Calypso appliance because of:
1. **Consistency**: every other component is native
2. **Performance**: optimal for I/O-intensive workloads
3. **Integration**: seamless with ZFS and systemd
4. **Philosophy**: fits "appliance-first" and "minimal dependencies"
5. **Management**: unified management through systemd
6. **Security**: fits the appliance security model
A **Docker installation** is not recommended because it:
- ❌ Adds complexity without significant benefit
- ❌ Is inconsistent with the existing architecture
- ❌ Introduces unnecessary overhead for an appliance
---
## Next Steps
1. ✅ Implement the native MinIO installation
2. ✅ Create the systemd service file
3. ✅ Integrate with a ZFS dataset
4. ✅ Backend API integration
5. ✅ Frontend UI for MinIO management
---
## Date
2026-01-09


@@ -1,193 +0,0 @@
# MinIO Integration Complete
**Date:** 2026-01-09
**Status:** **COMPLETE**
## Summary
MinIO integration with the Calypso appliance is complete. The frontend Object Storage page now uses real data from the MinIO service instead of dummy data.
---
## Changes Made
### 1. Backend Integration ✅
#### Created MinIO Service (`backend/internal/object_storage/service.go`)
- **Service**: uses the MinIO Go SDK to interact with the MinIO server
- **Features**:
- List buckets with detailed information (size, objects, access policy)
- Get bucket statistics
- Create bucket
- Delete bucket
- Get bucket access policy
#### Created MinIO Handler (`backend/internal/object_storage/handler.go`)
- **Handler**: HTTP handlers for the API endpoints
- **Endpoints**:
- `GET /api/v1/object-storage/buckets` - List all buckets
- `GET /api/v1/object-storage/buckets/:name` - Get bucket info
- `POST /api/v1/object-storage/buckets` - Create bucket
- `DELETE /api/v1/object-storage/buckets/:name` - Delete bucket
#### Updated Configuration (`backend/internal/common/config/config.go`)
- Added the `ObjectStorageConfig` struct for the MinIO configuration
- Fields:
- `endpoint`: MinIO server endpoint (default: `localhost:9000`)
- `access_key`: MinIO access key
- `secret_key`: MinIO secret key
- `use_ssl`: Whether to use SSL/TLS
#### Updated Router (`backend/internal/common/router/router.go`)
- Added object storage routes group
- Routes protected with the `storage:read` and `storage:write` permissions
- Service initialization with error handling
### 2. Configuration ✅
#### Updated `/opt/calypso/conf/config.yaml`
```yaml
# Object Storage (MinIO) Configuration
object_storage:
  endpoint: "localhost:9000"
  access_key: "admin"
  secret_key: "HqBX1IINqFynkWFa"
  use_ssl: false
```
### 3. Frontend Integration ✅
#### Created API Client (`frontend/src/api/objectStorage.ts`)
- **API Client**: TypeScript client for the object storage API
- **Interfaces**:
- `Bucket`: Bucket data structure
- **Methods**:
- `listBuckets()`: Fetch all buckets
- `getBucket(name)`: Get bucket details
- `createBucket(name)`: Create new bucket
- `deleteBucket(name)`: Delete bucket
#### Updated ObjectStorage Page (`frontend/src/pages/ObjectStorage.tsx`)
- **Removed**: Mock data (`MOCK_BUCKETS`)
- **Added**: Real API integration with React Query
- **Features**:
- Fetch buckets from the API with auto-refresh every 5 seconds
- Transform API data into the UI format
- Loading state for buckets
- Empty state when there are no buckets
- Mutations for create/delete bucket
- Error handling with alerts
### 4. Dependencies ✅
#### Added Go Packages
- `github.com/minio/minio-go/v7` - MinIO Go SDK
- `github.com/minio/madmin-go/v3` - MinIO Admin API
---
## API Endpoints
### List Buckets
```http
GET /api/v1/object-storage/buckets
Authorization: Bearer <token>
```
**Response:**
```json
{
  "buckets": [
    {
      "name": "my-bucket",
      "creation_date": "2026-01-09T20:13:27Z",
      "size": 1024000,
      "objects": 42,
      "access_policy": "private"
    }
  ]
}
```
### Get Bucket
```http
GET /api/v1/object-storage/buckets/:name
Authorization: Bearer <token>
```
### Create Bucket
```http
POST /api/v1/object-storage/buckets
Authorization: Bearer <token>
Content-Type: application/json
{
  "name": "new-bucket"
}
```
### Delete Bucket
```http
DELETE /api/v1/object-storage/buckets/:name
Authorization: Bearer <token>
```
---
## Testing
### Backend Test
```bash
# Test API endpoint
curl -H "Authorization: Bearer <token>" http://localhost:8080/api/v1/object-storage/buckets
```
### Frontend Test
1. Log in to the Calypso UI
2. Navigate to the "Object Storage" page
3. Verify that the buckets from MinIO appear in the UI
4. Test creating a bucket (if the button exists)
5. Test deleting a bucket (if the button exists)
---
## MinIO Service Status
**Service:** `minio.service`
**Status:** ✅ Running
**Endpoint:** `http://localhost:9000` (API), `http://localhost:9001` (Console)
**Storage:** `/opt/calypso/data/storage/s3`
**Credentials:**
- Access Key: `admin`
- Secret Key: `HqBX1IINqFynkWFa`
---
## Next Steps (Optional)
1. **Add Create/Delete Bucket UI**: add a modal/form to create/delete buckets from the UI
2. **Bucket Policies Management**: UI to manage bucket access policies
3. **Object Management**: UI to browse and manage the objects in a bucket
4. **Bucket Quotas**: implement quota management for buckets
5. **Bucket Lifecycle**: implement lifecycle policies for buckets
6. **S3 Users & Keys**: management of S3 access keys (MinIO users)
---
## Files Modified
### Backend
- `/src/calypso/backend/internal/object_storage/service.go` (NEW)
- `/src/calypso/backend/internal/object_storage/handler.go` (NEW)
- `/src/calypso/backend/internal/common/config/config.go` (MODIFIED)
- `/src/calypso/backend/internal/common/router/router.go` (MODIFIED)
- `/opt/calypso/conf/config.yaml` (MODIFIED)
### Frontend
- `/src/calypso/frontend/src/api/objectStorage.ts` (NEW)
- `/src/calypso/frontend/src/pages/ObjectStorage.tsx` (MODIFIED)
---
## Date
2026-01-09


@@ -1,55 +0,0 @@
# Password Update Complete
**Date:** 2025-01-09
**User:** PostgreSQL `calypso`
**Status:** **UPDATED**
## Update Summary
The password of the PostgreSQL user `calypso` has been updated to match the password stored in `/etc/calypso/secrets.env`.
### Action Performed
```sql
ALTER USER calypso WITH PASSWORD '<password_from_secrets.env>';
```
### Verification
**Password Updated:** Successfully executed `ALTER ROLE`
**Connection Test:** User `calypso` can connect to the `calypso` database
**Bacula Access:** User `calypso` can still access the `bacula` database (32 tables accessible)
### Test Results
1. **Database Connection Test:**
```bash
psql -h localhost -U calypso -d calypso
```
✅ **SUCCESS** - Connection established
2. **Bacula Database Access Test:**
```bash
psql -h localhost -U calypso -d bacula
```
✅ **SUCCESS** - 32 tables accessible
## Current Configuration
- **User:** `calypso`
- **Password Source:** `/etc/calypso/secrets.env` (CALYPSO_DB_PASSWORD)
- **Database Access:**
- ✅ Full access to `calypso` database
- ✅ Read-only access to `bacula` database
## Next Steps
1. ✅ The password is in sync with secrets.env
2. ✅ The Calypso API will automatically use the password from secrets.env
3. ⏭️ Test the Calypso API connection to make sure everything works
## Important Notes
- The password is now in sync with `/etc/calypso/secrets.env`
- The Calypso API service automatically loads the password from that file
- There is no longer any need to set the environment variable manually
- The password in secrets.env is the source of truth


@@ -1,135 +0,0 @@
# Permissions Fix Complete
**Date:** 2025-01-09
**Status:** **FIXED**
## Problem
User `calypso` did not have permission to:
- Access raw disk devices (`/dev/sd*`)
- Run ZFS commands (`zpool`, `zfs`)
- Create ZFS pools
The errors seen:
```
failed to create ZFS pool: cannot open '/dev/sdb': Permission denied
cannot create 'default': permission denied
```
## Solution Implemented
### 1. Group Membership ✅
User `calypso` was added to the groups:
- `disk` - Access to disk devices (`/dev/sd*`)
- `tape` - Access to tape devices
```bash
sudo usermod -aG disk,tape calypso
```
### 2. Sudoers Configuration ✅
The file `/etc/sudoers.d/calypso` was created with these permissions:
```sudoers
# ZFS Commands
calypso ALL=(ALL) NOPASSWD: /usr/sbin/zpool, /usr/sbin/zfs, /usr/bin/zpool, /usr/bin/zfs
# SCST Commands
calypso ALL=(ALL) NOPASSWD: /usr/sbin/scstadmin, /usr/bin/scstadmin
# Tape Utilities
calypso ALL=(ALL) NOPASSWD: /usr/bin/mtx, /usr/bin/mt, /usr/bin/sg_*, /usr/bin/sg3_utils/*
# System Monitoring
calypso ALL=(ALL) NOPASSWD: /usr/bin/systemctl status *, /usr/bin/systemctl is-active *, /usr/bin/journalctl -u *
```
### 3. Backend Code Updates ✅
**Helper Functions Added:**
```go
// zfsCommand executes a ZFS command with sudo
func zfsCommand(ctx context.Context, args ...string) *exec.Cmd {
return exec.CommandContext(ctx, "sudo", append([]string{"zfs"}, args...)...)
}
// zpoolCommand executes a ZPOOL command with sudo
func zpoolCommand(ctx context.Context, args ...string) *exec.Cmd {
return exec.CommandContext(ctx, "sudo", append([]string{"zpool"}, args...)...)
}
```
**All ZFS/ZPOOL Commands Updated:**
- `zpool create` → `zpoolCommand(ctx, "create", ...)`
- `zpool destroy` → `zpoolCommand(ctx, "destroy", ...)`
- `zpool list` → `zpoolCommand(ctx, "list", ...)`
- `zpool status` → `zpoolCommand(ctx, "status", ...)`
- `zfs create` → `zfsCommand(ctx, "create", ...)`
- `zfs destroy` → `zfsCommand(ctx, "destroy", ...)`
- `zfs set` → `zfsCommand(ctx, "set", ...)`
- `zfs get` → `zfsCommand(ctx, "get", ...)`
- `zfs list` → `zfsCommand(ctx, "list", ...)`
**Files Updated:**
- `backend/internal/storage/zfs.go` - All ZFS/ZPOOL commands
- `backend/internal/storage/zfs_pool_monitor.go` - Monitor commands
- `backend/internal/storage/disk.go` - Disk discovery commands
- `backend/internal/scst/service.go` - Already using sudo ✅
### 4. Service Restart ✅
The Calypso API service was restarted with the new binary:
- ✅ Binary rebuilt with sudo support
- ✅ Service restarted
- ✅ Running successfully
## Verification
### Test ZFS Commands
```bash
# Test zpool list (should work)
sudo -u calypso sudo zpool list
# Output: no pools available (success - no error)
# Test zpool create/destroy (should work)
sudo -u calypso sudo zpool create -f test_pool /dev/sdb
sudo -u calypso sudo zpool destroy -f test_pool
# Should complete without permission errors
```
### Test Device Access
```bash
# Test device access (should work with disk group)
sudo -u calypso ls -la /dev/sdb
# Should show device (not permission denied)
```
## Current Status
**Groups:** User calypso in `disk` and `tape` groups
**Sudoers:** Configured and validated
**Backend Code:** All ZFS commands use sudo
**SCST:** Already using sudo (no changes needed)
**Service:** Restarted with new binary
**Permissions:** Fixed
## Next Steps
1. ✅ Permissions configured
2. ✅ Code updated
3. ✅ Service restarted
4. ⏭️ **Test ZFS pool creation via frontend**
## Testing
Users can now test creating a ZFS pool via the frontend:
1. Log in to the portal: http://localhost/ or http://10.10.14.18/
2. Navigate to Storage → ZFS Pools
3. Create a new pool with the available disks
4. It should work without permission errors
---
**Status:** **PERMISSIONS FIXED**
**Ready for:** ZFS pool creation via frontend


@@ -1,82 +0,0 @@
# Permissions Fix Summary
**Date:** 2025-01-09
**Status:** **FIXED & VERIFIED**
## Problem Solved
User `calypso` now has sufficient permissions to:
- ✅ Access raw disk devices (`/dev/sd*`)
- ✅ Run ZFS commands (`zpool`, `zfs`)
- ✅ Create and delete ZFS pools
- ✅ Access tape devices
- ✅ Run SCST commands
## Changes Made
### 1. System Groups ✅
```bash
sudo usermod -aG disk,tape calypso
```
### 2. Sudoers Configuration ✅
File: `/etc/sudoers.d/calypso`
- ZFS commands: `zpool`, `zfs`
- SCST commands: `scstadmin`
- Tape utilities: `mtx`, `mt`, `sg_*`
- System monitoring: `systemctl`, `journalctl`
### 3. Backend Code Updates ✅
- Added helper functions: `zfsCommand()`, `zpoolCommand()`
- All ZFS/ZPOOL commands now use `sudo`
- Updated files:
- `backend/internal/storage/zfs.go`
- `backend/internal/storage/zfs_pool_monitor.go`
- `backend/internal/storage/disk.go`
- `backend/internal/scst/service.go` (already had sudo)
### 4. Service Restart ✅
- Binary rebuilt with sudo support
- Service restarted successfully
## Verification
### Test Results
```bash
# ZFS commands work
sudo -u calypso sudo zpool list
# Output: no pools available (success)
# ZFS pool create/destroy works
sudo -u calypso sudo zpool create -f test_pool /dev/sdb
sudo -u calypso sudo zpool destroy -f test_pool
# Success: No permission errors
```
### Device Access
```bash
# Device access works
sudo -u calypso ls -la /dev/sdb
# Shows device (not permission denied)
```
## Current Status
**Groups:** calypso in `disk` and `tape` groups
**Sudoers:** Configured and validated
**Backend Code:** All privileged commands use sudo
**Service:** Running with new binary
**Permissions:** Fixed and verified
## Next Steps
1. ✅ Permissions fixed
2. ✅ Code updated
3. ✅ Service restarted
4. ✅ Verified working
5. ⏭️ **Test ZFS pool creation via frontend**
Users can now create ZFS pools via the frontend without permission errors!
---
**Status:** **READY FOR TESTING**


@@ -1,117 +0,0 @@
# Calypso User Permissions Setup
**Date:** 2025-01-09
**User:** `calypso`
**Status:** **CONFIGURED**
## Problem
User `calypso` did not have sufficient permissions to:
- Access raw disk devices (`/dev/sd*`)
- Run ZFS commands (`zpool`, `zfs`)
- Access tape devices
- Run SCST commands
## Solution
### 1. Group Membership
User `calypso` has been added to the following groups:
- `disk` - Access to disk devices
- `tape` - Access to tape devices
- `storage` - Storage-related permissions
```bash
sudo usermod -aG disk,tape,storage calypso
```
### 2. Sudoers Configuration
The file `/etc/sudoers.d/calypso` has been created with the following permissions:
#### ZFS Commands
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/sbin/zpool, /usr/sbin/zfs, /usr/bin/zpool, /usr/bin/zfs
```
#### SCST Commands
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/sbin/scstadmin, /usr/bin/scstadmin
```
#### Tape Utilities
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/bin/mtx, /usr/bin/mt, /usr/bin/sg_*, /usr/bin/sg3_utils/*
```
#### System Monitoring
```sudoers
calypso ALL=(ALL) NOPASSWD: /usr/bin/systemctl status *, /usr/bin/systemctl is-active *, /usr/bin/journalctl -u *
```
## Verification
### Check Group Membership
```bash
groups calypso
# Output should include: disk tape storage
```
### Check Sudoers File
```bash
sudo visudo -c -f /etc/sudoers.d/calypso
# Should return: /etc/sudoers.d/calypso: parsed OK
```
### Test ZFS Access
```bash
sudo -u calypso zpool list
# Should work without errors
```
### Test Device Access
```bash
sudo -u calypso ls -la /dev/sdb
# Should show device permissions
```
## Backend Code Changes Needed
The backend code needs to use `sudo` for ZFS commands. For example:
```go
// Before (will fail with permission denied)
cmd := exec.CommandContext(ctx, "zpool", "create", ...)
// After (with sudo)
cmd := exec.CommandContext(ctx, "sudo", "zpool", "create", ...)
```
## Current Status
**Groups:** User calypso added to disk, tape, storage groups
**Sudoers:** Configuration file created and validated
**Permissions:** File permissions set to 0440 (secure)
⏭️ **Code Update:** Backend code needs to use `sudo` for privileged commands
## Next Steps
1. ✅ Groups configured
2. ✅ Sudoers configured
3. ⏭️ Update backend code to use `sudo` for:
- ZFS operations (`zpool`, `zfs`)
- SCST operations (`scstadmin`)
- Tape operations (`mtx`, `mt`, `sg_*`)
4. ⏭️ Restart Calypso API service
5. ⏭️ Test ZFS pool creation via frontend
## Important Notes
- Sudoers file uses `NOPASSWD` for convenience (service account)
- Only specific commands are allowed (security best practice)
- File permissions are 0440 (read-only for root and group)
- Service restart required after permission changes
---
**Status:** **PERMISSIONS CONFIGURED**
**Action Required:** Update backend code to use `sudo` for privileged commands


@@ -1,79 +0,0 @@
# Pool Delete Mountpoint Cleanup
## Issue
Ketika pool dihapus, mount point directory tidak dihapus dari sistem. Mount point directory tetap ada di `/opt/calypso/data/pool/<pool-name>` meskipun pool sudah di-destroy.
## Root Cause
Fungsi `DeletePool` tidak melakukan cleanup untuk mount point directory setelah pool di-destroy.
## Solution
Added code to remove the mount point directory after the pool is destroyed.
## Changes Made
### Updated `backend/internal/storage/zfs.go`
**File**: `backend/internal/storage/zfs.go` (line 518-562)
Added cleanup of the mount point directory after the pool is destroyed:
**Before:**
```go
// Mark disks as unused
for _, diskPath := range pool.Disks {
// ...
}
// Delete from database
_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_pools WHERE id = $1", poolID)
// ...
```
**After:**
```go
// Remove mount point directory (default: /opt/calypso/data/pool/<pool-name>)
mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", pool.Name)
if err := os.RemoveAll(mountPoint); err != nil {
s.logger.Warn("Failed to remove mount point directory", "mountpoint", mountPoint, "error", err)
// Don't fail pool deletion if mount point removal fails
} else {
s.logger.Info("Removed mount point directory", "mountpoint", mountPoint)
}
// Mark disks as unused
for _, diskPath := range pool.Disks {
// ...
}
// Delete from database
_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_pools WHERE id = $1", poolID)
// ...
```
## Mount Point Location
The default mount point for all pools is:
```
/opt/calypso/data/pool/<pool-name>/
```
## Behavior
1. The pool is destroyed at the ZFS level
2. The mount point directory is removed with `os.RemoveAll()`
3. The disks are marked as unused in the database
4. The pool is deleted from the database
## Error Handling
- If mount point removal fails, only a warning is logged
- Pool deletion still succeeds even if mount point removal fails
- This ensures pool deletion never fails solely because of mount point cleanup
## Testing
1. Create a pool named "test-pool"
2. Verify the mount point directory is created: `/opt/calypso/data/pool/test-pool/`
3. Delete the pool
4. Verify the mount point directory is removed: `ls /opt/calypso/data/pool/test-pool` should fail
## Status
**FIXED** - The mount point directory is now removed when the pool is deleted
## Date
2026-01-09

View File

@@ -1,64 +0,0 @@
# Pool Refresh Fix
## Issue
The UI did not update after clicking the "Refresh Pools" button, even though the pool existed in both the database and the system.
## Root Cause
The problem was in the backend: the `created_by` column in the database can be null, but the corresponding field in the `ZFSPool` struct is a plain `string` (not a pointer or `sql.NullString`). When `created_by` was null, the row scan failed and the pool was skipped.
## Solution
Scan `created_by` into a `sql.NullString`, then assign it to the string field only when it is valid.
## Changes Made
### Updated `backend/internal/storage/zfs.go`
**File**: `backend/internal/storage/zfs.go` (line 425-442)
**Before:**
```go
var pool ZFSPool
var description sql.NullString
err := rows.Scan(
&pool.ID, &pool.Name, &description, &pool.RaidLevel, pq.Array(&pool.Disks),
&pool.SizeBytes, &pool.UsedBytes, &pool.Compression, &pool.Deduplication,
&pool.AutoExpand, &pool.ScrubInterval, &pool.IsActive, &pool.HealthStatus,
&pool.CreatedAt, &pool.UpdatedAt, &pool.CreatedBy, // Direct scan to string
)
```
**After:**
```go
var pool ZFSPool
var description sql.NullString
var createdBy sql.NullString
err := rows.Scan(
&pool.ID, &pool.Name, &description, &pool.RaidLevel, pq.Array(&pool.Disks),
&pool.SizeBytes, &pool.UsedBytes, &pool.Compression, &pool.Deduplication,
&pool.AutoExpand, &pool.ScrubInterval, &pool.IsActive, &pool.HealthStatus,
&pool.CreatedAt, &pool.UpdatedAt, &createdBy, // Scan to NullString
)
if err != nil {
s.logger.Error("Failed to scan pool row", "error", err, "error_type", fmt.Sprintf("%T", err))
continue
}
if createdBy.Valid {
pool.CreatedBy = createdBy.String
}
```
## Testing
1. The pool exists in the database: `default-pool`
2. The pool exists in the ZFS system: `zpool list` shows `default-pool`
3. The API now returns the pool correctly
4. The frontend has been deployed
## Status
**FIXED** - The backend now returns pools correctly
## Next Steps
- Refresh the browser to see the changes
- Click the "Refresh Pools" button to refresh manually
- The pool should now appear in the UI
## Date
2026-01-09

View File

@@ -1,72 +0,0 @@
# Rebuild and Restart Script
## Overview
A script that rebuilds and restarts the Calypso API and frontend services automatically.
## File
`/src/calypso/rebuild-and-restart.sh`
## Usage
### Basic Usage
```bash
cd /src/calypso
./rebuild-and-restart.sh
```
### With sudo (if required)
```bash
sudo /src/calypso/rebuild-and-restart.sh
```
## What It Does
### 1. Rebuild Backend
- Builds the Go binary from `backend/cmd/calypso-api`
- Outputs to `/opt/calypso/bin/calypso-api`
- Sets permissions and ownership to `calypso:calypso`
### 2. Rebuild Frontend
- Installs dependencies (if needed)
- Builds the frontend with `npm run build`
- Outputs to `frontend/dist/`
### 3. Deploy Frontend
- Copies files from `frontend/dist/` to `/opt/calypso/web/`
- Sets ownership to `www-data:www-data`
### 4. Restart Services
- Restarts `calypso-api.service`
- Reloads Nginx (if available)
- Checks service status
## Features
- ✅ Color-coded output for readability
- ✅ Error handling with `set -e`
- ✅ Status checks after restart
- ✅ Informative progress messages
## Requirements
- Go installed (for the backend build)
- Node.js and npm installed (for the frontend build)
- sudo access (for service management)
- Calypso project at `/src/calypso`
## Troubleshooting
### Backend build fails
- Check Go installation: `go version`
- Check Go modules: `cd backend && go mod download`
### Frontend build fails
- Check Node.js: `node --version`
- Check npm: `npm --version`
- Install dependencies: `cd frontend && npm install`
### Service restart fails
- Check service exists: `systemctl list-units | grep calypso`
- Check service status: `sudo systemctl status calypso-api.service`
- Check logs: `sudo journalctl -u calypso-api.service -n 50`
## Date
2026-01-09

View File

@@ -1,78 +0,0 @@
# Refresh Pools Button
## Issue
The UI does not update automatically after creating or destroying a pool. Users asked for a refresh button so the pool list can be refreshed manually.
## Solution
Added a "Refresh Pools" button that refetches pools from the database, and fixed `createPoolMutation` so it refetches correctly.
## Changes Made
### 1. Added Refresh Pools Button
**File**: `frontend/src/pages/Storage.tsx` (line 446-459)
Added a new button between "Rescan Disks" and "Create Pool":
```typescript
<button
onClick={async () => {
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
}}
disabled={poolsLoading}
className="flex items-center gap-2 px-4 py-2 rounded-lg border border-border-dark bg-card-dark text-white text-sm font-bold hover:bg-[#233648] transition-colors disabled:opacity-50"
title="Refresh pools list from database"
>
<span className={`material-symbols-outlined text-[20px] ${poolsLoading ? 'animate-spin' : ''}`}>
sync
</span>
{poolsLoading ? 'Refreshing...' : 'Refresh Pools'}
</button>
```
**Features:**
- `sync` icon with a spin animation while loading
- Disabled while pools are loading
- Tooltip: "Refresh pools list from database"
- Styling consistent with the other buttons
### 2. Fixed createPoolMutation
**File**: `frontend/src/pages/Storage.tsx` (line 219-239)
Fixed `createPoolMutation` to refetch with `await`:
```typescript
onSuccess: async () => {
// Invalidate and immediately refetch pools
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.invalidateQueries({ queryKey: ['storage', 'disks'] })
// ... rest of the code
alert('Pool created successfully!')
}
```
**Improvements:**
- Added `await` on `refetchQueries` to ensure the refetch completes
- Added a success alert for user feedback
## Button Layout
There are now three buttons in the header:
1. **Rescan Disks** - Rescans physical disks on the system
2. **Refresh Pools** - Refreshes the pool list from the database (NEW)
3. **Create Pool** - Creates a new ZFS pool
## Usage
Users can click the "Refresh Pools" button at any time to:
- Manually refresh after creating a pool
- Manually refresh after destroying a pool
- Manually refresh when the auto-refresh (every 3 seconds) is not fast enough
## Testing
1. Create a pool → click "Refresh Pools" → the pool appears
2. Destroy a pool → click "Refresh Pools" → the pool disappears
3. Auto-refresh keeps running every 3 seconds
## Status
**COMPLETED** - Refresh Pools button added and `createPoolMutation` fixed
## Date
2026-01-09

View File

@@ -1,89 +0,0 @@
# Refresh Pools UX Improvement
## Issue
UI updates after a refresh still took too long, so users assumed their command had failed when it actually had not. There was no clear feedback that the operation was in progress.
## Solution
Added a clearer loading state and better visual feedback to indicate that a refresh is in progress.
## Changes Made
### 1. Added Loading State
**File**: `frontend/src/pages/Storage.tsx`
Added state to track manual refreshes:
```typescript
const [isRefreshingPools, setIsRefreshingPools] = useState(false)
```
### 2. Improved Refresh Button
**File**: `frontend/src/pages/Storage.tsx` (line 446-465)
**Before:**
```typescript
<button
onClick={async () => {
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
}}
disabled={poolsLoading}
...
>
```
**After:**
```typescript
<button
onClick={async () => {
setIsRefreshingPools(true)
try {
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] })
// Small delay to show feedback
await new Promise(resolve => setTimeout(resolve, 300))
alert('Pools refreshed successfully!')
} catch (error) {
console.error('Failed to refresh pools:', error)
alert('Failed to refresh pools. Please try again.')
} finally {
setIsRefreshingPools(false)
}
}}
disabled={poolsLoading || isRefreshingPools}
className="... disabled:cursor-not-allowed"
...
>
<span className={`... ${(poolsLoading || isRefreshingPools) ? 'animate-spin' : ''}`}>
sync
</span>
{(poolsLoading || isRefreshingPools) ? 'Refreshing...' : 'Refresh Pools'}
</button>
```
## Improvements
### Visual Feedback
1. **Loading Spinner**: the `sync` icon spins during refresh
2. **Button Text**: changes to "Refreshing..." while loading
3. **Disabled State**: the button is disabled with a `not-allowed` cursor while loading
4. **Success Alert**: shows an alert once the refresh completes
5. **Error Handling**: shows an alert if the refresh fails
### User Experience
- Users get clear visual feedback that the operation is in progress
- Users get confirmation once the refresh completes
- Users are notified if an error occurs
- The button cannot be clicked repeatedly while the operation is running
## Testing
1. Click "Refresh Pools"
2. Verify the button shows the loading state (spinner + "Refreshing...")
3. Verify the button is disabled while loading
4. Verify the success alert appears once the refresh completes
5. Verify the pools list is updated
## Status
**COMPLETED** - UX improvements for the Refresh Pools button
## Date
2026-01-09

View File

@@ -1,77 +0,0 @@
# Secrets Environment File Setup
**Date:** 2025-01-09
**File:** `/etc/calypso/secrets.env`
**Status:** **CREATED**
## File Details
- **Location:** `/etc/calypso/secrets.env`
- **Owner:** `root:root`
- **Permissions:** `600` (read/write owner only)
- **Size:** 413 bytes
## Contents
The file contains environment variables for Calypso:
1. **CALYPSO_DB_PASSWORD**
- Database password for the PostgreSQL user `calypso`
- Value: `calypso_secure_2025`
- Length: 19 characters
2. **CALYPSO_JWT_SECRET**
- JWT secret key for authentication tokens
- Generated: Random base64 string (44 characters)
- Minimum requirement: 32 characters ✅
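A 44-character base64 string corresponds to 32 random bytes. For reference, a sketch of generating such a secret in Go (any equivalent, e.g. `openssl rand -base64 32`, produces the same shape of output):
```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

func main() {
	// 32 random bytes encode to a 44-character base64 string,
	// which clears the 32-character minimum for the JWT secret.
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	fmt.Println(base64.StdEncoding.EncodeToString(buf))
}
```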
## Security
**Permissions:** `600` (read/write owner only)
**Owner:** `root:root`
**Location:** `/etc/calypso/` (protected directory)
**JWT Secret:** Randomly generated, secure
⚠️ **Note:** The default password must be changed for production
## Usage
This file is loaded by the systemd service via the `EnvironmentFile` directive:
```ini
[Service]
EnvironmentFile=/etc/calypso/secrets.env
```
Or it can be sourced manually:
```bash
source /etc/calypso/secrets.env
export CALYPSO_DB_PASSWORD
export CALYPSO_JWT_SECRET
```
## Verification
The file has been verified:
- ✅ File exists
- ✅ Permissions correct (600)
- ✅ Owner correct (root:root)
- ✅ Variables can be sourced correctly
- ✅ JWT secret length >= 32 characters
## Next Steps
1. ✅ File is ready to use
2. ⏭️ The Calypso API service will load this file automatically
3. ⏭️ Update the password for the production environment (recommended)
## Important Notes
⚠️ **DO NOT:**
- Commit this file to version control
- Share this file publicly
- Use default password in production
**DO:**
- Keep file permissions at 600
- Rotate secrets periodically
- Use strong passwords in production
- Backup securely if needed

View File

@@ -1,229 +0,0 @@
# Calypso Systemd Service Setup
**Date:** 2025-01-09
**Service:** `calypso-api.service`
**Status:** **ACTIVE & RUNNING**
## Service File
**Location:** `/etc/systemd/system/calypso-api.service`
### Configuration
```ini
[Unit]
Description=AtlasOS - Calypso API Service
Documentation=https://github.com/atlasos/calypso
After=network.target postgresql.service
Wants=postgresql.service
[Service]
Type=simple
User=calypso
Group=calypso
WorkingDirectory=/opt/calypso
ExecStart=/opt/calypso/bin/calypso-api -config /opt/calypso/conf/config.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=calypso-api
# Environment
EnvironmentFile=/opt/calypso/conf/secrets.env
Environment="CALYPSO_DB_HOST=localhost"
Environment="CALYPSO_DB_PORT=5432"
Environment="CALYPSO_DB_USER=calypso"
Environment="CALYPSO_DB_NAME=calypso"
# Security
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/opt/calypso/data /opt/calypso/conf /var/log/calypso /var/lib/calypso /run/calypso
ReadOnlyPaths=/opt/calypso/bin /opt/calypso/web /opt/calypso/releases
# Resource limits
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
```
## Service Status
**Status:** Active (running)
**Enabled:** Yes (auto-start on boot)
**PID:** Running
**Memory:** ~12.4M
**Port:** 8080
## Service Management
### Start Service
```bash
sudo systemctl start calypso-api
```
### Stop Service
```bash
sudo systemctl stop calypso-api
```
### Restart Service
```bash
sudo systemctl restart calypso-api
```
### Reload Configuration (without restart)
```bash
sudo systemctl reload calypso-api
```
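Note that `reload` only sends SIGHUP to the main process (see `ExecReload` above), so it has an effect only if the binary traps that signal. A minimal sketch of such a handler, assuming the binary re-reads its config on SIGHUP (the actual behavior of `calypso-api` may differ):
```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	hup := make(chan os.Signal, 1)
	signal.Notify(hup, syscall.SIGHUP)
	go func() {
		for range hup {
			// Re-read config.yaml and apply whatever can change at runtime.
			log.Println("SIGHUP received, reloading configuration")
		}
	}()
	select {} // stand-in for the real HTTP server loop
}
```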
### Check Status
```bash
sudo systemctl status calypso-api
```
### Enable/Disable Auto-start
```bash
# Enable auto-start on boot
sudo systemctl enable calypso-api
# Disable auto-start
sudo systemctl disable calypso-api
# Check if enabled
sudo systemctl is-enabled calypso-api
```
## Viewing Logs
### Real-time Logs (Follow Mode)
```bash
sudo journalctl -u calypso-api -f
```
### Last 50 Lines
```bash
sudo journalctl -u calypso-api -n 50
```
### Logs Since Today
```bash
sudo journalctl -u calypso-api --since today
```
### Logs with Timestamps
```bash
sudo journalctl -u calypso-api --no-pager
```
## Service Configuration Details
### Working Directory
- **Path:** `/opt/calypso`
- **Purpose:** Base directory for application
### Binary Location
- **Path:** `/opt/calypso/bin/calypso-api`
- **Config:** `/opt/calypso/conf/config.yaml`
### Environment Variables
- **Secrets File:** `/opt/calypso/conf/secrets.env`
- `CALYPSO_DB_PASSWORD` - Database password
- `CALYPSO_JWT_SECRET` - JWT secret key
- **Database Config:**
- `CALYPSO_DB_HOST=localhost`
- `CALYPSO_DB_PORT=5432`
- `CALYPSO_DB_USER=calypso`
- `CALYPSO_DB_NAME=calypso`
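A sketch of how the service can consume these variables at startup, including the minimum-length check on the JWT secret (the variable names match the unit file above; the real config loader in `calypso-api` may be structured differently):
```go
package main

import (
	"log"
	"os"
)

func main() {
	// Values injected by systemd via EnvironmentFile= and Environment=.
	dbPassword := os.Getenv("CALYPSO_DB_PASSWORD")
	jwtSecret := os.Getenv("CALYPSO_JWT_SECRET")
	if dbPassword == "" {
		log.Fatal("CALYPSO_DB_PASSWORD is not set")
	}
	if len(jwtSecret) < 32 {
		log.Fatal("CALYPSO_JWT_SECRET must be at least 32 characters")
	}
	// ... hand off to config loading and server startup
}
```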
### Security Settings
- **NoNewPrivileges:** Prevents privilege escalation
- **PrivateTmp:** Isolated temporary directory
- **ProtectSystem:** Read-only system directories
- **ProtectHome:** Read-only home directories
- **ReadWritePaths:** Only specific paths writable
- **ReadOnlyPaths:** Application binaries read-only
### Resource Limits
- **Max Open Files:** 65536
- **Max Processes:** 4096
## Runtime Directories
- **Logs:** `/var/log/calypso/` (calypso:calypso)
- **Data:** `/var/lib/calypso/` (calypso:calypso)
- **Runtime:** `/run/calypso/` (calypso:calypso)
## Service Verification
### Check Service Status
```bash
sudo systemctl is-active calypso-api
# Output: active
```
### Check HTTP Endpoint
```bash
curl http://localhost:8080/api/v1/health
```
### Check Process
```bash
ps aux | grep calypso-api
```
### Check Port
```bash
sudo netstat -tlnp | grep 8080
# or
sudo ss -tlnp | grep 8080
```
## Startup Logs Analysis
From initial startup logs:
- ✅ Database connection successful
- ✅ Connected to Bacula database
- ✅ HTTP server started on port 8080
- ✅ MHVTL configuration sync completed
- ✅ Disk discovery completed (5 disks)
- ✅ Alert rules registered
- ✅ Monitoring services started
- ⚠️ Warning: RRD tool not found (network monitoring optional)
## Troubleshooting
### Service Won't Start
1. Check logs: `sudo journalctl -u calypso-api -n 50`
2. Check config file: `cat /opt/calypso/conf/config.yaml`
3. Check secrets file permissions: `ls -la /opt/calypso/conf/secrets.env`
4. Check database connection: `sudo -u postgres psql -U calypso -d calypso`
### Service Crashes/Restarts
1. Check logs for errors: `sudo journalctl -u calypso-api --since "10 minutes ago"`
2. Check system resources: `free -h` and `df -h`
3. Check database status: `sudo systemctl status postgresql`
### Permission Issues
1. Check ownership: `ls -la /opt/calypso/bin/calypso-api`
2. Check user exists: `id calypso`
3. Check directory permissions: `ls -la /opt/calypso/`
## Next Steps
1. ✅ Service installed and running
2. ⏭️ Setup reverse proxy (Caddy/Nginx) for frontend
3. ⏭️ Configure firewall rules (if needed)
4. ⏭️ Setup SSL/TLS certificates
5. ⏭️ Configure monitoring/alerting
---
**Service Status:** **OPERATIONAL**
**API Endpoint:** `http://localhost:8080`
**Health Check:** `http://localhost:8080/api/v1/health`

View File

@@ -1,59 +0,0 @@
# ZFS Pool Mountpoint Fix
## Issue
ZFS pool creation was failing with error:
```
cannot mount '/default': failed to create mountpoint: Read-only file system
```
The issue was that ZFS was trying to mount pools to the root filesystem (`/default`), which is read-only.
## Solution
Updated the ZFS pool creation code to set a default mountpoint to `/opt/calypso/data/pool/<pool-name>` for all pools.
## Changes Made
### 1. Updated `backend/internal/storage/zfs.go`
- Added mountpoint configuration during pool creation using `-m` flag
- Set default mountpoint to `/opt/calypso/data/pool/<pool-name>`
- Added code to create the mountpoint directory before pool creation
- Added logging for mountpoint creation
**Key Changes:**
```go
// Set default mountpoint to /opt/calypso/data/pool/<pool-name>
mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", name)
args = append(args, "-m", mountPoint)
// Create mountpoint directory if it doesn't exist
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mountpoint directory %s: %w", mountPoint, err)
}
```
### 2. Directory Setup
- Created `/opt/calypso/data/pool` directory
- Set ownership to `calypso:calypso`
- Set permissions to `0755`
## Default Mountpoint Structure
All ZFS pools will now be mounted under:
```
/opt/calypso/data/pool/
├── pool-name-1/
├── pool-name-2/
└── ...
```
## Testing
1. Backend rebuilt successfully
2. Service restarted successfully
3. Ready to test pool creation from frontend
## Next Steps
- Test pool creation from the frontend UI
- Verify that pools are mounted correctly at `/opt/calypso/data/pool/<pool-name>`
- Ensure proper permissions for pool mountpoints
## Date
2026-01-09

View File

@@ -1,44 +0,0 @@
# ZFS Pool Delete UI Update Fix
## Issue
When a ZFS pool is destroyed, the pool is removed from the system and database, but the UI doesn't update immediately to reflect the deletion.
## Root Cause
The frontend `deletePoolMutation` was not properly awaiting the refetch operation, which could cause race conditions where the UI doesn't update before the alert is shown.
## Solution
Added `await` to `refetchQueries` to ensure the query is refetched before showing the success alert.
## Changes Made
### Updated `frontend/src/pages/Storage.tsx`
- Added `await` to `refetchQueries` call in `deletePoolMutation.onSuccess`
- This ensures the pool list is refetched from the server before showing the success message
**Key Changes:**
```typescript
onSuccess: async () => {
// Invalidate and immediately refetch
await queryClient.invalidateQueries({ queryKey: ['storage', 'zfs', 'pools'] })
await queryClient.refetchQueries({ queryKey: ['storage', 'zfs', 'pools'] }) // Added await
await queryClient.invalidateQueries({ queryKey: ['storage', 'disks'] })
setSelectedPool(null)
alert('Pool destroyed successfully!')
},
```
## Additional Notes
- The frontend already has `refetchInterval: 3000` (3 seconds) for automatic pool list refresh
- Backend properly deletes pool from database in `DeletePool` function
- ZFS Pool Monitor syncs pools every 2 minutes to catch manually deleted pools
## Testing
1. Destroy pool through UI
2. Verify pool disappears from UI immediately
3. Verify success alert is shown after UI update
## Status
**FIXED** - Pool deletion now properly updates UI
## Date
2026-01-09

View File

@@ -1,40 +0,0 @@
# ZFS Pool UI Display Fix
## Issue
ZFS pool was successfully created in the system and database, but it was not appearing in the UI. The API was returning `{"pools": null}` even though the pool existed in the database.
## Root Cause
The issue was likely related to:
1. Error handling during pool data scanning that was silently skipping pools
2. Missing debug logging to identify scan failures
## Solution
Added debug logging to identify scan failures and ensure pools are properly scanned from the database.
## Changes Made
### Updated `backend/internal/storage/zfs.go`
- Added debug logging after successful pool row scan
- This helps identify if pools are being skipped during scan
**Key Changes:**
```go
// Added debug logging after scan
s.logger.Debug("Scanned pool row", "pool_id", pool.ID, "name", pool.Name)
```
## Testing
1. Pool "default" now appears correctly in API response
2. API returns pool data with all fields populated:
- id, name, description
- raid_level, disks, spare_disks
- size_bytes, used_bytes
- compression, deduplication, auto_expand
- health_status, compress_ratio
- created_at, updated_at, created_by
## Status
**FIXED** - Pool now appears correctly in UI
## Date
2026-01-09

Binary file not shown.

View File

@@ -65,13 +65,12 @@ func main() {
r := router.NewRouter(cfg, db, logger)
// Create HTTP server
// Note: WriteTimeout should be 0 for WebSocket connections (they handle their own timeouts)
srv := &http.Server{
Addr: fmt.Sprintf(":%d", cfg.Server.Port),
Handler: r,
ReadTimeout: 15 * time.Second,
WriteTimeout: 0, // 0 means no timeout - needed for WebSocket connections
IdleTimeout: 120 * time.Second, // Increased for WebSocket keep-alive
WriteTimeout: 15 * time.Second,
IdleTimeout: 60 * time.Second,
}
// Setup graceful shutdown

View File

@@ -5,19 +5,15 @@ go 1.24.0
toolchain go1.24.11
require (
github.com/creack/pty v1.1.24
github.com/gin-gonic/gin v1.10.0
github.com/go-playground/validator/v10 v10.20.0
github.com/golang-jwt/jwt/v5 v5.2.1
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/lib/pq v1.10.9
github.com/minio/madmin-go/v3 v3.0.110
github.com/minio/minio-go/v7 v7.0.97
github.com/stretchr/testify v1.11.1
go.uber.org/zap v1.27.0
golang.org/x/crypto v0.37.0
golang.org/x/sync v0.15.0
golang.org/x/crypto v0.23.0
golang.org/x/sync v0.7.0
golang.org/x/time v0.14.0
gopkg.in/yaml.v3 v3.0.1
)
@@ -25,57 +21,29 @@ require (
require (
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudwego/base64x v0.1.4 // indirect
github.com/cloudwego/iasm v0.2.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/gabriel-vasile/mimetype v1.4.3 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/golang-jwt/jwt/v4 v4.5.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/go-playground/validator/v10 v10.20.0 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.11 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/minio/crc64nvme v1.1.0 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.63.0 // indirect
github.com/prometheus/procfs v0.16.0 // indirect
github.com/prometheus/prom2json v1.4.2 // indirect
github.com/prometheus/prometheus v0.303.0 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/safchain/ethtool v0.5.10 // indirect
github.com/secure-io/sio-go v0.3.1 // indirect
github.com/shirou/gopsutil/v3 v3.24.5 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/tinylib/msgp v1.3.0 // indirect
github.com/tklauser/go-sysconf v0.3.15 // indirect
github.com/tklauser/numcpus v0.10.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.12 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/multierr v1.10.0 // indirect
golang.org/x/arch v0.8.0 // indirect
golang.org/x/net v0.39.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/text v0.26.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.20.0 // indirect
golang.org/x/text v0.15.0 // indirect
google.golang.org/protobuf v1.34.1 // indirect
)

View File

@@ -2,31 +2,19 @@ github.com/bytedance/sonic v1.11.6 h1:oUp34TzMlL+OY1OUWxHqsdkgC/Zfc85zGqw9siXjrc
github.com/bytedance/sonic v1.11.6/go.mod h1:LysEHSvpvDySVdC2f87zGWf6CIKJcAvqab1ZaiQtds4=
github.com/bytedance/sonic/loader v0.1.1 h1:c+e5Pt1k/cy5wMveRDyk2X4B9hF4g7an8N3zCYjJFNM=
github.com/bytedance/sonic/loader v0.1.1/go.mod h1:ncP89zfokxS5LZrJxl5z0UJcsk4M4yY2JpfqGeCtNLU=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudwego/base64x v0.1.4 h1:jwCgWpFanWmN8xoIUHa2rtzmkd5J2plF/dnLS6Xd/0Y=
github.com/cloudwego/base64x v0.1.4/go.mod h1:0zlkT4Wn5C6NdauXdJRhSKRlJvmclQ1hhJgA0rcu/8w=
github.com/cloudwego/iasm v0.2.0 h1:1KNIy1I1H9hNNFEEH3DVnI4UujN+1zjpuk6gwHLTssg=
github.com/cloudwego/iasm v0.2.0/go.mod h1:8rXZaNYT2n95jn+zTI1sDr+IgcD2GVs0nlbbQPiEFhY=
github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0=
github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -35,17 +23,12 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.20.0 h1:K9ISHbSaI0lyB2eWMPJo+kOS/FBExVwjEviJTixqxL8=
github.com/go-playground/validator/v10 v10.20.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI=
github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -53,75 +36,25 @@ github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aN
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.11 h1:0OwqZRYI2rFrjS4kvkDnqJkKHdHaRnCm68/DY4OxRzU=
github.com/klauspost/cpuid/v2 v2.2.11/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/klauspost/cpuid/v2 v2.2.7 h1:ZWSB3igEs+d0qvnxR/ZBzXVmxkgt8DdzP6m9pfuVLDM=
github.com/klauspost/cpuid/v2 v2.2.7/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35 h1:PpXWgLPs+Fqr325bN2FD2ISlRRztXibcX6e8f5FR5Dc=
github.com/lufia/plan9stats v0.0.0-20250317134145-8bc96cf8fc35/go.mod h1:autxFIvghDt3jPTLoqZ9OZ7s9qTGNAWmYCjVFWPX/zg=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/minio/crc64nvme v1.1.0 h1:e/tAguZ+4cw32D+IO/8GSf5UVr9y+3eJcxZI2WOO/7Q=
github.com/minio/crc64nvme v1.1.0/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/madmin-go/v3 v3.0.110 h1:FIYekj7YPc430ffpXFWiUtyut3qBt/unIAcDzJn9H5M=
github.com/minio/madmin-go/v3 v3.0.110/go.mod h1:WOe2kYmYl1OIlY2DSRHVQ8j1v4OItARQ6jGyQqcCud8=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.97 h1:lqhREPyfgHTB/ciX8k2r8k0D93WaFqxbJX36UZq5occ=
github.com/minio/minio-go/v7 v7.0.97/go.mod h1:re5VXuo0pwEtoNLsNuSr0RrLfT/MBtohwdaSmPPSRSk=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55 h1:o4JXh1EVt9k/+g42oCprj/FisM4qX9L3sZB3upGN2ZU=
github.com/power-devops/perfstat v0.0.0-20240221224432-82ca36839d55/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.63.0 h1:YR/EIY1o3mEFP/kZCD7iDMnLPlGyuU2Gb3HIcXnA98k=
github.com/prometheus/common v0.63.0/go.mod h1:VVFF/fBIoToEnWRVkYoXEkq3R3paCoxG9PXP74SnV18=
github.com/prometheus/procfs v0.16.0 h1:xh6oHhKwnOJKMYiYBDWmkHqQPyiY40sny36Cmx2bbsM=
github.com/prometheus/procfs v0.16.0/go.mod h1:8veyXUu3nGP7oaCxhX6yeaM5u4stL2FeMXnCqhDthZg=
github.com/prometheus/prom2json v1.4.2 h1:PxCTM+Whqi/eykO1MKsEL0p/zMpxp9ybpsmdFamw6po=
github.com/prometheus/prom2json v1.4.2/go.mod h1:zuvPm7u3epZSbXPWHny6G+o8ETgu6eAK3oPr6yFkRWE=
github.com/prometheus/prometheus v0.303.0 h1:wsNNsbd4EycMCphYnTmNY9JASBVbp7NWwJna857cGpA=
github.com/prometheus/prometheus v0.303.0/go.mod h1:8PMRi+Fk1WzopMDeb0/6hbNs9nV6zgySkU/zds5Lu3o=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/safchain/ethtool v0.5.10 h1:Im294gZtuf4pSGJRAOGKaASNi3wMeFaGaWuSaomedpc=
github.com/safchain/ethtool v0.5.10/go.mod h1:w9jh2Lx7YBR4UwzLkzCmWl85UY0W2uZdd7/DckVE5+c=
github.com/secure-io/sio-go v0.3.1 h1:dNvY9awjabXTYGsTF1PiCySl9Ltofk9GA3VdWlo7rRc=
github.com/secure-io/sio-go v0.3.1/go.mod h1:+xbkjDzPjwh4Axd07pRKSNriS9SCiYksWnZqdnfpQxs=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -135,57 +68,39 @@ github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXl
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tinylib/msgp v1.3.0 h1:ULuf7GPooDaIlbyvgAxBV/FI7ynli6LZ1/nVUNu+0ww=
github.com/tinylib/msgp v1.3.0/go.mod h1:ykjzy2wzgrlvpDCRc4LA8UXy6D8bzMSuAF3WD57Gok0=
github.com/tklauser/go-sysconf v0.3.15 h1:VE89k0criAymJ/Os65CSn1IXaol+1wrsFHEB8Ol49K4=
github.com/tklauser/go-sysconf v0.3.15/go.mod h1:Dmjwr6tYFIseJw7a3dRLJfsHAMXZ3nEnL/aZY+0IuI4=
github.com/tklauser/numcpus v0.10.0 h1:18njr6LDBk1zuna922MgdjQuJFjrdppsZG60sHGfjso=
github.com/tklauser/numcpus v0.10.0/go.mod h1:BiTKazU708GQTYF4mB+cmlpT2Is1gLk7XVuEeem8LsQ=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.12 h1:9LC83zGrHhuUA9l16C9AHXAqEV/2wBQ4nkvumAE65EE=
github.com/ugorji/go/codec v1.2.12/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.8.0 h1:3wRIsP3pM4yUptoR96otTUOXI367OS0+c9eeRi9doIc=
golang.org/x/arch v0.8.0/go.mod h1:FEVrYAQjsQXMVJ1nsMoVVXPZg6p2JE2mx8psSWTDQys=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.37.0 h1:kJNSjF/Xp7kU0iB2Z+9viTPMW4EqqsrywMXLJOOsXSE=
golang.org/x/crypto v0.37.0/go.mod h1:vg+k43peMZ0pUMhYmVAWysMK35e6ioLh3wB8ZCAfbVc=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.34.1 h1:9ddQBjfCyZPOHPUiPxpYESBLc+T8P3E+Vo4IbKZgFWg=
google.golang.org/protobuf v1.34.1/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -116,268 +116,3 @@ func (h *Handler) CreateJob(c *gin.Context) {
c.JSON(http.StatusCreated, job)
}
// ExecuteBconsoleCommand executes a bconsole command
func (h *Handler) ExecuteBconsoleCommand(c *gin.Context) {
var req struct {
Command string `json:"command" binding:"required"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "command is required"})
return
}
output, err := h.service.ExecuteBconsoleCommand(c.Request.Context(), req.Command)
if err != nil {
h.logger.Error("Failed to execute bconsole command", "error", err, "command", req.Command)
c.JSON(http.StatusInternalServerError, gin.H{
"error": "failed to execute command",
"output": output,
"details": err.Error(),
})
return
}
c.JSON(http.StatusOK, gin.H{
"output": output,
})
}
// ListClients lists all backup clients with optional filters
func (h *Handler) ListClients(c *gin.Context) {
opts := ListClientsOptions{}
// Parse enabled filter
if enabledStr := c.Query("enabled"); enabledStr != "" {
enabled := enabledStr == "true"
opts.Enabled = &enabled
}
// Parse search query
opts.Search = c.Query("search")
clients, err := h.service.ListClients(c.Request.Context(), opts)
if err != nil {
h.logger.Error("Failed to list clients", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{
"error": "failed to list clients",
"details": err.Error(),
})
return
}
if clients == nil {
clients = []Client{}
}
c.JSON(http.StatusOK, gin.H{
"clients": clients,
"total": len(clients),
})
}
// GetDashboardStats returns dashboard statistics
func (h *Handler) GetDashboardStats(c *gin.Context) {
stats, err := h.service.GetDashboardStats(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get dashboard stats", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get dashboard stats"})
return
}
c.JSON(http.StatusOK, stats)
}
// ListStoragePools lists all storage pools
func (h *Handler) ListStoragePools(c *gin.Context) {
pools, err := h.service.ListStoragePools(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list storage pools", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage pools"})
return
}
if pools == nil {
pools = []StoragePool{}
}
h.logger.Info("Listed storage pools", "count", len(pools))
c.JSON(http.StatusOK, gin.H{
"pools": pools,
"total": len(pools),
})
}
// ListStorageVolumes lists all storage volumes
func (h *Handler) ListStorageVolumes(c *gin.Context) {
poolName := c.Query("pool_name")
volumes, err := h.service.ListStorageVolumes(c.Request.Context(), poolName)
if err != nil {
h.logger.Error("Failed to list storage volumes", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage volumes"})
return
}
if volumes == nil {
volumes = []StorageVolume{}
}
c.JSON(http.StatusOK, gin.H{
"volumes": volumes,
"total": len(volumes),
})
}
// ListStorageDaemons lists all storage daemons
func (h *Handler) ListStorageDaemons(c *gin.Context) {
daemons, err := h.service.ListStorageDaemons(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list storage daemons", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list storage daemons"})
return
}
if daemons == nil {
daemons = []StorageDaemon{}
}
c.JSON(http.StatusOK, gin.H{
"daemons": daemons,
"total": len(daemons),
})
}
// CreateStoragePool creates a new storage pool
func (h *Handler) CreateStoragePool(c *gin.Context) {
var req CreatePoolRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
pool, err := h.service.CreateStoragePool(c.Request.Context(), req)
if err != nil {
h.logger.Error("Failed to create storage pool", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, pool)
}
// DeleteStoragePool deletes a storage pool
func (h *Handler) DeleteStoragePool(c *gin.Context) {
idStr := c.Param("id")
if idStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "pool ID is required"})
return
}
var poolID int
if _, err := fmt.Sscanf(idStr, "%d", &poolID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pool ID"})
return
}
err := h.service.DeleteStoragePool(c.Request.Context(), poolID)
if err != nil {
h.logger.Error("Failed to delete storage pool", "error", err, "pool_id", poolID)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "pool deleted successfully"})
}
// CreateStorageVolume creates a new storage volume
func (h *Handler) CreateStorageVolume(c *gin.Context) {
var req CreateVolumeRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
volume, err := h.service.CreateStorageVolume(c.Request.Context(), req)
if err != nil {
h.logger.Error("Failed to create storage volume", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, volume)
}
// UpdateStorageVolume updates a storage volume
func (h *Handler) UpdateStorageVolume(c *gin.Context) {
idStr := c.Param("id")
if idStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "volume ID is required"})
return
}
var volumeID int
if _, err := fmt.Sscanf(idStr, "%d", &volumeID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid volume ID"})
return
}
var req UpdateVolumeRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
volume, err := h.service.UpdateStorageVolume(c.Request.Context(), volumeID, req)
if err != nil {
h.logger.Error("Failed to update storage volume", "error", err, "volume_id", volumeID)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, volume)
}
// DeleteStorageVolume deletes a storage volume
func (h *Handler) DeleteStorageVolume(c *gin.Context) {
idStr := c.Param("id")
if idStr == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "volume ID is required"})
return
}
var volumeID int
if _, err := fmt.Sscanf(idStr, "%d", &volumeID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid volume ID"})
return
}
err := h.service.DeleteStorageVolume(c.Request.Context(), volumeID)
if err != nil {
h.logger.Error("Failed to delete storage volume", "error", err, "volume_id", volumeID)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "volume deleted successfully"})
}
// ListMedia lists all media from bconsole "list media" command
func (h *Handler) ListMedia(c *gin.Context) {
media, err := h.service.ListMedia(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list media", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
if media == nil {
media = []Media{}
}
h.logger.Info("Listed media", "count", len(media))
c.JSON(http.StatusOK, gin.H{
"media": media,
"total": len(media),
})
}

File diff suppressed because it is too large

View File

@@ -10,12 +10,11 @@ import (
// Config represents the application configuration
type Config struct {
Server ServerConfig `yaml:"server"`
Database DatabaseConfig `yaml:"database"`
Auth AuthConfig `yaml:"auth"`
Logging LoggingConfig `yaml:"logging"`
Security SecurityConfig `yaml:"security"`
ObjectStorage ObjectStorageConfig `yaml:"object_storage"`
Server ServerConfig `yaml:"server"`
Database DatabaseConfig `yaml:"database"`
Auth AuthConfig `yaml:"auth"`
Logging LoggingConfig `yaml:"logging"`
Security SecurityConfig `yaml:"security"`
}
// ServerConfig holds HTTP server configuration
@@ -97,14 +96,6 @@ type SecurityHeadersConfig struct {
Enabled bool `yaml:"enabled"`
}
// ObjectStorageConfig holds MinIO configuration
type ObjectStorageConfig struct {
Endpoint string `yaml:"endpoint"`
AccessKey string `yaml:"access_key"`
SecretKey string `yaml:"secret_key"`
UseSSL bool `yaml:"use_ssl"`
}
// Load reads configuration from file and environment variables
func Load(path string) (*Config, error) {
cfg := DefaultConfig()

View File

@@ -1,22 +0,0 @@
-- Migration: Object Storage Configuration
-- Description: Creates table for storing MinIO object storage configuration
-- Date: 2026-01-09
CREATE TABLE IF NOT EXISTS object_storage_config (
id SERIAL PRIMARY KEY,
dataset_path VARCHAR(255) NOT NULL UNIQUE,
mount_point VARCHAR(512) NOT NULL,
pool_name VARCHAR(255) NOT NULL,
dataset_name VARCHAR(255) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
CREATE INDEX IF NOT EXISTS idx_object_storage_config_pool_name ON object_storage_config(pool_name);
CREATE INDEX IF NOT EXISTS idx_object_storage_config_updated_at ON object_storage_config(updated_at);
COMMENT ON TABLE object_storage_config IS 'Stores MinIO object storage configuration, linking to ZFS datasets';
COMMENT ON COLUMN object_storage_config.dataset_path IS 'Full ZFS dataset path (e.g., pool/dataset)';
COMMENT ON COLUMN object_storage_config.mount_point IS 'Mount point path for the dataset';
COMMENT ON COLUMN object_storage_config.pool_name IS 'ZFS pool name';
COMMENT ON COLUMN object_storage_config.dataset_name IS 'ZFS dataset name';

View File

@@ -13,30 +13,24 @@ import (
// authMiddleware validates JWT tokens and sets user context
func authMiddleware(authHandler *auth.Handler) gin.HandlerFunc {
return func(c *gin.Context) {
var token string
// Try to extract token from Authorization header first
// Extract token from Authorization header
authHeader := c.GetHeader("Authorization")
if authHeader != "" {
// Parse Bearer token
parts := strings.SplitN(authHeader, " ", 2)
if len(parts) == 2 && parts[0] == "Bearer" {
token = parts[1]
}
}
// If no token from header, try query parameter (for WebSocket)
if token == "" {
token = c.Query("token")
}
// If still no token, return error
if token == "" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "missing authorization token"})
if authHeader == "" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "missing authorization header"})
c.Abort()
return
}
// Parse Bearer token
parts := strings.SplitN(authHeader, " ", 2)
if len(parts) != 2 || parts[0] != "Bearer" {
c.JSON(http.StatusUnauthorized, gin.H{"error": "invalid authorization header format"})
c.Abort()
return
}
token := parts[1]
// Validate token and get user
user, err := authHandler.ValidateToken(token)
if err != nil {

View File

@@ -13,9 +13,7 @@ import (
"github.com/atlasos/calypso/internal/common/logger"
"github.com/atlasos/calypso/internal/iam"
"github.com/atlasos/calypso/internal/monitoring"
"github.com/atlasos/calypso/internal/object_storage"
"github.com/atlasos/calypso/internal/scst"
"github.com/atlasos/calypso/internal/shares"
"github.com/atlasos/calypso/internal/storage"
"github.com/atlasos/calypso/internal/system"
"github.com/atlasos/calypso/internal/tape_physical"
@@ -200,57 +198,6 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
storageGroup.GET("/zfs/arc/stats", storageHandler.GetARCStats)
}
// Shares (CIFS/NFS)
sharesHandler := shares.NewHandler(db, log)
sharesGroup := protected.Group("/shares")
sharesGroup.Use(requirePermission("storage", "read"))
{
sharesGroup.GET("", sharesHandler.ListShares)
sharesGroup.GET("/:id", sharesHandler.GetShare)
sharesGroup.POST("", requirePermission("storage", "write"), sharesHandler.CreateShare)
sharesGroup.PUT("/:id", requirePermission("storage", "write"), sharesHandler.UpdateShare)
sharesGroup.DELETE("/:id", requirePermission("storage", "write"), sharesHandler.DeleteShare)
}
// Object Storage (MinIO)
// Initialize MinIO service if configured
if cfg.ObjectStorage.Endpoint != "" {
objectStorageService, err := object_storage.NewService(
cfg.ObjectStorage.Endpoint,
cfg.ObjectStorage.AccessKey,
cfg.ObjectStorage.SecretKey,
log,
)
if err != nil {
log.Error("Failed to initialize MinIO service", "error", err)
} else {
objectStorageHandler := object_storage.NewHandler(objectStorageService, db, log)
objectStorageGroup := protected.Group("/object-storage")
objectStorageGroup.Use(requirePermission("storage", "read"))
{
// Setup endpoints
objectStorageGroup.GET("/setup/datasets", objectStorageHandler.GetAvailableDatasets)
objectStorageGroup.GET("/setup/current", objectStorageHandler.GetCurrentSetup)
objectStorageGroup.POST("/setup", requirePermission("storage", "write"), objectStorageHandler.SetupObjectStorage)
objectStorageGroup.PUT("/setup", requirePermission("storage", "write"), objectStorageHandler.UpdateObjectStorage)
// Bucket endpoints
objectStorageGroup.GET("/buckets", objectStorageHandler.ListBuckets)
objectStorageGroup.GET("/buckets/:name", objectStorageHandler.GetBucket)
objectStorageGroup.POST("/buckets", requirePermission("storage", "write"), objectStorageHandler.CreateBucket)
objectStorageGroup.DELETE("/buckets/:name", requirePermission("storage", "write"), objectStorageHandler.DeleteBucket)
// User management routes
objectStorageGroup.GET("/users", objectStorageHandler.ListUsers)
objectStorageGroup.POST("/users", requirePermission("storage", "write"), objectStorageHandler.CreateUser)
objectStorageGroup.DELETE("/users/:access_key", requirePermission("storage", "write"), objectStorageHandler.DeleteUser)
// Service account (access key) management routes
objectStorageGroup.GET("/service-accounts", objectStorageHandler.ListServiceAccounts)
objectStorageGroup.POST("/service-accounts", requirePermission("storage", "write"), objectStorageHandler.CreateServiceAccount)
objectStorageGroup.DELETE("/service-accounts/:access_key", requirePermission("storage", "write"), objectStorageHandler.DeleteServiceAccount)
}
}
}
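requirePermission appears throughout this router but is not part of this diff. A hypothetical sketch of such a gate, assuming authMiddleware stores the user's permission strings in the Gin context; the context key and the "resource:action" format are assumptions, not the real helper:

```go
package routersketch

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// requirePermission rejects requests whose context lacks the named permission.
func requirePermission(resource, action string) gin.HandlerFunc {
	return func(c *gin.Context) {
		v, _ := c.Get("permissions") // assumed to be set by authMiddleware
		perms, _ := v.([]string)
		needed := resource + ":" + action
		for _, p := range perms {
			if p == needed {
				c.Next()
				return
			}
		}
		c.AbortWithStatusJSON(http.StatusForbidden, gin.H{"error": "permission denied"})
	}
}
```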
// SCST
scstHandler := scst.NewHandler(db, log)
scstGroup := protected.Group("/scst")
@@ -259,12 +206,10 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
scstGroup.GET("/targets", scstHandler.ListTargets)
scstGroup.GET("/targets/:id", scstHandler.GetTarget)
scstGroup.POST("/targets", scstHandler.CreateTarget)
scstGroup.POST("/targets/:id/luns", requirePermission("iscsi", "write"), scstHandler.AddLUN)
scstGroup.DELETE("/targets/:id/luns/:lunId", requirePermission("iscsi", "write"), scstHandler.RemoveLUN)
scstGroup.POST("/targets/:id/luns", scstHandler.AddLUN)
scstGroup.POST("/targets/:id/initiators", scstHandler.AddInitiator)
scstGroup.POST("/targets/:id/enable", scstHandler.EnableTarget)
scstGroup.POST("/targets/:id/disable", scstHandler.DisableTarget)
scstGroup.DELETE("/targets/:id", requirePermission("iscsi", "write"), scstHandler.DeleteTarget)
scstGroup.GET("/initiators", scstHandler.ListAllInitiators)
scstGroup.GET("/initiators/:id", scstHandler.GetInitiator)
scstGroup.DELETE("/initiators/:id", scstHandler.RemoveInitiator)
@@ -278,16 +223,6 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
scstGroup.POST("/portals", scstHandler.CreatePortal)
scstGroup.PUT("/portals/:id", scstHandler.UpdatePortal)
scstGroup.DELETE("/portals/:id", scstHandler.DeletePortal)
// Initiator Groups routes
scstGroup.GET("/initiator-groups", scstHandler.ListAllInitiatorGroups)
scstGroup.GET("/initiator-groups/:id", scstHandler.GetInitiatorGroup)
scstGroup.POST("/initiator-groups", requirePermission("iscsi", "write"), scstHandler.CreateInitiatorGroup)
scstGroup.PUT("/initiator-groups/:id", requirePermission("iscsi", "write"), scstHandler.UpdateInitiatorGroup)
scstGroup.DELETE("/initiator-groups/:id", requirePermission("iscsi", "write"), scstHandler.DeleteInitiatorGroup)
scstGroup.POST("/initiator-groups/:id/initiators", requirePermission("iscsi", "write"), scstHandler.AddInitiatorToGroup)
// Config file management
scstGroup.GET("/config/file", requirePermission("iscsi", "read"), scstHandler.GetConfigFile)
scstGroup.PUT("/config/file", requirePermission("iscsi", "write"), scstHandler.UpdateConfigFile)
}
// Physical Tape Libraries
@@ -325,18 +260,7 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
}
// System Management
systemService := system.NewService(log)
systemHandler := system.NewHandler(log, tasks.NewEngine(db, log))
// Set service in handler (if handler needs direct access)
// Note: Handler already has service via NewHandler, but we need to ensure it's the same instance
// Start network monitoring with RRD
if err := systemService.StartNetworkMonitoring(context.Background()); err != nil {
log.Warn("Failed to start network monitoring", "error", err)
} else {
log.Info("Network monitoring started with RRD")
}
systemGroup := protected.Group("/system")
systemGroup.Use(requirePermission("system", "read"))
{
@@ -344,15 +268,8 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
systemGroup.GET("/services/:name", systemHandler.GetServiceStatus)
systemGroup.POST("/services/:name/restart", systemHandler.RestartService)
systemGroup.GET("/services/:name/logs", systemHandler.GetServiceLogs)
systemGroup.GET("/logs", systemHandler.GetSystemLogs)
systemGroup.GET("/network/throughput", systemHandler.GetNetworkThroughput)
systemGroup.POST("/support-bundle", systemHandler.GenerateSupportBundle)
systemGroup.GET("/interfaces", systemHandler.ListNetworkInterfaces)
systemGroup.GET("/management-ip", systemHandler.GetManagementIPAddress)
systemGroup.PUT("/interfaces/:name", systemHandler.UpdateNetworkInterface)
systemGroup.GET("/ntp", systemHandler.GetNTPSettings)
systemGroup.POST("/ntp", systemHandler.SaveNTPSettings)
systemGroup.POST("/execute", requirePermission("system", "write"), systemHandler.ExecuteCommand)
systemGroup.GET("/interfaces", systemHandler.ListNetworkInterfaces)
}
// IAM routes - GetUser can be accessed by user viewing own profile or admin
@@ -413,21 +330,9 @@ func NewRouter(cfg *config.Config, db *database.DB, log *logger.Logger) *gin.Eng
backupGroup := protected.Group("/backup")
backupGroup.Use(requirePermission("backup", "read"))
{
backupGroup.GET("/dashboard/stats", backupHandler.GetDashboardStats)
backupGroup.GET("/jobs", backupHandler.ListJobs)
backupGroup.GET("/jobs/:id", backupHandler.GetJob)
backupGroup.POST("/jobs", requirePermission("backup", "write"), backupHandler.CreateJob)
backupGroup.GET("/clients", backupHandler.ListClients)
backupGroup.GET("/storage/pools", backupHandler.ListStoragePools)
backupGroup.POST("/storage/pools", requirePermission("backup", "write"), backupHandler.CreateStoragePool)
backupGroup.DELETE("/storage/pools/:id", requirePermission("backup", "write"), backupHandler.DeleteStoragePool)
backupGroup.GET("/storage/volumes", backupHandler.ListStorageVolumes)
backupGroup.POST("/storage/volumes", requirePermission("backup", "write"), backupHandler.CreateStorageVolume)
backupGroup.PUT("/storage/volumes/:id", requirePermission("backup", "write"), backupHandler.UpdateStorageVolume)
backupGroup.DELETE("/storage/volumes/:id", requirePermission("backup", "write"), backupHandler.DeleteStorageVolume)
backupGroup.GET("/media", backupHandler.ListMedia)
backupGroup.GET("/storage/daemons", backupHandler.ListStorageDaemons)
backupGroup.POST("/console/execute", requirePermission("backup", "write"), backupHandler.ExecuteBconsoleCommand)
}
// Monitoring

View File

@@ -88,14 +88,11 @@ func GetUserGroups(db *database.DB, userID string) ([]string, error) {
for rows.Next() {
var groupName string
if err := rows.Scan(&groupName); err != nil {
return []string{}, err
return nil, err
}
groups = append(groups, groupName)
}
if groups == nil {
groups = []string{}
}
return groups, rows.Err()
}
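
The dropped normalization mattered at the JSON boundary: encoding/json renders a nil slice as `null` and an empty slice as `[]`, so callers that need `[]` must now normalize themselves. A short demonstration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlice []string
	a, _ := json.Marshal(nilSlice)
	b, _ := json.Marshal([]string{})
	fmt.Println(string(a), string(b)) // null []
}
```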

View File

@@ -69,17 +69,6 @@ func (h *Handler) ListUsers(c *gin.Context) {
permissions, _ := GetUserPermissions(h.db, u.ID)
groups, _ := GetUserGroups(h.db, u.ID)
// Ensure arrays are never nil (use empty slice instead)
if roles == nil {
roles = []string{}
}
if permissions == nil {
permissions = []string{}
}
if groups == nil {
groups = []string{}
}
users = append(users, map[string]interface{}{
"id": u.ID,
"username": u.Username,
@@ -149,17 +138,6 @@ func (h *Handler) GetUser(c *gin.Context) {
permissions, _ := GetUserPermissions(h.db, userID)
groups, _ := GetUserGroups(h.db, userID)
// Ensure arrays are never nil (use empty slice instead)
if roles == nil {
roles = []string{}
}
if permissions == nil {
permissions = []string{}
}
if groups == nil {
groups = []string{}
}
c.JSON(http.StatusOK, gin.H{
"id": user.ID,
"username": user.Username,
@@ -258,8 +236,6 @@ func (h *Handler) UpdateUser(c *gin.Context) {
}
// Allow update if roles or groups are provided, even if no other fields are updated
// Note: req.Roles and req.Groups can be empty arrays ([]), which is different from nil
// Empty array means "remove all roles/groups", nil means "don't change roles/groups"
if len(updates) == 1 && req.Roles == nil && req.Groups == nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "no fields to update"})
return
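That nil-versus-empty contract hinges on the request fields being pointers to slices, which is consistent with the `*req.Roles` dereferences below. A minimal sketch of the pattern (the struct shape is assumed for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Pointer fields distinguish "key absent" (nil) from "present but empty".
type updateUserRequest struct {
	Roles  *[]string `json:"roles"`
	Groups *[]string `json:"groups"`
}

func main() {
	var a, b updateUserRequest
	_ = json.Unmarshal([]byte(`{}`), &a)
	_ = json.Unmarshal([]byte(`{"roles": []}`), &b)
	fmt.Println(a.Roles == nil)                // true  -> don't change roles
	fmt.Println(b.Roles != nil, len(*b.Roles)) // true 0 -> remove all roles
}
```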
@@ -283,14 +259,13 @@ func (h *Handler) UpdateUser(c *gin.Context) {
// Update roles if provided
if req.Roles != nil {
h.logger.Info("Updating user roles", "user_id", userID, "requested_roles", *req.Roles)
h.logger.Info("Updating user roles", "user_id", userID, "roles", *req.Roles)
currentRoles, err := GetUserRoles(h.db, userID)
if err != nil {
h.logger.Error("Failed to get current roles for user", "user_id", userID, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to process user roles"})
return
}
h.logger.Info("Current user roles", "user_id", userID, "current_roles", currentRoles)
rolesToAdd := []string{}
rolesToRemove := []string{}
@@ -323,15 +298,8 @@ func (h *Handler) UpdateUser(c *gin.Context) {
}
}
h.logger.Info("Roles to add", "user_id", userID, "roles_to_add", rolesToAdd, "count", len(rolesToAdd))
h.logger.Info("Roles to remove", "user_id", userID, "roles_to_remove", rolesToRemove, "count", len(rolesToRemove))
// Add new roles
if len(rolesToAdd) == 0 {
h.logger.Info("No roles to add", "user_id", userID)
}
for _, roleName := range rolesToAdd {
h.logger.Info("Processing role to add", "user_id", userID, "role_name", roleName)
roleID, err := GetRoleIDByName(h.db, roleName)
if err != nil {
if err == sql.ErrNoRows {
@@ -343,13 +311,12 @@ func (h *Handler) UpdateUser(c *gin.Context) {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to process roles"})
return
}
h.logger.Info("Attempting to add role", "user_id", userID, "role_id", roleID, "role_name", roleName, "assigned_by", currentUser.ID)
if err := AddUserRole(h.db, userID, roleID, currentUser.ID); err != nil {
h.logger.Error("Failed to add role to user", "user_id", userID, "role_id", roleID, "role_name", roleName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": fmt.Sprintf("failed to add role '%s': %v", roleName, err)})
return
h.logger.Error("Failed to add role to user", "user_id", userID, "role_id", roleID, "error", err)
// Don't return early, continue with other roles
continue
}
h.logger.Info("Role successfully added to user", "user_id", userID, "role_id", roleID, "role_name", roleName)
h.logger.Info("Role added to user", "user_id", userID, "role_name", roleName)
}
// Remove old roles
@@ -448,48 +415,8 @@ func (h *Handler) UpdateUser(c *gin.Context) {
}
}
// Fetch updated user data to return
updatedUser, err := GetUserByID(h.db, userID)
if err != nil {
h.logger.Error("Failed to fetch updated user", "user_id", userID, "error", err)
c.JSON(http.StatusOK, gin.H{"message": "user updated successfully"})
return
}
// Get updated roles, permissions, and groups
updatedRoles, _ := GetUserRoles(h.db, userID)
updatedPermissions, _ := GetUserPermissions(h.db, userID)
updatedGroups, _ := GetUserGroups(h.db, userID)
// Ensure arrays are never nil
if updatedRoles == nil {
updatedRoles = []string{}
}
if updatedPermissions == nil {
updatedPermissions = []string{}
}
if updatedGroups == nil {
updatedGroups = []string{}
}
h.logger.Info("User updated", "user_id", userID, "roles", updatedRoles, "groups", updatedGroups)
c.JSON(http.StatusOK, gin.H{
"message": "user updated successfully",
"user": gin.H{
"id": updatedUser.ID,
"username": updatedUser.Username,
"email": updatedUser.Email,
"full_name": updatedUser.FullName,
"is_active": updatedUser.IsActive,
"is_system": updatedUser.IsSystem,
"roles": updatedRoles,
"permissions": updatedPermissions,
"groups": updatedGroups,
"created_at": updatedUser.CreatedAt,
"updated_at": updatedUser.UpdatedAt,
"last_login_at": updatedUser.LastLoginAt,
},
})
h.logger.Info("User updated", "user_id", userID)
c.JSON(http.StatusOK, gin.H{"message": "user updated successfully"})
}
// DeleteUser deletes a user

View File

@@ -2,7 +2,6 @@ package iam
import (
"database/sql"
"fmt"
"time"
"github.com/atlasos/calypso/internal/common/database"
@@ -91,14 +90,11 @@ func GetUserRoles(db *database.DB, userID string) ([]string, error) {
for rows.Next() {
var role string
if err := rows.Scan(&role); err != nil {
return []string{}, err
return nil, err
}
roles = append(roles, role)
}
if roles == nil {
roles = []string{}
}
return roles, rows.Err()
}
@@ -122,14 +118,11 @@ func GetUserPermissions(db *database.DB, userID string) ([]string, error) {
for rows.Next() {
var perm string
if err := rows.Scan(&perm); err != nil {
return []string{}, err
return nil, err
}
permissions = append(permissions, perm)
}
if permissions == nil {
permissions = []string{}
}
return permissions, rows.Err()
}
@@ -140,23 +133,8 @@ func AddUserRole(db *database.DB, userID, roleID, assignedBy string) error {
VALUES ($1, $2, $3)
ON CONFLICT (user_id, role_id) DO NOTHING
`
result, err := db.Exec(query, userID, roleID, assignedBy)
if err != nil {
return fmt.Errorf("failed to insert user role: %w", err)
}
// Check if row was actually inserted (not just skipped due to conflict)
rowsAffected, err := result.RowsAffected()
if err != nil {
return fmt.Errorf("failed to get rows affected: %w", err)
}
if rowsAffected == 0 {
// Row already exists, this is not an error but we should know about it
return nil // ON CONFLICT DO NOTHING means this is expected
}
return nil
_, err := db.Exec(query, userID, roleID, assignedBy)
return err
}
// RemoveUserRole removes a role from a user

View File

@@ -1,285 +0,0 @@
package object_storage
import (
"net/http"
"time"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/gin-gonic/gin"
)
// Handler handles HTTP requests for object storage
type Handler struct {
service *Service
setupService *SetupService
logger *logger.Logger
}
// NewHandler creates a new object storage handler
func NewHandler(service *Service, db *database.DB, log *logger.Logger) *Handler {
return &Handler{
service: service,
setupService: NewSetupService(db, log),
logger: log,
}
}
// ListBuckets lists all buckets
func (h *Handler) ListBuckets(c *gin.Context) {
buckets, err := h.service.ListBuckets(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list buckets", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list buckets: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"buckets": buckets})
}
// GetBucket gets bucket information
func (h *Handler) GetBucket(c *gin.Context) {
bucketName := c.Param("name")
bucket, err := h.service.GetBucketStats(c.Request.Context(), bucketName)
if err != nil {
h.logger.Error("Failed to get bucket", "bucket", bucketName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get bucket: " + err.Error()})
return
}
c.JSON(http.StatusOK, bucket)
}
// CreateBucketRequest represents a request to create a bucket
type CreateBucketRequest struct {
Name string `json:"name" binding:"required"`
}
// CreateBucket creates a new bucket
func (h *Handler) CreateBucket(c *gin.Context) {
var req CreateBucketRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create bucket request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
if err := h.service.CreateBucket(c.Request.Context(), req.Name); err != nil {
h.logger.Error("Failed to create bucket", "bucket", req.Name, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create bucket: " + err.Error()})
return
}
c.JSON(http.StatusCreated, gin.H{"message": "bucket created successfully", "name": req.Name})
}
// DeleteBucket deletes a bucket
func (h *Handler) DeleteBucket(c *gin.Context) {
bucketName := c.Param("name")
if err := h.service.DeleteBucket(c.Request.Context(), bucketName); err != nil {
h.logger.Error("Failed to delete bucket", "bucket", bucketName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete bucket: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "bucket deleted successfully"})
}
// GetAvailableDatasets gets all available pools and datasets for object storage setup
func (h *Handler) GetAvailableDatasets(c *gin.Context) {
datasets, err := h.setupService.GetAvailableDatasets(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get available datasets", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get available datasets: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"pools": datasets})
}
// SetupObjectStorageRequest represents a request to setup object storage
type SetupObjectStorageRequest struct {
PoolName string `json:"pool_name" binding:"required"`
DatasetName string `json:"dataset_name" binding:"required"`
CreateNew bool `json:"create_new"`
}
// SetupObjectStorage configures object storage with a ZFS dataset
func (h *Handler) SetupObjectStorage(c *gin.Context) {
var req SetupObjectStorageRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid setup request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
setupReq := SetupRequest{
PoolName: req.PoolName,
DatasetName: req.DatasetName,
CreateNew: req.CreateNew,
}
result, err := h.setupService.SetupObjectStorage(c.Request.Context(), setupReq)
if err != nil {
h.logger.Error("Failed to setup object storage", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to setup object storage: " + err.Error()})
return
}
c.JSON(http.StatusOK, result)
}
// GetCurrentSetup gets the current object storage configuration
func (h *Handler) GetCurrentSetup(c *gin.Context) {
setup, err := h.setupService.GetCurrentSetup(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get current setup", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get current setup: " + err.Error()})
return
}
if setup == nil {
c.JSON(http.StatusOK, gin.H{"configured": false})
return
}
c.JSON(http.StatusOK, gin.H{"configured": true, "setup": setup})
}
// UpdateObjectStorage updates the object storage configuration
func (h *Handler) UpdateObjectStorage(c *gin.Context) {
var req SetupObjectStorageRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid update request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
setupReq := SetupRequest{
PoolName: req.PoolName,
DatasetName: req.DatasetName,
CreateNew: req.CreateNew,
}
result, err := h.setupService.UpdateObjectStorage(c.Request.Context(), setupReq)
if err != nil {
h.logger.Error("Failed to update object storage", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to update object storage: " + err.Error()})
return
}
c.JSON(http.StatusOK, result)
}
// ListUsers lists all IAM users
func (h *Handler) ListUsers(c *gin.Context) {
users, err := h.service.ListUsers(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list users", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list users: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"users": users})
}
// CreateUserRequest represents a request to create a user
type CreateUserRequest struct {
AccessKey string `json:"access_key" binding:"required"`
SecretKey string `json:"secret_key" binding:"required"`
}
// CreateUser creates a new IAM user
func (h *Handler) CreateUser(c *gin.Context) {
var req CreateUserRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create user request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
if err := h.service.CreateUser(c.Request.Context(), req.AccessKey, req.SecretKey); err != nil {
h.logger.Error("Failed to create user", "access_key", req.AccessKey, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create user: " + err.Error()})
return
}
c.JSON(http.StatusCreated, gin.H{"message": "user created successfully", "access_key": req.AccessKey})
}
// DeleteUser deletes an IAM user
func (h *Handler) DeleteUser(c *gin.Context) {
accessKey := c.Param("access_key")
if err := h.service.DeleteUser(c.Request.Context(), accessKey); err != nil {
h.logger.Error("Failed to delete user", "access_key", accessKey, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete user: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "user deleted successfully"})
}
// ListServiceAccounts lists all service accounts (access keys)
func (h *Handler) ListServiceAccounts(c *gin.Context) {
accounts, err := h.service.ListServiceAccounts(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list service accounts", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list service accounts: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"service_accounts": accounts})
}
// CreateServiceAccountRequest represents a request to create a service account
type CreateServiceAccountRequest struct {
ParentUser string `json:"parent_user" binding:"required"`
Policy string `json:"policy,omitempty"`
Expiration *string `json:"expiration,omitempty"` // ISO 8601 format
}
// CreateServiceAccount creates a new service account (access key)
func (h *Handler) CreateServiceAccount(c *gin.Context) {
var req CreateServiceAccountRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create service account request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
var expiration *time.Time
if req.Expiration != nil {
exp, err := time.Parse(time.RFC3339, *req.Expiration)
if err != nil {
h.logger.Error("Invalid expiration format", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid expiration format, use ISO 8601 (RFC3339)"})
return
}
expiration = &exp
}
account, err := h.service.CreateServiceAccount(c.Request.Context(), req.ParentUser, req.Policy, expiration)
if err != nil {
h.logger.Error("Failed to create service account", "parent_user", req.ParentUser, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create service account: " + err.Error()})
return
}
c.JSON(http.StatusCreated, account)
}
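The expiration field must be RFC 3339 (a profile of ISO 8601), matching the error message above. For example:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Accepted: full date-time with a zone designator.
	exp, err := time.Parse(time.RFC3339, "2026-01-01T00:00:00Z")
	fmt.Println(exp.UTC(), err)

	// Rejected: a bare date is not RFC 3339.
	_, err = time.Parse(time.RFC3339, "2026-01-01")
	fmt.Println(err != nil) // true
}
```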
// DeleteServiceAccount deletes a service account
func (h *Handler) DeleteServiceAccount(c *gin.Context) {
accessKey := c.Param("access_key")
if err := h.service.DeleteServiceAccount(c.Request.Context(), accessKey); err != nil {
h.logger.Error("Failed to delete service account", "access_key", accessKey, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete service account: " + err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "service account deleted successfully"})
}

View File

@@ -1,297 +0,0 @@
package object_storage
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
madmin "github.com/minio/madmin-go/v3"
)
// Service handles MinIO object storage operations
type Service struct {
client *minio.Client
adminClient *madmin.AdminClient
logger *logger.Logger
endpoint string
accessKey string
secretKey string
}
// NewService creates a new MinIO service
func NewService(endpoint, accessKey, secretKey string, log *logger.Logger) (*Service, error) {
// Create MinIO client
minioClient, err := minio.New(endpoint, &minio.Options{
Creds: credentials.NewStaticV4(accessKey, secretKey, ""),
Secure: false, // Set to true if using HTTPS
})
if err != nil {
return nil, fmt.Errorf("failed to create MinIO client: %w", err)
}
// Create MinIO Admin client
adminClient, err := madmin.New(endpoint, accessKey, secretKey, false)
if err != nil {
return nil, fmt.Errorf("failed to create MinIO admin client: %w", err)
}
return &Service{
client: minioClient,
adminClient: adminClient,
logger: log,
endpoint: endpoint,
accessKey: accessKey,
secretKey: secretKey,
}, nil
}
// Bucket represents a MinIO bucket
type Bucket struct {
Name string `json:"name"`
CreationDate time.Time `json:"creation_date"`
Size int64 `json:"size"` // Total size in bytes
Objects int64 `json:"objects"` // Number of objects
AccessPolicy string `json:"access_policy"` // private, public-read, public-read-write
}
// ListBuckets lists all buckets in MinIO
func (s *Service) ListBuckets(ctx context.Context) ([]*Bucket, error) {
buckets, err := s.client.ListBuckets(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list buckets: %w", err)
}
result := make([]*Bucket, 0, len(buckets))
for _, bucket := range buckets {
bucketInfo, err := s.getBucketInfo(ctx, bucket.Name)
if err != nil {
s.logger.Warn("Failed to get bucket info", "bucket", bucket.Name, "error", err)
// Continue with basic info
result = append(result, &Bucket{
Name: bucket.Name,
CreationDate: bucket.CreationDate,
Size: 0,
Objects: 0,
AccessPolicy: "private",
})
continue
}
result = append(result, bucketInfo)
}
return result, nil
}
// getBucketInfo gets detailed information about a bucket
func (s *Service) getBucketInfo(ctx context.Context, bucketName string) (*Bucket, error) {
// Get bucket creation date
buckets, err := s.client.ListBuckets(ctx)
if err != nil {
return nil, err
}
var creationDate time.Time
for _, b := range buckets {
if b.Name == bucketName {
creationDate = b.CreationDate
break
}
}
// Get bucket size and object count by listing objects
var size int64
var objects int64
// List objects in bucket to calculate size and count
objectCh := s.client.ListObjects(ctx, bucketName, minio.ListObjectsOptions{
Recursive: true,
})
for object := range objectCh {
if object.Err != nil {
s.logger.Warn("Error listing object", "bucket", bucketName, "error", object.Err)
continue
}
objects++
size += object.Size
}
return &Bucket{
Name: bucketName,
CreationDate: creationDate,
Size: size,
Objects: objects,
AccessPolicy: s.getBucketPolicy(ctx, bucketName),
}, nil
}
// getBucketPolicy gets the access policy for a bucket
func (s *Service) getBucketPolicy(ctx context.Context, bucketName string) string {
policy, err := s.client.GetBucketPolicy(ctx, bucketName)
if err != nil {
return "private"
}
// Parse policy JSON to determine access type
// For simplicity, check if policy allows public read
if policy != "" {
// Check if policy contains public read access
if strings.Contains(policy, "s3:GetObject") && strings.Contains(policy, "Principal") && strings.Contains(policy, "*") {
if strings.Contains(policy, "s3:PutObject") {
return "public-read-write"
}
return "public-read"
}
}
return "private"
}
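Substring matching over the policy JSON is an approximation; here is a hedged alternative sketch that decodes the document before inspecting it. It is still simplified, since real bucket policies vary in shape:

```go
package policysketch

import (
	"bytes"
	"encoding/json"
)

// policyDoc keeps Principal and Action raw because either may be a string or
// an object/array in real bucket policies.
type policyDoc struct {
	Statement []struct {
		Effect    string          `json:"Effect"`
		Principal json.RawMessage `json:"Principal"`
		Action    json.RawMessage `json:"Action"`
	} `json:"Statement"`
}

// classifyPolicy mirrors the heuristic above but works on decoded statements:
// public grants must be Allow with a wildcard principal.
func classifyPolicy(policy string) string {
	var doc policyDoc
	if policy == "" || json.Unmarshal([]byte(policy), &doc) != nil {
		return "private"
	}
	read, write := false, false
	for _, st := range doc.Statement {
		if st.Effect != "Allow" || !bytes.Contains(st.Principal, []byte("*")) {
			continue
		}
		if bytes.Contains(st.Action, []byte("s3:GetObject")) {
			read = true
		}
		if bytes.Contains(st.Action, []byte("s3:PutObject")) {
			write = true
		}
	}
	switch {
	case read && write:
		return "public-read-write"
	case read:
		return "public-read"
	default:
		return "private"
	}
}
```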
// CreateBucket creates a new bucket
func (s *Service) CreateBucket(ctx context.Context, bucketName string) error {
err := s.client.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{})
if err != nil {
return fmt.Errorf("failed to create bucket: %w", err)
}
return nil
}
// DeleteBucket deletes a bucket
func (s *Service) DeleteBucket(ctx context.Context, bucketName string) error {
err := s.client.RemoveBucket(ctx, bucketName)
if err != nil {
return fmt.Errorf("failed to delete bucket: %w", err)
}
return nil
}
// GetBucketStats gets statistics for a bucket
func (s *Service) GetBucketStats(ctx context.Context, bucketName string) (*Bucket, error) {
return s.getBucketInfo(ctx, bucketName)
}
// User represents a MinIO IAM user
type User struct {
AccessKey string `json:"access_key"`
Status string `json:"status"` // "enabled" or "disabled"
CreatedAt time.Time `json:"created_at"`
}
// ListUsers lists all IAM users in MinIO
func (s *Service) ListUsers(ctx context.Context) ([]*User, error) {
users, err := s.adminClient.ListUsers(ctx)
if err != nil {
return nil, fmt.Errorf("failed to list users: %w", err)
}
result := make([]*User, 0, len(users))
for accessKey, userInfo := range users {
status := "enabled"
if userInfo.Status == madmin.AccountDisabled {
status = "disabled"
}
// MinIO doesn't provide a creation date, so use the current time
result = append(result, &User{
AccessKey: accessKey,
Status: status,
CreatedAt: time.Now(),
})
}
return result, nil
}
// CreateUser creates a new IAM user in MinIO
func (s *Service) CreateUser(ctx context.Context, accessKey, secretKey string) error {
err := s.adminClient.AddUser(ctx, accessKey, secretKey)
if err != nil {
return fmt.Errorf("failed to create user: %w", err)
}
return nil
}
// DeleteUser deletes an IAM user from MinIO
func (s *Service) DeleteUser(ctx context.Context, accessKey string) error {
err := s.adminClient.RemoveUser(ctx, accessKey)
if err != nil {
return fmt.Errorf("failed to delete user: %w", err)
}
return nil
}
// ServiceAccount represents a MinIO service account (access key)
type ServiceAccount struct {
AccessKey string `json:"access_key"`
SecretKey string `json:"secret_key,omitempty"` // Only returned on creation
ParentUser string `json:"parent_user"`
Expiration time.Time `json:"expiration,omitempty"`
CreatedAt time.Time `json:"created_at"`
}
// ListServiceAccounts lists all service accounts in MinIO
func (s *Service) ListServiceAccounts(ctx context.Context) ([]*ServiceAccount, error) {
accounts, err := s.adminClient.ListServiceAccounts(ctx, "")
if err != nil {
return nil, fmt.Errorf("failed to list service accounts: %w", err)
}
result := make([]*ServiceAccount, 0, len(accounts.Accounts))
for _, account := range accounts.Accounts {
var expiration time.Time
if account.Expiration != nil {
expiration = *account.Expiration
}
result = append(result, &ServiceAccount{
AccessKey: account.AccessKey,
ParentUser: account.ParentUser,
Expiration: expiration,
CreatedAt: time.Now(), // MinIO doesn't provide a creation date
})
}
return result, nil
}
// CreateServiceAccount creates a new service account (access key) in MinIO
func (s *Service) CreateServiceAccount(ctx context.Context, parentUser string, policy string, expiration *time.Time) (*ServiceAccount, error) {
opts := madmin.AddServiceAccountReq{
TargetUser: parentUser,
}
if policy != "" {
opts.Policy = json.RawMessage(policy)
}
if expiration != nil {
opts.Expiration = expiration
}
creds, err := s.adminClient.AddServiceAccount(ctx, opts)
if err != nil {
return nil, fmt.Errorf("failed to create service account: %w", err)
}
return &ServiceAccount{
AccessKey: creds.AccessKey,
SecretKey: creds.SecretKey,
ParentUser: parentUser,
Expiration: creds.Expiration,
CreatedAt: time.Now(),
}, nil
}
// DeleteServiceAccount deletes a service account from MinIO
func (s *Service) DeleteServiceAccount(ctx context.Context, accessKey string) error {
err := s.adminClient.DeleteServiceAccount(ctx, accessKey)
if err != nil {
return fmt.Errorf("failed to delete service account: %w", err)
}
return nil
}

View File

@@ -1,511 +0,0 @@
package object_storage
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
)
// SetupService handles object storage setup operations
type SetupService struct {
db *database.DB
logger *logger.Logger
}
// NewSetupService creates a new setup service
func NewSetupService(db *database.DB, log *logger.Logger) *SetupService {
return &SetupService{
db: db,
logger: log,
}
}
// PoolDatasetInfo represents a pool with its datasets
type PoolDatasetInfo struct {
PoolID string `json:"pool_id"`
PoolName string `json:"pool_name"`
Datasets []DatasetInfo `json:"datasets"`
}
// DatasetInfo represents a dataset that can be used for object storage
type DatasetInfo struct {
ID string `json:"id"`
Name string `json:"name"`
FullName string `json:"full_name"` // pool/dataset
MountPoint string `json:"mount_point"`
Type string `json:"type"`
UsedBytes int64 `json:"used_bytes"`
AvailableBytes int64 `json:"available_bytes"`
}
// GetAvailableDatasets returns all pools with their datasets that can be used for object storage
func (s *SetupService) GetAvailableDatasets(ctx context.Context) ([]PoolDatasetInfo, error) {
// Get all pools
poolsQuery := `
SELECT id, name
FROM zfs_pools
WHERE is_active = true
ORDER BY name
`
rows, err := s.db.QueryContext(ctx, poolsQuery)
if err != nil {
return nil, fmt.Errorf("failed to query pools: %w", err)
}
defer rows.Close()
var pools []PoolDatasetInfo
for rows.Next() {
var pool PoolDatasetInfo
if err := rows.Scan(&pool.PoolID, &pool.PoolName); err != nil {
s.logger.Warn("Failed to scan pool", "error", err)
continue
}
// Get datasets for this pool
datasetsQuery := `
SELECT id, name, type, mount_point, used_bytes, available_bytes
FROM zfs_datasets
WHERE pool_name = $1 AND type = 'filesystem'
ORDER BY name
`
datasetRows, err := s.db.QueryContext(ctx, datasetsQuery, pool.PoolName)
if err != nil {
s.logger.Warn("Failed to query datasets", "pool", pool.PoolName, "error", err)
pool.Datasets = []DatasetInfo{}
pools = append(pools, pool)
continue
}
var datasets []DatasetInfo
for datasetRows.Next() {
var ds DatasetInfo
var mountPoint sql.NullString
if err := datasetRows.Scan(&ds.ID, &ds.Name, &ds.Type, &mountPoint, &ds.UsedBytes, &ds.AvailableBytes); err != nil {
s.logger.Warn("Failed to scan dataset", "error", err)
continue
}
ds.FullName = fmt.Sprintf("%s/%s", pool.PoolName, ds.Name)
if mountPoint.Valid {
ds.MountPoint = mountPoint.String
} else {
ds.MountPoint = ""
}
datasets = append(datasets, ds)
}
datasetRows.Close()
pool.Datasets = datasets
pools = append(pools, pool)
}
return pools, nil
}
// SetupRequest represents a request to setup object storage
type SetupRequest struct {
PoolName string `json:"pool_name" binding:"required"`
DatasetName string `json:"dataset_name" binding:"required"`
CreateNew bool `json:"create_new"` // If true, create new dataset instead of using existing
}
// SetupResponse represents the response after setup
type SetupResponse struct {
DatasetPath string `json:"dataset_path"`
MountPoint string `json:"mount_point"`
Message string `json:"message"`
}
// SetupObjectStorage configures MinIO to use a specific ZFS dataset
func (s *SetupService) SetupObjectStorage(ctx context.Context, req SetupRequest) (*SetupResponse, error) {
var datasetPath, mountPoint string
// Normalize dataset name - if it already contains pool name, use it as-is
var fullDatasetName string
if strings.HasPrefix(req.DatasetName, req.PoolName+"/") {
// Dataset name already includes pool name (e.g., "pool/dataset")
fullDatasetName = req.DatasetName
} else {
// Dataset name is just the name (e.g., "dataset"), combine with pool
fullDatasetName = fmt.Sprintf("%s/%s", req.PoolName, req.DatasetName)
}
if req.CreateNew {
// Create new dataset for object storage
// Check if dataset already exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err == nil {
return nil, fmt.Errorf("dataset %s already exists", fullDatasetName)
}
// Create dataset
createCmd := exec.CommandContext(ctx, "sudo", "zfs", "create", fullDatasetName)
if output, err := createCmd.CombinedOutput(); err != nil {
return nil, fmt.Errorf("failed to create dataset: %s - %w", string(output), err)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Created new dataset for object storage", "dataset", fullDatasetName, "mount_point", mountPoint)
} else {
// Use existing dataset
// fullDatasetName already set above
// Verify dataset exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err != nil {
return nil, fmt.Errorf("dataset %s does not exist", fullDatasetName)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Using existing dataset for object storage", "dataset", fullDatasetName, "mount_point", mountPoint)
}
// Ensure mount point directory exists
if mountPoint != "none" && mountPoint != "" {
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mount point directory: %w", err)
}
} else {
// If no mount point, use default path
mountPoint = filepath.Join("/opt/calypso/data/pool", req.PoolName, req.DatasetName)
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create default directory: %w", err)
}
}
// Update MinIO configuration to use the selected dataset
if err := s.updateMinIOConfig(ctx, mountPoint); err != nil {
s.logger.Warn("Failed to update MinIO configuration", "error", err)
// Continue anyway, configuration is saved to database
}
// Save configuration to database
_, err := s.db.ExecContext(ctx, `
INSERT INTO object_storage_config (dataset_path, mount_point, pool_name, dataset_name, created_at, updated_at)
VALUES ($1, $2, $3, $4, NOW(), NOW())
ON CONFLICT (id) DO UPDATE
SET dataset_path = $1, mount_point = $2, pool_name = $3, dataset_name = $4, updated_at = NOW()
`, datasetPath, mountPoint, req.PoolName, req.DatasetName)
if err != nil {
// If table doesn't exist, just log warning
s.logger.Warn("Failed to save configuration to database (table may not exist)", "error", err)
}
return &SetupResponse{
DatasetPath: datasetPath,
MountPoint: mountPoint,
Message: fmt.Sprintf("Object storage configured to use dataset %s at %s. MinIO service needs to be restarted to use the new dataset.", datasetPath, mountPoint),
}, nil
}
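The existence check and mount-point read above are duplicated verbatim in the update path below. Extracted here as a reference sketch, using the same zfs invocations as the code; the helper name is hypothetical:

```go
package setupsketch

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
)

// datasetMountpoint captures the pair of zfs calls used in both the setup and
// update paths: `zfs list` exits non-zero when the dataset is missing, and
// `zfs get mountpoint` returns its mount point.
func datasetMountpoint(ctx context.Context, dataset string) (string, error) {
	if err := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", dataset).Run(); err != nil {
		return "", fmt.Errorf("dataset %s does not exist", dataset)
	}
	out, err := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", dataset).Output()
	if err != nil {
		return "", fmt.Errorf("failed to get mount point: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}
```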
// GetCurrentSetup returns the current object storage configuration
func (s *SetupService) GetCurrentSetup(ctx context.Context) (*SetupResponse, error) {
// Check if table exists first
var tableExists bool
checkQuery := `
SELECT EXISTS (
SELECT FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name = 'object_storage_config'
)
`
err := s.db.QueryRowContext(ctx, checkQuery).Scan(&tableExists)
if err != nil {
s.logger.Warn("Failed to check if object_storage_config table exists", "error", err)
return nil, nil // Return nil if can't check
}
if !tableExists {
s.logger.Debug("object_storage_config table does not exist")
return nil, nil // No table, no configuration
}
query := `
SELECT dataset_path, mount_point, pool_name, dataset_name
FROM object_storage_config
ORDER BY updated_at DESC
LIMIT 1
`
var resp SetupResponse
var poolName, datasetName string
err = s.db.QueryRowContext(ctx, query).Scan(&resp.DatasetPath, &resp.MountPoint, &poolName, &datasetName)
if err == sql.ErrNoRows {
s.logger.Debug("No configuration found in database")
return nil, nil // No configuration found
}
if err != nil {
// Check if error is due to table not existing or permission denied
errStr := err.Error()
if strings.Contains(errStr, "does not exist") || strings.Contains(errStr, "permission denied") {
s.logger.Debug("Table does not exist or permission denied, returning nil", "error", errStr)
return nil, nil // Return nil instead of error
}
s.logger.Error("Failed to scan current setup", "error", err)
return nil, fmt.Errorf("failed to get current setup: %w", err)
}
s.logger.Debug("Found current setup", "dataset_path", resp.DatasetPath, "mount_point", resp.MountPoint, "pool", poolName, "dataset", datasetName)
// Use dataset_path directly since it already contains the full path
resp.Message = fmt.Sprintf("Using dataset %s at %s", resp.DatasetPath, resp.MountPoint)
return &resp, nil
}
// UpdateObjectStorage updates the object storage configuration to use a different dataset
// This will update the configuration but won't migrate existing data
func (s *SetupService) UpdateObjectStorage(ctx context.Context, req SetupRequest) (*SetupResponse, error) {
// First check if there's existing configuration
currentSetup, err := s.GetCurrentSetup(ctx)
if err != nil {
return nil, fmt.Errorf("failed to check current setup: %w", err)
}
if currentSetup == nil {
// No existing setup, just do normal setup
return s.SetupObjectStorage(ctx, req)
}
// There's existing setup, proceed with update
var datasetPath, mountPoint string
// Normalize dataset name - if it already contains pool name, use it as-is
var fullDatasetName string
if strings.HasPrefix(req.DatasetName, req.PoolName+"/") {
// Dataset name already includes pool name (e.g., "pool/dataset")
fullDatasetName = req.DatasetName
} else {
// Dataset name is just the name (e.g., "dataset"), combine with pool
fullDatasetName = fmt.Sprintf("%s/%s", req.PoolName, req.DatasetName)
}
if req.CreateNew {
// Create new dataset for object storage
// Check if dataset already exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err == nil {
return nil, fmt.Errorf("dataset %s already exists", fullDatasetName)
}
// Create dataset
createCmd := exec.CommandContext(ctx, "sudo", "zfs", "create", fullDatasetName)
if output, err := createCmd.CombinedOutput(); err != nil {
return nil, fmt.Errorf("failed to create dataset: %s - %w", string(output), err)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Created new dataset for object storage update", "dataset", fullDatasetName, "mount_point", mountPoint)
} else {
// Use existing dataset
// fullDatasetName already set above
// Verify dataset exists
checkCmd := exec.CommandContext(ctx, "sudo", "zfs", "list", "-H", "-o", "name", fullDatasetName)
if err := checkCmd.Run(); err != nil {
return nil, fmt.Errorf("dataset %s does not exist", fullDatasetName)
}
// Get mount point
getMountCmd := exec.CommandContext(ctx, "sudo", "zfs", "get", "-H", "-o", "value", "mountpoint", fullDatasetName)
mountOutput, err := getMountCmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get mount point: %w", err)
}
mountPoint = strings.TrimSpace(string(mountOutput))
datasetPath = fullDatasetName
s.logger.Info("Using existing dataset for object storage update", "dataset", fullDatasetName, "mount_point", mountPoint)
}
// Ensure mount point directory exists
if mountPoint != "none" && mountPoint != "" {
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create mount point directory: %w", err)
}
} else {
// If no mount point, use default path
mountPoint = filepath.Join("/opt/calypso/data/pool", req.PoolName, req.DatasetName)
if err := os.MkdirAll(mountPoint, 0755); err != nil {
return nil, fmt.Errorf("failed to create default directory: %w", err)
}
}
// Update configuration in database
_, err = s.db.ExecContext(ctx, `
UPDATE object_storage_config
SET dataset_path = $1, mount_point = $2, pool_name = $3, dataset_name = $4, updated_at = NOW()
WHERE id = (SELECT id FROM object_storage_config ORDER BY updated_at DESC LIMIT 1)
`, datasetPath, mountPoint, req.PoolName, req.DatasetName)
if err != nil {
// If update fails, try insert
_, err = s.db.ExecContext(ctx, `
INSERT INTO object_storage_config (dataset_path, mount_point, pool_name, dataset_name, created_at, updated_at)
VALUES ($1, $2, $3, $4, NOW(), NOW())
ON CONFLICT (dataset_path) DO UPDATE
SET mount_point = $2, pool_name = $3, dataset_name = $4, updated_at = NOW()
`, datasetPath, mountPoint, req.PoolName, req.DatasetName)
if err != nil {
s.logger.Warn("Failed to update configuration in database", "error", err)
}
}
// Update MinIO configuration to use the selected dataset
if err := s.updateMinIOConfig(ctx, mountPoint); err != nil {
s.logger.Warn("Failed to update MinIO configuration", "error", err)
// Continue anyway, configuration is saved to database
} else {
// Restart MinIO service to apply new configuration
if err := s.restartMinIOService(ctx); err != nil {
s.logger.Warn("Failed to restart MinIO service", "error", err)
// Continue anyway, user can restart manually
}
}
return &SetupResponse{
DatasetPath: datasetPath,
MountPoint: mountPoint,
Message: fmt.Sprintf("Object storage updated to use dataset %s at %s. Note: Existing data in previous dataset (%s) is not migrated automatically. MinIO service has been restarted.", datasetPath, mountPoint, currentSetup.DatasetPath),
}, nil
}
// updateMinIOConfig updates MinIO configuration file to use dataset mount point directly
// Note: MinIO erasure coding requires direct directory paths, not symlinks
func (s *SetupService) updateMinIOConfig(ctx context.Context, datasetMountPoint string) error {
configFile := "/opt/calypso/conf/minio/minio.conf"
// Ensure dataset mount point directory exists and has correct ownership
if err := os.MkdirAll(datasetMountPoint, 0755); err != nil {
return fmt.Errorf("failed to create dataset mount point directory: %w", err)
}
// Set ownership to minio-user so MinIO can write to it
if err := exec.CommandContext(ctx, "sudo", "chown", "-R", "minio-user:minio-user", datasetMountPoint).Run(); err != nil {
s.logger.Warn("Failed to set ownership on dataset mount point", "path", datasetMountPoint, "error", err)
// Continue anyway, might already have correct ownership
}
// Set permissions
if err := exec.CommandContext(ctx, "sudo", "chmod", "755", datasetMountPoint).Run(); err != nil {
s.logger.Warn("Failed to set permissions on dataset mount point", "path", datasetMountPoint, "error", err)
}
s.logger.Info("Prepared dataset mount point for MinIO", "path", datasetMountPoint)
// Read current config file
configContent, err := os.ReadFile(configFile)
if err != nil {
// If file doesn't exist, create it
if os.IsNotExist(err) {
configContent = []byte(fmt.Sprintf("MINIO_ROOT_USER=admin\nMINIO_ROOT_PASSWORD=HqBX1IINqFynkWFa\nMINIO_VOLUMES=%s\n", datasetMountPoint))
} else {
return fmt.Errorf("failed to read MinIO config file: %w", err)
}
} else {
// Update MINIO_VOLUMES in config
lines := strings.Split(string(configContent), "\n")
updated := false
for i, line := range lines {
if strings.HasPrefix(strings.TrimSpace(line), "MINIO_VOLUMES=") {
lines[i] = fmt.Sprintf("MINIO_VOLUMES=%s", datasetMountPoint)
updated = true
break
}
}
if !updated {
// Add MINIO_VOLUMES if not found
lines = append(lines, fmt.Sprintf("MINIO_VOLUMES=%s", datasetMountPoint))
}
configContent = []byte(strings.Join(lines, "\n"))
}
// Write updated config using sudo
// Write temp file to a location we can write to
userTempFile := fmt.Sprintf("/tmp/minio.conf.%d.tmp", os.Getpid())
if err := os.WriteFile(userTempFile, configContent, 0644); err != nil {
return fmt.Errorf("failed to write temp config file: %w", err)
}
defer os.Remove(userTempFile) // Cleanup
// Copy temp file to config location with sudo
if err := exec.CommandContext(ctx, "sudo", "cp", userTempFile, configFile).Run(); err != nil {
return fmt.Errorf("failed to update config file: %w", err)
}
// Set proper ownership and permissions
if err := exec.CommandContext(ctx, "sudo", "chown", "minio-user:minio-user", configFile).Run(); err != nil {
s.logger.Warn("Failed to set config file ownership", "error", err)
}
if err := exec.CommandContext(ctx, "sudo", "chmod", "644", configFile).Run(); err != nil {
s.logger.Warn("Failed to set config file permissions", "error", err)
}
s.logger.Info("Updated MinIO configuration", "config_file", configFile, "volumes", datasetMountPoint)
return nil
}
// restartMinIOService restarts the MinIO service to apply new configuration
func (s *SetupService) restartMinIOService(ctx context.Context) error {
// Restart MinIO service using sudo
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "restart", "minio.service")
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to restart MinIO service: %w", err)
}
// Wait a moment for service to start
time.Sleep(2 * time.Second)
// Verify service is running
checkCmd := exec.CommandContext(ctx, "sudo", "systemctl", "is-active", "minio.service")
output, err := checkCmd.Output()
if err != nil {
return fmt.Errorf("failed to check MinIO service status: %w", err)
}
status := strings.TrimSpace(string(output))
if status != "active" {
return fmt.Errorf("MinIO service is not active after restart, status: %s", status)
}
s.logger.Info("MinIO service restarted successfully")
return nil
}

View File

@@ -3,13 +3,11 @@ package scst
import (
"fmt"
"net/http"
"strings"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/atlasos/calypso/internal/tasks"
"github.com/gin-gonic/gin"
"github.com/go-playground/validator/v10"
)
// Handler handles SCST-related API requests
@@ -39,11 +37,6 @@ func (h *Handler) ListTargets(c *gin.Context) {
return
}
// Ensure we return an empty array instead of null
if targets == nil {
targets = []Target{}
}
c.JSON(http.StatusOK, gin.H{"targets": targets})
}
@@ -119,11 +112,6 @@ func (h *Handler) CreateTarget(c *gin.Context) {
return
}
// Set alias to name for frontend compatibility (same as ListTargets)
target.Alias = target.Name
// LUNCount will be 0 for newly created target
target.LUNCount = 0
c.JSON(http.StatusCreated, target)
}
@@ -131,7 +119,7 @@ func (h *Handler) CreateTarget(c *gin.Context) {
type AddLUNRequest struct {
DeviceName string `json:"device_name" binding:"required"`
DevicePath string `json:"device_path" binding:"required"`
LUNNumber int `json:"lun_number"` // Note: cannot use binding:"required" for int as 0 is valid
LUNNumber int `json:"lun_number" binding:"required"`
HandlerType string `json:"handler_type" binding:"required"`
}
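For context on the two variants above: go-playground validator's `required` treats a field as missing when it equals the type's zero value, so a plain int rejects LUN 0, which is exactly the removed comment's point. A pointer field is the usual workaround: nil means absent, while a present 0 validates. Sketch (field shape illustrative, not the handler's actual type):

```go
package scstsketch

// With *int, binding:"required" only fails when the key is absent (nil);
// an explicit 0 passes validation.
type addLUNRequest struct {
	DeviceName  string `json:"device_name" binding:"required"`
	DevicePath  string `json:"device_path" binding:"required"`
	LUNNumber   *int   `json:"lun_number" binding:"required"`
	HandlerType string `json:"handler_type" binding:"required"`
}
```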
@@ -148,45 +136,17 @@ func (h *Handler) AddLUN(c *gin.Context) {
var req AddLUNRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Failed to bind AddLUN request", "error", err)
// Provide more detailed error message
if validationErr, ok := err.(validator.ValidationErrors); ok {
var errorMessages []string
for _, fieldErr := range validationErr {
errorMessages = append(errorMessages, fmt.Sprintf("%s is required", fieldErr.Field()))
}
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("validation failed: %s", strings.Join(errorMessages, ", "))})
} else {
// Extract error message without full struct name
errMsg := err.Error()
if idx := strings.Index(errMsg, "Key: '"); idx >= 0 {
// Extract field name from error message
fieldStart := idx + 6 // Length of "Key: '"
if fieldEnd := strings.Index(errMsg[fieldStart:], "'"); fieldEnd >= 0 {
fieldName := errMsg[fieldStart : fieldStart+fieldEnd]
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid or missing field: %s", fieldName)})
} else {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request format"})
}
} else {
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid request: %v", err)})
}
}
c.JSON(http.StatusBadRequest, gin.H{"error": fmt.Sprintf("invalid request: %v", err)})
return
}
// Validate required fields (additional check in case binding doesn't catch it)
// Validate required fields
if req.DeviceName == "" || req.DevicePath == "" || req.HandlerType == "" {
h.logger.Error("Missing required fields in AddLUN request", "device_name", req.DeviceName, "device_path", req.DevicePath, "handler_type", req.HandlerType)
c.JSON(http.StatusBadRequest, gin.H{"error": "device_name, device_path, and handler_type are required"})
return
}
// Validate LUN number range
if req.LUNNumber < 0 || req.LUNNumber > 255 {
c.JSON(http.StatusBadRequest, gin.H{"error": "lun_number must be between 0 and 255"})
return
}
if err := h.service.AddLUN(c.Request.Context(), target.IQN, req.DeviceName, req.DevicePath, req.LUNNumber, req.HandlerType); err != nil {
h.logger.Error("Failed to add LUN", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
@@ -196,48 +156,6 @@ func (h *Handler) AddLUN(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"message": "LUN added successfully"})
}
// RemoveLUN removes a LUN from a target
func (h *Handler) RemoveLUN(c *gin.Context) {
targetID := c.Param("id")
lunID := c.Param("lunId")
// Get target
target, err := h.service.GetTarget(c.Request.Context(), targetID)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
return
}
// Get LUN to get the LUN number
var lunNumber int
err = h.db.QueryRowContext(c.Request.Context(),
"SELECT lun_number FROM scst_luns WHERE id = $1 AND target_id = $2",
lunID, targetID,
).Scan(&lunNumber)
if err != nil {
if strings.Contains(err.Error(), "no rows") {
// LUN already deleted from the database; its number can no longer be looked up here
// Treat the delete as idempotent and return success
h.logger.Info("LUN not found in database, may already be deleted", "lun_id", lunID, "target_id", targetID)
c.JSON(http.StatusOK, gin.H{"message": "LUN already removed or not found"})
return
}
h.logger.Error("Failed to get LUN", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get LUN"})
return
}
// Remove LUN
if err := h.service.RemoveLUN(c.Request.Context(), target.IQN, lunNumber); err != nil {
h.logger.Error("Failed to remove LUN", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "LUN removed successfully"})
}
// AddInitiatorRequest represents an initiator addition request
type AddInitiatorRequest struct {
InitiatorIQN string `json:"initiator_iqn" binding:"required"`
@@ -268,45 +186,6 @@ func (h *Handler) AddInitiator(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"message": "Initiator added successfully"})
}
// AddInitiatorToGroupRequest represents a request to add an initiator to a group
type AddInitiatorToGroupRequest struct {
InitiatorIQN string `json:"initiator_iqn" binding:"required"`
}
// AddInitiatorToGroup adds an initiator to a specific group
func (h *Handler) AddInitiatorToGroup(c *gin.Context) {
groupID := c.Param("id")
var req AddInitiatorToGroupRequest
if err := c.ShouldBindJSON(&req); err != nil {
validationErrors := make(map[string]string)
if ve, ok := err.(validator.ValidationErrors); ok {
for _, fe := range ve {
field := strings.ToLower(fe.Field())
validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
}
}
c.JSON(http.StatusBadRequest, gin.H{
"error": "invalid request",
"validation_errors": validationErrors,
})
return
}
err := h.service.AddInitiatorToGroup(c.Request.Context(), groupID, req.InitiatorIQN)
if err != nil {
if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "already exists") || strings.Contains(err.Error(), "single initiator only") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to add initiator to group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to add initiator to group"})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Initiator added to group successfully"})
}
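This validator-to-map translation appears again in CreateInitiatorGroup and UpdateInitiatorGroup below. A sketch of a shared helper capturing the pattern; the name is hypothetical:

```go
package scstsketch

import (
	"errors"
	"fmt"
	"strings"

	"github.com/go-playground/validator/v10"
)

// bindingErrors flattens validator.ValidationErrors into the field->message
// map the handlers build inline; other error kinds yield an empty map.
func bindingErrors(err error) map[string]string {
	out := make(map[string]string)
	var ve validator.ValidationErrors
	if errors.As(err, &ve) {
		for _, fe := range ve {
			field := strings.ToLower(fe.Field())
			out[field] = fmt.Sprintf("Field '%s' is required", field)
		}
	}
	return out
}
```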
// ListAllInitiators lists all initiators across all targets
func (h *Handler) ListAllInitiators(c *gin.Context) {
initiators, err := h.service.ListAllInitiators(c.Request.Context())
@@ -561,23 +440,6 @@ func (h *Handler) DisableTarget(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"message": "Target disabled successfully"})
}
// DeleteTarget deletes a target
func (h *Handler) DeleteTarget(c *gin.Context) {
targetID := c.Param("id")
if err := h.service.DeleteTarget(c.Request.Context(), targetID); err != nil {
if err.Error() == "target not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "target not found"})
return
}
h.logger.Error("Failed to delete target", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "Target deleted successfully"})
}
// DeletePortal deletes a portal
func (h *Handler) DeletePortal(c *gin.Context) {
id := c.Param("id")
@@ -612,182 +474,3 @@ func (h *Handler) GetPortal(c *gin.Context) {
c.JSON(http.StatusOK, portal)
}
// CreateInitiatorGroupRequest represents a request to create an initiator group
type CreateInitiatorGroupRequest struct {
TargetID string `json:"target_id" binding:"required"`
GroupName string `json:"group_name" binding:"required"`
}
// CreateInitiatorGroup creates a new initiator group
func (h *Handler) CreateInitiatorGroup(c *gin.Context) {
var req CreateInitiatorGroupRequest
if err := c.ShouldBindJSON(&req); err != nil {
validationErrors := make(map[string]string)
if ve, ok := err.(validator.ValidationErrors); ok {
for _, fe := range ve {
field := strings.ToLower(fe.Field())
validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
}
}
c.JSON(http.StatusBadRequest, gin.H{
"error": "invalid request",
"validation_errors": validationErrors,
})
return
}
group, err := h.service.CreateInitiatorGroup(c.Request.Context(), req.TargetID, req.GroupName)
if err != nil {
if strings.Contains(err.Error(), "already exists") || strings.Contains(err.Error(), "not found") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to create initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to create initiator group"})
return
}
c.JSON(http.StatusOK, group)
}
// UpdateInitiatorGroupRequest represents a request to update an initiator group
type UpdateInitiatorGroupRequest struct {
GroupName string `json:"group_name" binding:"required"`
}
// UpdateInitiatorGroup updates an initiator group
func (h *Handler) UpdateInitiatorGroup(c *gin.Context) {
groupID := c.Param("id")
var req UpdateInitiatorGroupRequest
if err := c.ShouldBindJSON(&req); err != nil {
validationErrors := make(map[string]string)
if ve, ok := err.(validator.ValidationErrors); ok {
for _, fe := range ve {
field := strings.ToLower(fe.Field())
validationErrors[field] = fmt.Sprintf("Field '%s' is required", field)
}
}
c.JSON(http.StatusBadRequest, gin.H{
"error": "invalid request",
"validation_errors": validationErrors,
})
return
}
group, err := h.service.UpdateInitiatorGroup(c.Request.Context(), groupID, req.GroupName)
if err != nil {
if strings.Contains(err.Error(), "not found") || strings.Contains(err.Error(), "already exists") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to update initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to update initiator group"})
return
}
c.JSON(http.StatusOK, group)
}
// DeleteInitiatorGroup deletes an initiator group
func (h *Handler) DeleteInitiatorGroup(c *gin.Context) {
groupID := c.Param("id")
err := h.service.DeleteInitiatorGroup(c.Request.Context(), groupID)
if err != nil {
if strings.Contains(err.Error(), "not found") {
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
return
}
if strings.Contains(err.Error(), "cannot delete") || strings.Contains(err.Error(), "contains") {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to delete initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to delete initiator group"})
return
}
c.JSON(http.StatusOK, gin.H{"message": "initiator group deleted successfully"})
}
// GetInitiatorGroup retrieves an initiator group by ID
func (h *Handler) GetInitiatorGroup(c *gin.Context) {
groupID := c.Param("id")
group, err := h.service.GetInitiatorGroup(c.Request.Context(), groupID)
if err != nil {
if strings.Contains(err.Error(), "not found") {
c.JSON(http.StatusNotFound, gin.H{"error": "initiator group not found"})
return
}
h.logger.Error("Failed to get initiator group", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get initiator group"})
return
}
c.JSON(http.StatusOK, group)
}
// ListAllInitiatorGroups lists all initiator groups
func (h *Handler) ListAllInitiatorGroups(c *gin.Context) {
groups, err := h.service.ListAllInitiatorGroups(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list initiator groups", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list initiator groups"})
return
}
if groups == nil {
groups = []InitiatorGroup{}
}
c.JSON(http.StatusOK, gin.H{"groups": groups})
}
// GetConfigFile reads the SCST configuration file content
func (h *Handler) GetConfigFile(c *gin.Context) {
configPath := c.DefaultQuery("path", "/etc/scst.conf")
content, err := h.service.ReadConfigFile(c.Request.Context(), configPath)
if err != nil {
h.logger.Error("Failed to read config file", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"content": content,
"path": configPath,
})
}
// UpdateConfigFile writes content to SCST configuration file
func (h *Handler) UpdateConfigFile(c *gin.Context) {
var req struct {
Content string `json:"content" binding:"required"`
Path string `json:"path"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request"})
return
}
configPath := req.Path
if configPath == "" {
configPath = "/etc/scst.conf"
}
if err := h.service.WriteConfigFile(c.Request.Context(), configPath, req.Content); err != nil {
h.logger.Error("Failed to write config file", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"message": "Configuration file updated successfully",
"path": configPath,
})
}
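
Note that UpdateConfigFile (and GetConfigFile above) accept any path the client supplies, which amounts to arbitrary file read/write as the service user. If only specific files are meant to be editable, a guard along these lines could run before the read/write calls (a sketch; the allow-list contents are an assumption, not part of the commit):

```go
import "path/filepath"

// allowedConfigPaths is a hypothetical allow-list of editable files.
var allowedConfigPaths = map[string]bool{
	"/etc/scst.conf": true,
}

// isAllowedConfigPath normalizes the path and checks the allow-list.
func isAllowedConfigPath(path string) bool {
	return allowedConfigPaths[filepath.Clean(path)]
}
```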


@@ -1,147 +0,0 @@
package shares
import (
"net/http"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/gin-gonic/gin"
"github.com/go-playground/validator/v10"
)
// Handler handles Shares-related API requests
type Handler struct {
service *Service
logger *logger.Logger
}
// NewHandler creates a new Shares handler
func NewHandler(db *database.DB, log *logger.Logger) *Handler {
return &Handler{
service: NewService(db, log),
logger: log,
}
}
// ListShares lists all shares
func (h *Handler) ListShares(c *gin.Context) {
shares, err := h.service.ListShares(c.Request.Context())
if err != nil {
h.logger.Error("Failed to list shares", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to list shares"})
return
}
// Ensure we return an empty array instead of null
if shares == nil {
shares = []*Share{}
}
c.JSON(http.StatusOK, gin.H{"shares": shares})
}
// GetShare retrieves a share by ID
func (h *Handler) GetShare(c *gin.Context) {
shareID := c.Param("id")
share, err := h.service.GetShare(c.Request.Context(), shareID)
if err != nil {
if err.Error() == "share not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
return
}
h.logger.Error("Failed to get share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get share"})
return
}
c.JSON(http.StatusOK, share)
}
// CreateShare creates a new share
func (h *Handler) CreateShare(c *gin.Context) {
var req CreateShareRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid create share request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
// Validate request
validate := validator.New()
if err := validate.Struct(req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "validation failed: " + err.Error()})
return
}
// Get user ID from context (set by auth middleware)
userID, exists := c.Get("user_id")
if !exists {
c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
return
}
share, err := h.service.CreateShare(c.Request.Context(), &req, userID.(string))
if err != nil {
if err.Error() == "dataset not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "dataset not found"})
return
}
if err.Error() == "only filesystem datasets can be shared (not volumes)" {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
if err.Error() == "at least one protocol (NFS or SMB) must be enabled" {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
h.logger.Error("Failed to create share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, share)
}
// UpdateShare updates an existing share
func (h *Handler) UpdateShare(c *gin.Context) {
shareID := c.Param("id")
var req UpdateShareRequest
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid update share request", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request: " + err.Error()})
return
}
share, err := h.service.UpdateShare(c.Request.Context(), shareID, &req)
if err != nil {
if err.Error() == "share not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
return
}
h.logger.Error("Failed to update share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, share)
}
// DeleteShare deletes a share
func (h *Handler) DeleteShare(c *gin.Context) {
shareID := c.Param("id")
err := h.service.DeleteShare(c.Request.Context(), shareID)
if err != nil {
if err.Error() == "share not found" {
c.JSON(http.StatusNotFound, gin.H{"error": "share not found"})
return
}
h.logger.Error("Failed to delete share", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "share deleted successfully"})
}


@@ -1,806 +0,0 @@
package shares
import (
"context"
"database/sql"
"fmt"
"os"
"os/exec"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/database"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/lib/pq"
)
// Service handles Shares (CIFS/NFS) operations
type Service struct {
db *database.DB
logger *logger.Logger
}
// NewService creates a new Shares service
func NewService(db *database.DB, log *logger.Logger) *Service {
return &Service{
db: db,
logger: log,
}
}
// Share represents a filesystem share (NFS/SMB)
type Share struct {
ID string `json:"id"`
DatasetID string `json:"dataset_id"`
DatasetName string `json:"dataset_name"`
MountPoint string `json:"mount_point"`
ShareType string `json:"share_type"` // 'nfs', 'smb', 'both'
NFSEnabled bool `json:"nfs_enabled"`
NFSOptions string `json:"nfs_options,omitempty"`
NFSClients []string `json:"nfs_clients,omitempty"`
SMBEnabled bool `json:"smb_enabled"`
SMBShareName string `json:"smb_share_name,omitempty"`
SMBPath string `json:"smb_path,omitempty"`
SMBComment string `json:"smb_comment,omitempty"`
SMBGuestOK bool `json:"smb_guest_ok"`
SMBReadOnly bool `json:"smb_read_only"`
SMBBrowseable bool `json:"smb_browseable"`
IsActive bool `json:"is_active"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
CreatedBy string `json:"created_by"`
}
// ListShares lists all shares
func (s *Service) ListShares(ctx context.Context) ([]*Share, error) {
query := `
SELECT
zs.id, zs.dataset_id, zd.name as dataset_name, zd.mount_point,
zs.share_type, zs.nfs_enabled, zs.nfs_options, zs.nfs_clients,
zs.smb_enabled, zs.smb_share_name, zs.smb_path, zs.smb_comment,
zs.smb_guest_ok, zs.smb_read_only, zs.smb_browseable,
zs.is_active, zs.created_at, zs.updated_at, zs.created_by
FROM zfs_shares zs
JOIN zfs_datasets zd ON zs.dataset_id = zd.id
ORDER BY zd.name
`
rows, err := s.db.QueryContext(ctx, query)
if err != nil {
if strings.Contains(err.Error(), "does not exist") {
s.logger.Warn("zfs_shares table does not exist, returning empty list")
return []*Share{}, nil
}
return nil, fmt.Errorf("failed to list shares: %w", err)
}
defer rows.Close()
var shares []*Share
for rows.Next() {
var share Share
var mountPoint sql.NullString
var nfsOptions sql.NullString
var smbShareName sql.NullString
var smbPath sql.NullString
var smbComment sql.NullString
var nfsClients []string
err := rows.Scan(
&share.ID, &share.DatasetID, &share.DatasetName, &mountPoint,
&share.ShareType, &share.NFSEnabled, &nfsOptions, pq.Array(&nfsClients),
&share.SMBEnabled, &smbShareName, &smbPath, &smbComment,
&share.SMBGuestOK, &share.SMBReadOnly, &share.SMBBrowseable,
&share.IsActive, &share.CreatedAt, &share.UpdatedAt, &share.CreatedBy,
)
if err != nil {
s.logger.Error("Failed to scan share row", "error", err)
continue
}
share.NFSClients = nfsClients
if mountPoint.Valid {
share.MountPoint = mountPoint.String
}
if nfsOptions.Valid {
share.NFSOptions = nfsOptions.String
}
if smbShareName.Valid {
share.SMBShareName = smbShareName.String
}
if smbPath.Valid {
share.SMBPath = smbPath.String
}
if smbComment.Valid {
share.SMBComment = smbComment.String
}
shares = append(shares, &share)
}
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("error iterating share rows: %w", err)
}
return shares, nil
}
// GetShare retrieves a share by ID
func (s *Service) GetShare(ctx context.Context, shareID string) (*Share, error) {
query := `
SELECT
zs.id, zs.dataset_id, zd.name as dataset_name, zd.mount_point,
zs.share_type, zs.nfs_enabled, zs.nfs_options, zs.nfs_clients,
zs.smb_enabled, zs.smb_share_name, zs.smb_path, zs.smb_comment,
zs.smb_guest_ok, zs.smb_read_only, zs.smb_browseable,
zs.is_active, zs.created_at, zs.updated_at, zs.created_by
FROM zfs_shares zs
JOIN zfs_datasets zd ON zs.dataset_id = zd.id
WHERE zs.id = $1
`
var share Share
var mountPoint sql.NullString
var nfsOptions sql.NullString
var smbShareName sql.NullString
var smbPath sql.NullString
var smbComment sql.NullString
var nfsClients []string
err := s.db.QueryRowContext(ctx, query, shareID).Scan(
&share.ID, &share.DatasetID, &share.DatasetName, &mountPoint,
&share.ShareType, &share.NFSEnabled, &nfsOptions, pq.Array(&nfsClients),
&share.SMBEnabled, &smbShareName, &smbPath, &smbComment,
&share.SMBGuestOK, &share.SMBReadOnly, &share.SMBBrowseable,
&share.IsActive, &share.CreatedAt, &share.UpdatedAt, &share.CreatedBy,
)
if err != nil {
if err == sql.ErrNoRows {
return nil, fmt.Errorf("share not found")
}
return nil, fmt.Errorf("failed to get share: %w", err)
}
share.NFSClients = nfsClients
if mountPoint.Valid {
share.MountPoint = mountPoint.String
}
if nfsOptions.Valid {
share.NFSOptions = nfsOptions.String
}
if smbShareName.Valid {
share.SMBShareName = smbShareName.String
}
if smbPath.Valid {
share.SMBPath = smbPath.String
}
if smbComment.Valid {
share.SMBComment = smbComment.String
}
return &share, nil
}
// CreateShareRequest represents a share creation request
type CreateShareRequest struct {
DatasetID string `json:"dataset_id" binding:"required"`
NFSEnabled bool `json:"nfs_enabled"`
NFSOptions string `json:"nfs_options"`
NFSClients []string `json:"nfs_clients"`
SMBEnabled bool `json:"smb_enabled"`
SMBShareName string `json:"smb_share_name"`
SMBPath string `json:"smb_path"`
SMBComment string `json:"smb_comment"`
SMBGuestOK bool `json:"smb_guest_ok"`
SMBReadOnly bool `json:"smb_read_only"`
SMBBrowseable bool `json:"smb_browseable"`
}
// CreateShare creates a new share
func (s *Service) CreateShare(ctx context.Context, req *CreateShareRequest, userID string) (*Share, error) {
// Validate dataset exists and is a filesystem (not volume)
// req.DatasetID can be either UUID or dataset name
var datasetID, datasetType, datasetName, mountPoint string
var mountPointNull sql.NullString
// Try to find by ID first (UUID)
err := s.db.QueryRowContext(ctx,
"SELECT id, type, name, mount_point FROM zfs_datasets WHERE id = $1",
req.DatasetID,
).Scan(&datasetID, &datasetType, &datasetName, &mountPointNull)
// If not found by ID, try by name
if err == sql.ErrNoRows {
err = s.db.QueryRowContext(ctx,
"SELECT id, type, name, mount_point FROM zfs_datasets WHERE name = $1",
req.DatasetID,
).Scan(&datasetID, &datasetType, &datasetName, &mountPointNull)
}
if err != nil {
if err == sql.ErrNoRows {
return nil, fmt.Errorf("dataset not found")
}
return nil, fmt.Errorf("failed to validate dataset: %w", err)
}
if mountPointNull.Valid {
mountPoint = mountPointNull.String
} else {
mountPoint = "none"
}
if datasetType != "filesystem" {
return nil, fmt.Errorf("only filesystem datasets can be shared (not volumes)")
}
// Determine share type
shareType := "none"
if req.NFSEnabled && req.SMBEnabled {
shareType = "both"
} else if req.NFSEnabled {
shareType = "nfs"
} else if req.SMBEnabled {
shareType = "smb"
} else {
return nil, fmt.Errorf("at least one protocol (NFS or SMB) must be enabled")
}
// Set default NFS options if not provided
nfsOptions := req.NFSOptions
if nfsOptions == "" {
nfsOptions = "rw,sync,no_subtree_check"
}
// Set default SMB share name if not provided
smbShareName := req.SMBShareName
if smbShareName == "" {
// Extract dataset name from full path (e.g., "pool/dataset" -> "dataset")
parts := strings.Split(datasetName, "/")
smbShareName = parts[len(parts)-1]
}
// Set SMB path (use mount_point if available, otherwise use dataset name)
smbPath := req.SMBPath
if smbPath == "" {
if mountPoint != "" && mountPoint != "none" {
smbPath = mountPoint
} else {
smbPath = fmt.Sprintf("/mnt/%s", strings.ReplaceAll(datasetName, "/", "_"))
}
}
// Insert into database
query := `
INSERT INTO zfs_shares (
dataset_id, share_type, nfs_enabled, nfs_options, nfs_clients,
smb_enabled, smb_share_name, smb_path, smb_comment,
smb_guest_ok, smb_read_only, smb_browseable, is_active, created_by
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14)
RETURNING id, created_at, updated_at
`
var shareID string
var createdAt, updatedAt time.Time
// Handle nfs_clients array - use empty array if nil
nfsClients := req.NFSClients
if nfsClients == nil {
nfsClients = []string{}
}
err = s.db.QueryRowContext(ctx, query,
datasetID, shareType, req.NFSEnabled, nfsOptions, pq.Array(nfsClients),
req.SMBEnabled, smbShareName, smbPath, req.SMBComment,
req.SMBGuestOK, req.SMBReadOnly, req.SMBBrowseable, true, userID,
).Scan(&shareID, &createdAt, &updatedAt)
if err != nil {
return nil, fmt.Errorf("failed to create share: %w", err)
}
// Apply NFS export if enabled
if req.NFSEnabled {
if err := s.applyNFSExport(ctx, mountPoint, nfsOptions, req.NFSClients); err != nil {
s.logger.Error("Failed to apply NFS export", "error", err, "share_id", shareID)
// Don't fail the creation, but log the error
}
}
// Apply SMB share if enabled
if req.SMBEnabled {
if err := s.applySMBShare(ctx, smbShareName, smbPath, req.SMBComment, req.SMBGuestOK, req.SMBReadOnly, req.SMBBrowseable); err != nil {
s.logger.Error("Failed to apply SMB share", "error", err, "share_id", shareID)
// Don't fail the creation, but log the error
}
}
// Return the created share
return s.GetShare(ctx, shareID)
}
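
To make the defaulting above concrete: for a hypothetical filesystem dataset tank/media mounted at /tank/media, a request that only names the dataset and enables SMB is enough, and the service derives the rest:

```go
req := &CreateShareRequest{
	DatasetID:  "tank/media", // looked up by UUID first, then by name
	SMBEnabled: true,
}
// CreateShare then derives:
//   shareType    = "smb"
//   smbShareName = "media"       // last segment of the dataset name
//   smbPath      = "/tank/media" // the dataset's mount point
//   nfsOptions   = "rw,sync,no_subtree_check" // default (NFS stays off)
```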
// UpdateShareRequest represents a share update request
type UpdateShareRequest struct {
NFSEnabled *bool `json:"nfs_enabled"`
NFSOptions *string `json:"nfs_options"`
NFSClients *[]string `json:"nfs_clients"`
SMBEnabled *bool `json:"smb_enabled"`
SMBShareName *string `json:"smb_share_name"`
SMBComment *string `json:"smb_comment"`
SMBGuestOK *bool `json:"smb_guest_ok"`
SMBReadOnly *bool `json:"smb_read_only"`
SMBBrowseable *bool `json:"smb_browseable"`
IsActive *bool `json:"is_active"`
}
// UpdateShare updates an existing share
func (s *Service) UpdateShare(ctx context.Context, shareID string, req *UpdateShareRequest) (*Share, error) {
// Get current share
share, err := s.GetShare(ctx, shareID)
if err != nil {
return nil, err
}
// Build update query dynamically
updates := []string{}
args := []interface{}{}
argIndex := 1
if req.NFSEnabled != nil {
updates = append(updates, fmt.Sprintf("nfs_enabled = $%d", argIndex))
args = append(args, *req.NFSEnabled)
argIndex++
}
if req.NFSOptions != nil {
updates = append(updates, fmt.Sprintf("nfs_options = $%d", argIndex))
args = append(args, *req.NFSOptions)
argIndex++
}
if req.NFSClients != nil {
updates = append(updates, fmt.Sprintf("nfs_clients = $%d", argIndex))
args = append(args, pq.Array(*req.NFSClients))
argIndex++
}
if req.SMBEnabled != nil {
updates = append(updates, fmt.Sprintf("smb_enabled = $%d", argIndex))
args = append(args, *req.SMBEnabled)
argIndex++
}
if req.SMBShareName != nil {
updates = append(updates, fmt.Sprintf("smb_share_name = $%d", argIndex))
args = append(args, *req.SMBShareName)
argIndex++
}
if req.SMBComment != nil {
updates = append(updates, fmt.Sprintf("smb_comment = $%d", argIndex))
args = append(args, *req.SMBComment)
argIndex++
}
if req.SMBGuestOK != nil {
updates = append(updates, fmt.Sprintf("smb_guest_ok = $%d", argIndex))
args = append(args, *req.SMBGuestOK)
argIndex++
}
if req.SMBReadOnly != nil {
updates = append(updates, fmt.Sprintf("smb_read_only = $%d", argIndex))
args = append(args, *req.SMBReadOnly)
argIndex++
}
if req.SMBBrowseable != nil {
updates = append(updates, fmt.Sprintf("smb_browseable = $%d", argIndex))
args = append(args, *req.SMBBrowseable)
argIndex++
}
if req.IsActive != nil {
updates = append(updates, fmt.Sprintf("is_active = $%d", argIndex))
args = append(args, *req.IsActive)
argIndex++
}
if len(updates) == 0 {
return share, nil // No changes
}
// Update share_type based on enabled protocols
nfsEnabled := share.NFSEnabled
smbEnabled := share.SMBEnabled
if req.NFSEnabled != nil {
nfsEnabled = *req.NFSEnabled
}
if req.SMBEnabled != nil {
smbEnabled = *req.SMBEnabled
}
shareType := "none"
if nfsEnabled && smbEnabled {
shareType = "both"
} else if nfsEnabled {
shareType = "nfs"
} else if smbEnabled {
shareType = "smb"
}
updates = append(updates, fmt.Sprintf("share_type = $%d", argIndex))
args = append(args, shareType)
argIndex++
updates = append(updates, "updated_at = NOW()") // constant string; no printf-style call needed
args = append(args, shareID)
query := fmt.Sprintf(`
UPDATE zfs_shares
SET %s
WHERE id = $%d
`, strings.Join(updates, ", "), argIndex)
_, err = s.db.ExecContext(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("failed to update share: %w", err)
}
// Re-apply NFS export if NFS is enabled
if nfsEnabled {
nfsOptions := share.NFSOptions
if req.NFSOptions != nil {
nfsOptions = *req.NFSOptions
}
nfsClients := share.NFSClients
if req.NFSClients != nil {
nfsClients = *req.NFSClients
}
if err := s.applyNFSExport(ctx, share.MountPoint, nfsOptions, nfsClients); err != nil {
s.logger.Error("Failed to apply NFS export", "error", err, "share_id", shareID)
}
} else {
// Remove NFS export if disabled
if err := s.removeNFSExport(ctx, share.MountPoint); err != nil {
s.logger.Error("Failed to remove NFS export", "error", err, "share_id", shareID)
}
}
// Re-apply SMB share if SMB is enabled
if smbEnabled {
smbShareName := share.SMBShareName
if req.SMBShareName != nil {
smbShareName = *req.SMBShareName
}
smbPath := share.SMBPath
smbComment := share.SMBComment
if req.SMBComment != nil {
smbComment = *req.SMBComment
}
smbGuestOK := share.SMBGuestOK
if req.SMBGuestOK != nil {
smbGuestOK = *req.SMBGuestOK
}
smbReadOnly := share.SMBReadOnly
if req.SMBReadOnly != nil {
smbReadOnly = *req.SMBReadOnly
}
smbBrowseable := share.SMBBrowseable
if req.SMBBrowseable != nil {
smbBrowseable = *req.SMBBrowseable
}
if err := s.applySMBShare(ctx, smbShareName, smbPath, smbComment, smbGuestOK, smbReadOnly, smbBrowseable); err != nil {
s.logger.Error("Failed to apply SMB share", "error", err, "share_id", shareID)
}
} else {
// Remove SMB share if disabled
if err := s.removeSMBShare(ctx, share.SMBShareName); err != nil {
s.logger.Error("Failed to remove SMB share", "error", err, "share_id", shareID)
}
}
return s.GetShare(ctx, shareID)
}
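
As a worked example of the dynamic builder: a request carrying only nfs_enabled and is_active yields the argument list [nfsEnabled, isActive, shareType, shareID] and this statement (share_type and updated_at are always appended):

```sql
UPDATE zfs_shares
SET nfs_enabled = $1, is_active = $2, share_type = $3, updated_at = NOW()
WHERE id = $4
```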
// DeleteShare deletes a share
func (s *Service) DeleteShare(ctx context.Context, shareID string) error {
// Get share to get mount point and share name
share, err := s.GetShare(ctx, shareID)
if err != nil {
return err
}
// Remove NFS export
if share.NFSEnabled {
if err := s.removeNFSExport(ctx, share.MountPoint); err != nil {
s.logger.Error("Failed to remove NFS export", "error", err, "share_id", shareID)
}
}
// Remove SMB share
if share.SMBEnabled {
if err := s.removeSMBShare(ctx, share.SMBShareName); err != nil {
s.logger.Error("Failed to remove SMB share", "error", err, "share_id", shareID)
}
}
// Delete from database
_, err = s.db.ExecContext(ctx, "DELETE FROM zfs_shares WHERE id = $1", shareID)
if err != nil {
return fmt.Errorf("failed to delete share: %w", err)
}
return nil
}
// applyNFSExport adds or updates an NFS export in /etc/exports
func (s *Service) applyNFSExport(ctx context.Context, mountPoint, options string, clients []string) error {
if mountPoint == "" || mountPoint == "none" {
return fmt.Errorf("mount point is required for NFS export")
}
// Build client list (default to * if empty)
clientList := "*"
if len(clients) > 0 {
clientList = strings.Join(clients, " ")
}
// Build export line
exportLine := fmt.Sprintf("%s %s(%s)", mountPoint, clientList, options)
// Read current /etc/exports
exportsPath := "/etc/exports"
exportsContent, err := os.ReadFile(exportsPath)
if err != nil && !os.IsNotExist(err) {
return fmt.Errorf("failed to read exports file: %w", err)
}
lines := strings.Split(string(exportsContent), "\n")
var newLines []string
found := false
// Check if this mount point already exists
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
newLines = append(newLines, line)
continue
}
// Check if this line is for our mount point
if strings.HasPrefix(line, mountPoint+" ") {
newLines = append(newLines, exportLine)
found = true
} else {
newLines = append(newLines, line)
}
}
// Add if not found
if !found {
newLines = append(newLines, exportLine)
}
// Write back to file
newContent := strings.Join(newLines, "\n") + "\n"
if err := os.WriteFile(exportsPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write exports file: %w", err)
}
// Apply exports
cmd := exec.CommandContext(ctx, "sudo", "exportfs", "-ra")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to apply exports: %s: %w", string(output), err)
}
s.logger.Info("NFS export applied", "mount_point", mountPoint, "clients", clientList)
return nil
}
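
One caveat in the export line built above: with more than one client, `strings.Join(clients, " ")` yields e.g. `/tank/media 10.0.0.1 10.0.0.2(rw,sync,no_subtree_check)`, and per exports(5) the parenthesised options bind only to the last host; earlier hosts fall back to the defaults. A sketch that attaches the same options to every client (buildExportLine is a hypothetical helper):

```go
// buildExportLine gives each client its own option suffix so the
// options apply to every host rather than only the last one.
func buildExportLine(mountPoint, options string, clients []string) string {
	if len(clients) == 0 {
		return fmt.Sprintf("%s *(%s)", mountPoint, options)
	}
	entries := make([]string, 0, len(clients))
	for _, client := range clients {
		entries = append(entries, fmt.Sprintf("%s(%s)", client, options))
	}
	return fmt.Sprintf("%s %s", mountPoint, strings.Join(entries, " "))
}
```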
// removeNFSExport removes an NFS export from /etc/exports
func (s *Service) removeNFSExport(ctx context.Context, mountPoint string) error {
if mountPoint == "" || mountPoint == "none" {
return nil // Nothing to remove
}
exportsPath := "/etc/exports"
exportsContent, err := os.ReadFile(exportsPath)
if err != nil {
if os.IsNotExist(err) {
return nil // File doesn't exist, nothing to remove
}
return fmt.Errorf("failed to read exports file: %w", err)
}
lines := strings.Split(string(exportsContent), "\n")
var newLines []string
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" || strings.HasPrefix(line, "#") {
newLines = append(newLines, line)
continue
}
// Skip lines for this mount point
if strings.HasPrefix(line, mountPoint+" ") {
continue
}
newLines = append(newLines, line)
}
// Write back to file
newContent := strings.Join(newLines, "\n")
if newContent != "" && !strings.HasSuffix(newContent, "\n") {
newContent += "\n"
}
if err := os.WriteFile(exportsPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write exports file: %w", err)
}
// Apply exports
cmd := exec.CommandContext(ctx, "sudo", "exportfs", "-ra")
if output, err := cmd.CombinedOutput(); err != nil {
return fmt.Errorf("failed to apply exports: %s: %w", string(output), err)
}
s.logger.Info("NFS export removed", "mount_point", mountPoint)
return nil
}
// applySMBShare adds or updates an SMB share in /etc/samba/smb.conf
func (s *Service) applySMBShare(ctx context.Context, shareName, path, comment string, guestOK, readOnly, browseable bool) error {
if shareName == "" {
return fmt.Errorf("SMB share name is required")
}
if path == "" {
return fmt.Errorf("SMB path is required")
}
smbConfPath := "/etc/samba/smb.conf"
smbContent, err := os.ReadFile(smbConfPath)
if err != nil {
return fmt.Errorf("failed to read smb.conf: %w", err)
}
// Parse and update smb.conf
lines := strings.Split(string(smbContent), "\n")
var newLines []string
inShare := false
shareStart := -1
for i, line := range lines {
trimmed := strings.TrimSpace(line)
// Check if we're entering our share section
if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
sectionName := trimmed[1 : len(trimmed)-1]
if sectionName == shareName {
inShare = true
shareStart = i
continue
} else if inShare {
// We've left our share section, insert the share config here
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
inShare = false
}
}
if inShare {
// Skip lines until we find the next section or end of file
continue
}
newLines = append(newLines, line)
}
// If we were still in the share at the end, add it
if inShare {
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
} else if shareStart == -1 {
// Share doesn't exist, add it at the end
newLines = append(newLines, "")
newLines = append(newLines, s.buildSMBShareConfig(shareName, path, comment, guestOK, readOnly, browseable))
}
// Write back to file
newContent := strings.Join(newLines, "\n")
if err := os.WriteFile(smbConfPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write smb.conf: %w", err)
}
// Reload Samba
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "reload", "smbd")
if output, err := cmd.CombinedOutput(); err != nil {
// Try restart if reload fails
cmd = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "smbd")
if output2, err2 := cmd.CombinedOutput(); err2 != nil {
return fmt.Errorf("failed to reload/restart smbd: %s / %s: %w", string(output), string(output2), err2)
}
}
s.logger.Info("SMB share applied", "share_name", shareName, "path", path)
return nil
}
// buildSMBShareConfig builds the SMB share configuration block
func (s *Service) buildSMBShareConfig(shareName, path, comment string, guestOK, readOnly, browseable bool) string {
var config []string
config = append(config, fmt.Sprintf("[%s]", shareName))
if comment != "" {
config = append(config, fmt.Sprintf(" comment = %s", comment))
}
config = append(config, fmt.Sprintf(" path = %s", path))
if guestOK {
config = append(config, " guest ok = yes")
} else {
config = append(config, " guest ok = no")
}
if readOnly {
config = append(config, " read only = yes")
} else {
config = append(config, " read only = no")
}
if browseable {
config = append(config, " browseable = yes")
} else {
config = append(config, " browseable = no")
}
return strings.Join(config, "\n")
}
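
For reference, buildSMBShareConfig("media", "/tank/media", "", false, false, true) renders the following block (indentation as in the format strings above; the empty comment line is skipped):

```
[media]
 path = /tank/media
 guest ok = no
 read only = no
 browseable = yes
```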
// removeSMBShare removes an SMB share from /etc/samba/smb.conf
func (s *Service) removeSMBShare(ctx context.Context, shareName string) error {
if shareName == "" {
return nil // Nothing to remove
}
smbConfPath := "/etc/samba/smb.conf"
smbContent, err := os.ReadFile(smbConfPath)
if err != nil {
if os.IsNotExist(err) {
return nil // File doesn't exist, nothing to remove
}
return fmt.Errorf("failed to read smb.conf: %w", err)
}
lines := strings.Split(string(smbContent), "\n")
var newLines []string
inShare := false
for _, line := range lines {
trimmed := strings.TrimSpace(line)
// Check if we're entering our share section
if strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]") {
sectionName := trimmed[1 : len(trimmed)-1]
if sectionName == shareName {
inShare = true
continue
} else if inShare {
// We've left our share section
inShare = false
}
}
if inShare {
// Skip lines in this share section
continue
}
newLines = append(newLines, line)
}
// Write back to file
newContent := strings.Join(newLines, "\n")
if err := os.WriteFile(smbConfPath, []byte(newContent), 0644); err != nil {
return fmt.Errorf("failed to write smb.conf: %w", err)
}
// Reload Samba
cmd := exec.CommandContext(ctx, "sudo", "systemctl", "reload", "smbd")
if output, err := cmd.CombinedOutput(); err != nil {
// Try restart if reload fails
cmd = exec.CommandContext(ctx, "sudo", "systemctl", "restart", "smbd")
if output2, err2 := cmd.CombinedOutput(); err2 != nil {
return fmt.Errorf("failed to reload/restart smbd: %s / %s: %w", string(output), string(output2), err2)
}
}
s.logger.Info("SMB share removed", "share_name", shareName)
return nil
}


@@ -195,7 +195,7 @@ func (s *DiskService) getZFSPoolForDisk(ctx context.Context, devicePath string)
deviceName := strings.TrimPrefix(devicePath, "/dev/")
// Get all ZFS pools
cmd := exec.CommandContext(ctx, "sudo", "zpool", "list", "-H", "-o", "name")
cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name")
output, err := cmd.Output()
if err != nil {
return ""
@@ -208,7 +208,7 @@ func (s *DiskService) getZFSPoolForDisk(ctx context.Context, devicePath string)
}
// Check pool status for this device
-statusCmd := exec.CommandContext(ctx, "sudo", "zpool", "status", poolName)
+statusCmd := exec.CommandContext(ctx, "zpool", "status", poolName)
statusOutput, err := statusCmd.Output()
if err != nil {
continue


@@ -16,16 +16,6 @@ import (
"github.com/lib/pq"
)
-// zfsCommand executes a ZFS command with sudo
-func zfsCommand(ctx context.Context, args ...string) *exec.Cmd {
-return exec.CommandContext(ctx, "sudo", append([]string{"zfs"}, args...)...)
-}
-// zpoolCommand executes a ZPOOL command with sudo
-func zpoolCommand(ctx context.Context, args ...string) *exec.Cmd {
-return exec.CommandContext(ctx, "sudo", append([]string{"zpool"}, args...)...)
-}
// ZFSService handles ZFS pool management
type ZFSService struct {
db *database.DB
@@ -125,10 +115,6 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
var args []string
args = append(args, "create", "-f") // -f to force creation
-// Set default mountpoint to /opt/calypso/data/pool/<pool-name>
-mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", name)
-args = append(args, "-m", mountPoint)
// Note: compression is a filesystem property, not a pool property
// We'll set it after pool creation using zfs set
@@ -169,15 +155,9 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
args = append(args, disks...)
}
-// Create mountpoint directory if it doesn't exist
-if err := os.MkdirAll(mountPoint, 0755); err != nil {
-return nil, fmt.Errorf("failed to create mountpoint directory %s: %w", mountPoint, err)
-}
-s.logger.Info("Created mountpoint directory", "path", mountPoint)
-// Execute zpool create (with sudo for permissions)
-s.logger.Info("Creating ZFS pool", "name", name, "raid_level", raidLevel, "disks", disks, "mountpoint", mountPoint, "args", args)
-cmd := zpoolCommand(ctx, args...)
+// Execute zpool create
+s.logger.Info("Creating ZFS pool", "name", name, "raid_level", raidLevel, "disks", disks, "args", args)
+cmd := exec.CommandContext(ctx, "zpool", args...)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -190,7 +170,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
// Set filesystem properties (compression, etc.) after pool creation
// ZFS creates a root filesystem with the same name as the pool
if compression != "" && compression != "off" {
cmd = zfsCommand(ctx, "set", fmt.Sprintf("compression=%s", compression), name)
cmd = exec.CommandContext(ctx, "zfs", "set", fmt.Sprintf("compression=%s", compression), name)
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to set compression property", "pool", name, "compression", compression, "error", string(output))
@@ -205,7 +185,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
if err != nil {
// Try to destroy the pool if we can't get info
s.logger.Warn("Failed to get pool info, attempting to destroy pool", "name", name, "error", err)
zpoolCommand(ctx, "destroy", "-f", name).Run()
exec.CommandContext(ctx, "zpool", "destroy", "-f", name).Run()
return nil, fmt.Errorf("failed to get pool info after creation: %w", err)
}
@@ -239,7 +219,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
if err != nil {
// Cleanup: destroy pool if database insert fails
s.logger.Error("Failed to save pool to database, destroying pool", "name", name, "error", err)
zpoolCommand(ctx, "destroy", "-f", name).Run()
exec.CommandContext(ctx, "zpool", "destroy", "-f", name).Run()
return nil, fmt.Errorf("failed to save pool to database: %w", err)
}
@@ -263,7 +243,7 @@ func (s *ZFSService) CreatePool(ctx context.Context, name string, raidLevel stri
// getPoolInfo retrieves information about a ZFS pool
func (s *ZFSService) getPoolInfo(ctx context.Context, poolName string) (*ZFSPool, error) {
// Get pool size and used space
cmd := zpoolCommand(ctx, "list", "-H", "-o", "name,size,allocated", poolName)
cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name,size,allocated", poolName)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -342,7 +322,7 @@ func parseZFSSize(sizeStr string) (int64, error) {
// getSpareDisks retrieves spare disks from zpool status
func (s *ZFSService) getSpareDisks(ctx context.Context, poolName string) ([]string, error) {
cmd := zpoolCommand(ctx, "status", poolName)
cmd := exec.CommandContext(ctx, "zpool", "status", poolName)
output, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("failed to get pool status: %w", err)
@@ -383,7 +363,7 @@ func (s *ZFSService) getSpareDisks(ctx context.Context, poolName string) ([]stri
// getCompressRatio gets the compression ratio from ZFS
func (s *ZFSService) getCompressRatio(ctx context.Context, poolName string) (float64, error) {
cmd := zfsCommand(ctx, "get", "-H", "-o", "value", "compressratio", poolName)
cmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "compressratio", poolName)
output, err := cmd.Output()
if err != nil {
return 1.0, err
@@ -426,20 +406,16 @@ func (s *ZFSService) ListPools(ctx context.Context) ([]*ZFSPool, error) {
for rows.Next() {
var pool ZFSPool
var description sql.NullString
-var createdBy sql.NullString
err := rows.Scan(
&pool.ID, &pool.Name, &description, &pool.RaidLevel, pq.Array(&pool.Disks),
&pool.SizeBytes, &pool.UsedBytes, &pool.Compression, &pool.Deduplication,
&pool.AutoExpand, &pool.ScrubInterval, &pool.IsActive, &pool.HealthStatus,
-&pool.CreatedAt, &pool.UpdatedAt, &createdBy,
+&pool.CreatedAt, &pool.UpdatedAt, &pool.CreatedBy,
)
if err != nil {
-s.logger.Error("Failed to scan pool row", "error", err, "error_type", fmt.Sprintf("%T", err))
+s.logger.Error("Failed to scan pool row", "error", err)
continue // Skip this pool instead of failing entire query
}
-if createdBy.Valid {
-pool.CreatedBy = createdBy.String
-}
if description.Valid {
pool.Description = description.String
}
@@ -525,7 +501,7 @@ func (s *ZFSService) DeletePool(ctx context.Context, poolID string) error {
// Destroy ZFS pool with -f flag to force destroy (works for both empty and non-empty pools)
// The -f flag is needed to destroy pools even if they have datasets or are in use
s.logger.Info("Destroying ZFS pool", "pool", pool.Name)
cmd := zpoolCommand(ctx, "destroy", "-f", pool.Name)
cmd := exec.CommandContext(ctx, "zpool", "destroy", "-f", pool.Name)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -540,15 +516,6 @@ func (s *ZFSService) DeletePool(ctx context.Context, poolID string) error {
s.logger.Info("ZFS pool destroyed successfully", "pool", pool.Name)
}
-// Remove mount point directory (default: /opt/calypso/data/pool/<pool-name>)
-mountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", pool.Name)
-if err := os.RemoveAll(mountPoint); err != nil {
-s.logger.Warn("Failed to remove mount point directory", "mountpoint", mountPoint, "error", err)
-// Don't fail pool deletion if mount point removal fails
-} else {
-s.logger.Info("Removed mount point directory", "mountpoint", mountPoint)
-}
// Mark disks as unused
for _, diskPath := range pool.Disks {
_, err = s.db.ExecContext(ctx,
@@ -583,7 +550,7 @@ func (s *ZFSService) AddSpareDisk(ctx context.Context, poolID string, diskPaths
}
// Verify pool exists in ZFS and check if disks are already spare
cmd := zpoolCommand(ctx, "status", pool.Name)
cmd := exec.CommandContext(ctx, "zpool", "status", pool.Name)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("pool %s does not exist in ZFS: %w", pool.Name, err)
@@ -608,7 +575,7 @@ func (s *ZFSService) AddSpareDisk(ctx context.Context, poolID string, diskPaths
// Execute zpool add
s.logger.Info("Adding spare disks to ZFS pool", "pool", pool.Name, "disks", diskPaths)
-cmd = zpoolCommand(ctx, args...)
+cmd = exec.CommandContext(ctx, "zpool", args...)
output, err = cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -643,7 +610,6 @@ func (s *ZFSService) AddSpareDisk(ctx context.Context, poolID string, diskPaths
// ZFSDataset represents a ZFS dataset
type ZFSDataset struct {
-ID string `json:"id"`
Name string `json:"name"`
Pool string `json:"pool"`
Type string `json:"type"` // filesystem, volume, snapshot
@@ -662,7 +628,7 @@ type ZFSDataset struct {
func (s *ZFSService) ListDatasets(ctx context.Context, poolName string) ([]*ZFSDataset, error) {
// Get datasets from database
query := `
-SELECT id, name, pool_name, type, mount_point,
+SELECT name, pool_name, type, mount_point,
used_bytes, available_bytes, referenced_bytes,
compression, deduplication, quota, reservation,
created_at
@@ -688,7 +654,7 @@ func (s *ZFSService) ListDatasets(ctx context.Context, poolName string) ([]*ZFSD
var mountPoint sql.NullString
err := rows.Scan(
&ds.ID, &ds.Name, &ds.Pool, &ds.Type, &mountPoint,
&ds.Name, &ds.Pool, &ds.Type, &mountPoint,
&ds.UsedBytes, &ds.AvailableBytes, &ds.ReferencedBytes,
&ds.Compression, &ds.Deduplication, &ds.Quota, &ds.Reservation,
&ds.CreatedAt,
@@ -730,36 +696,10 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Construct full dataset name
fullName := poolName + "/" + req.Name
-// Get pool mount point to validate dataset mount point is within pool directory
-poolMountPoint := fmt.Sprintf("/opt/calypso/data/pool/%s", poolName)
-var mountPath string
-// For filesystem datasets, validate and set mount point
-if req.Type == "filesystem" {
-if req.MountPoint != "" {
-// User provided mount point - validate it's within pool directory
-mountPath = filepath.Clean(req.MountPoint)
-// Check if mount point is within pool mount point directory
-poolMountAbs, err := filepath.Abs(poolMountPoint)
-if err != nil {
-return nil, fmt.Errorf("failed to resolve pool mount point: %w", err)
-}
-mountPathAbs, err := filepath.Abs(mountPath)
-if err != nil {
-return nil, fmt.Errorf("failed to resolve mount point: %w", err)
-}
-// Check if mount path is within pool mount point directory
-relPath, err := filepath.Rel(poolMountAbs, mountPathAbs)
-if err != nil || strings.HasPrefix(relPath, "..") {
-return nil, fmt.Errorf("mount point must be within pool directory: %s (pool mount: %s)", mountPath, poolMountPoint)
-}
-} else {
-// No mount point provided - use default: /opt/calypso/data/pool/<pool-name>/<dataset-name>/
-mountPath = filepath.Join(poolMountPoint, req.Name)
-}
+// For filesystem datasets, create mount directory if mount point is provided
+if req.Type == "filesystem" && req.MountPoint != "" {
+// Clean and validate mount point path
+mountPath := filepath.Clean(req.MountPoint)
// Check if directory already exists
if info, err := os.Stat(mountPath); err == nil {
@@ -808,14 +748,14 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
args = append(args, "-o", fmt.Sprintf("compression=%s", req.Compression))
}
-// Set mount point for filesystems (always set, either user-provided or default)
-if req.Type == "filesystem" {
-args = append(args, "-o", fmt.Sprintf("mountpoint=%s", mountPath))
+// Set mount point if provided (only for filesystems, not volumes)
+if req.Type == "filesystem" && req.MountPoint != "" {
+args = append(args, "-o", fmt.Sprintf("mountpoint=%s", req.MountPoint))
}
// Execute zfs create
s.logger.Info("Creating ZFS dataset", "name", fullName, "type", req.Type)
-cmd := zfsCommand(ctx, args...)
+cmd := exec.CommandContext(ctx, "zfs", args...)
output, err := cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)
@@ -825,7 +765,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Set quota if specified (for filesystems)
if req.Type == "filesystem" && req.Quota > 0 {
-quotaCmd := zfsCommand(ctx, "set", fmt.Sprintf("quota=%d", req.Quota), fullName)
+quotaCmd := exec.CommandContext(ctx, "zfs", "set", fmt.Sprintf("quota=%d", req.Quota), fullName)
if quotaOutput, err := quotaCmd.CombinedOutput(); err != nil {
s.logger.Warn("Failed to set quota", "dataset", fullName, "error", err, "output", string(quotaOutput))
}
@@ -833,7 +773,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Set reservation if specified
if req.Reservation > 0 {
-resvCmd := zfsCommand(ctx, "set", fmt.Sprintf("reservation=%d", req.Reservation), fullName)
+resvCmd := exec.CommandContext(ctx, "zfs", "set", fmt.Sprintf("reservation=%d", req.Reservation), fullName)
if resvOutput, err := resvCmd.CombinedOutput(); err != nil {
s.logger.Warn("Failed to set reservation", "dataset", fullName, "error", err, "output", string(resvOutput))
}
@@ -845,30 +785,30 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
if err != nil {
s.logger.Error("Failed to get pool ID", "pool", poolName, "error", err)
// Try to destroy the dataset if we can't save to database
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("failed to get pool ID: %w", err)
}
// Get dataset info from ZFS to save to database
cmd = zfsCommand(ctx, "list", "-H", "-o", "name,used,avail,refer,compress,dedup,quota,reservation,mountpoint", fullName)
cmd = exec.CommandContext(ctx, "zfs", "list", "-H", "-o", "name,used,avail,refer,compress,dedup,quota,reservation,mountpoint", fullName)
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to get dataset info", "name", fullName, "error", err)
// Try to destroy the dataset if we can't get info
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("failed to get dataset info: %w", err)
}
// Parse dataset info
lines := strings.TrimSpace(string(output))
if lines == "" {
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("dataset not found after creation")
}
fields := strings.Fields(lines)
if len(fields) < 9 {
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("invalid dataset info format")
}
@@ -883,7 +823,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Determine dataset type
datasetType := req.Type
-typeCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "type", fullName)
+typeCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "type", fullName)
if typeOutput, err := typeCmd.Output(); err == nil {
volType := strings.TrimSpace(string(typeOutput))
if volType == "volume" {
@@ -897,7 +837,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
quota := int64(-1)
if datasetType == "volume" {
// For volumes, get volsize
-volsizeCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "volsize", fullName)
+volsizeCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "volsize", fullName)
if volsizeOutput, err := volsizeCmd.Output(); err == nil {
volsizeStr := strings.TrimSpace(string(volsizeOutput))
if volsizeStr != "-" && volsizeStr != "none" {
@@ -927,7 +867,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
// Get creation time
createdAt := time.Now()
-creationCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "creation", fullName)
+creationCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "creation", fullName)
if creationOutput, err := creationCmd.Output(); err == nil {
creationStr := strings.TrimSpace(string(creationOutput))
if t, err := time.Parse("Mon Jan 2 15:04:05 2006", creationStr); err == nil {
@@ -959,7 +899,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
if err != nil {
s.logger.Error("Failed to save dataset to database", "name", fullName, "error", err)
// Try to destroy the dataset if we can't save to database
zfsCommand(ctx, "destroy", "-r", fullName).Run()
exec.CommandContext(ctx, "zfs", "destroy", "-r", fullName).Run()
return nil, fmt.Errorf("failed to save dataset to database: %w", err)
}
@@ -987,7 +927,7 @@ func (s *ZFSService) CreateDataset(ctx context.Context, poolName string, req Cre
func (s *ZFSService) DeleteDataset(ctx context.Context, datasetName string) error {
// Check if dataset exists and get its mount point before deletion
var mountPoint string
cmd := zfsCommand(ctx, "list", "-H", "-o", "name,mountpoint", datasetName)
cmd := exec.CommandContext(ctx, "zfs", "list", "-H", "-o", "name,mountpoint", datasetName)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("dataset %s does not exist: %w", datasetName, err)
@@ -1006,7 +946,7 @@ func (s *ZFSService) DeleteDataset(ctx context.Context, datasetName string) erro
// Get dataset type to determine if we should clean up mount directory
var datasetType string
-typeCmd := zfsCommand(ctx, "get", "-H", "-o", "value", "type", datasetName)
+typeCmd := exec.CommandContext(ctx, "zfs", "get", "-H", "-o", "value", "type", datasetName)
typeOutput, err := typeCmd.Output()
if err == nil {
datasetType = strings.TrimSpace(string(typeOutput))
@@ -1029,7 +969,7 @@ func (s *ZFSService) DeleteDataset(ctx context.Context, datasetName string) erro
// Delete the dataset from ZFS (use -r for recursive to delete children)
s.logger.Info("Deleting ZFS dataset", "name", datasetName, "mountpoint", mountPoint)
cmd = zfsCommand(ctx, "destroy", "-r", datasetName)
cmd = exec.CommandContext(ctx, "zfs", "destroy", "-r", datasetName)
output, err = cmd.CombinedOutput()
if err != nil {
errorMsg := string(output)


@@ -2,7 +2,6 @@ package storage
import (
"context"
"fmt"
"os/exec"
"regexp"
"strconv"
@@ -99,17 +98,11 @@ type PoolInfo struct {
func (m *ZFSPoolMonitor) getSystemPools(ctx context.Context) (map[string]PoolInfo, error) {
pools := make(map[string]PoolInfo)
-// Get pool list (with sudo for permissions)
-cmd := exec.CommandContext(ctx, "sudo", "zpool", "list", "-H", "-o", "name,size,alloc,free,health")
-output, err := cmd.CombinedOutput()
+// Get pool list
+cmd := exec.CommandContext(ctx, "zpool", "list", "-H", "-o", "name,size,alloc,free,health")
+output, err := cmd.Output()
if err != nil {
-// If no pools exist, zpool list returns exit code 1 but that's OK
-// Check if output is empty (no pools) vs actual error
-outputStr := strings.TrimSpace(string(output))
-if outputStr == "" || strings.Contains(outputStr, "no pools available") {
-return pools, nil // No pools, return empty map (not an error)
-}
-return nil, fmt.Errorf("zpool list failed: %w, output: %s", err, outputStr)
+return nil, err
}
lines := strings.Split(strings.TrimSpace(string(output)), "\n")


@@ -3,7 +3,6 @@ package system
import (
"net/http"
"strconv"
"time"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/atlasos/calypso/internal/tasks"
@@ -132,163 +131,3 @@ func (h *Handler) ListNetworkInterfaces(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"interfaces": interfaces})
}
// GetManagementIPAddress returns the management IP address
func (h *Handler) GetManagementIPAddress(c *gin.Context) {
ip, err := h.service.GetManagementIPAddress(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get management IP address", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get management IP address"})
return
}
c.JSON(http.StatusOK, gin.H{"ip_address": ip})
}
// SaveNTPSettings saves NTP configuration to the OS
func (h *Handler) SaveNTPSettings(c *gin.Context) {
var settings NTPSettings
if err := c.ShouldBindJSON(&settings); err != nil {
h.logger.Error("Invalid request body", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
return
}
// Validate timezone
if settings.Timezone == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "timezone is required"})
return
}
// Validate NTP servers
if len(settings.NTPServers) == 0 {
c.JSON(http.StatusBadRequest, gin.H{"error": "at least one NTP server is required"})
return
}
if err := h.service.SaveNTPSettings(c.Request.Context(), settings); err != nil {
h.logger.Error("Failed to save NTP settings", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "NTP settings saved successfully"})
}
// GetNTPSettings retrieves current NTP configuration
func (h *Handler) GetNTPSettings(c *gin.Context) {
settings, err := h.service.GetNTPSettings(c.Request.Context())
if err != nil {
h.logger.Error("Failed to get NTP settings", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get NTP settings"})
return
}
c.JSON(http.StatusOK, gin.H{"settings": settings})
}
// UpdateNetworkInterface updates a network interface configuration
func (h *Handler) UpdateNetworkInterface(c *gin.Context) {
ifaceName := c.Param("name")
if ifaceName == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "interface name is required"})
return
}
var req struct {
IPAddress string `json:"ip_address" binding:"required"`
Subnet string `json:"subnet" binding:"required"`
Gateway string `json:"gateway,omitempty"`
DNS1 string `json:"dns1,omitempty"`
DNS2 string `json:"dns2,omitempty"`
Role string `json:"role,omitempty"`
}
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid request body", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
return
}
// Convert to service request
serviceReq := UpdateNetworkInterfaceRequest{
IPAddress: req.IPAddress,
Subnet: req.Subnet,
Gateway: req.Gateway,
DNS1: req.DNS1,
DNS2: req.DNS2,
Role: req.Role,
}
updatedIface, err := h.service.UpdateNetworkInterface(c.Request.Context(), ifaceName, serviceReq)
if err != nil {
h.logger.Error("Failed to update network interface", "interface", ifaceName, "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"interface": updatedIface})
}
// GetSystemLogs retrieves recent system logs
func (h *Handler) GetSystemLogs(c *gin.Context) {
limitStr := c.DefaultQuery("limit", "30")
limit, err := strconv.Atoi(limitStr)
if err != nil || limit <= 0 || limit > 100 {
limit = 30
}
logs, err := h.service.GetSystemLogs(c.Request.Context(), limit)
if err != nil {
h.logger.Error("Failed to get system logs", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get system logs"})
return
}
c.JSON(http.StatusOK, gin.H{"logs": logs})
}
// GetNetworkThroughput retrieves network throughput data from RRD
func (h *Handler) GetNetworkThroughput(c *gin.Context) {
// Default to last 5 minutes
durationStr := c.DefaultQuery("duration", "5m")
duration, err := time.ParseDuration(durationStr)
if err != nil {
duration = 5 * time.Minute
}
data, err := h.service.GetNetworkThroughput(c.Request.Context(), duration)
if err != nil {
h.logger.Error("Failed to get network throughput", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to get network throughput"})
return
}
c.JSON(http.StatusOK, gin.H{"data": data})
}
// ExecuteCommand executes a shell command
func (h *Handler) ExecuteCommand(c *gin.Context) {
var req struct {
Command string `json:"command" binding:"required"`
Service string `json:"service,omitempty"` // Optional: system, scst, storage, backup, tape
}
if err := c.ShouldBindJSON(&req); err != nil {
h.logger.Error("Invalid request body", "error", err)
c.JSON(http.StatusBadRequest, gin.H{"error": "command is required"})
return
}
// Execute command based on service context
output, err := h.service.ExecuteCommand(c.Request.Context(), req.Command, req.Service)
if err != nil {
h.logger.Error("Failed to execute command", "error", err, "command", req.Command, "service", req.Service)
c.JSON(http.StatusInternalServerError, gin.H{
"error": err.Error(),
"output": output, // Include output even on error
})
return
}
c.JSON(http.StatusOK, gin.H{"output": output})
}

View File

@@ -1,292 +0,0 @@
package system
import (
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/atlasos/calypso/internal/common/logger"
)
// RRDService handles RRD database operations for network monitoring
type RRDService struct {
logger *logger.Logger
rrdDir string
interfaceName string
}
// NewRRDService creates a new RRD service
func NewRRDService(log *logger.Logger, rrdDir string, interfaceName string) *RRDService {
return &RRDService{
logger: log,
rrdDir: rrdDir,
interfaceName: interfaceName,
}
}
// NetworkStats represents network interface statistics
type NetworkStats struct {
Interface string `json:"interface"`
RxBytes uint64 `json:"rx_bytes"`
TxBytes uint64 `json:"tx_bytes"`
RxPackets uint64 `json:"rx_packets"`
TxPackets uint64 `json:"tx_packets"`
Timestamp time.Time `json:"timestamp"`
}
// GetNetworkStats reads network statistics from /proc/net/dev
func (r *RRDService) GetNetworkStats(ctx context.Context, interfaceName string) (*NetworkStats, error) {
data, err := os.ReadFile("/proc/net/dev")
if err != nil {
return nil, fmt.Errorf("failed to read /proc/net/dev: %w", err)
}
lines := strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if !strings.HasPrefix(line, interfaceName+":") {
continue
}
// Parse line: interface: rx_bytes rx_packets ... tx_bytes tx_packets ...
parts := strings.Fields(line)
if len(parts) < 17 {
continue
}
// Extract statistics
// Format: interface: rx_bytes rx_packets rx_errs rx_drop ... tx_bytes tx_packets ...
rxBytes, err := strconv.ParseUint(parts[1], 10, 64)
if err != nil {
continue
}
rxPackets, err := strconv.ParseUint(parts[2], 10, 64)
if err != nil {
continue
}
txBytes, err := strconv.ParseUint(parts[9], 10, 64)
if err != nil {
continue
}
txPackets, err := strconv.ParseUint(parts[10], 10, 64)
if err != nil {
continue
}
return &NetworkStats{
Interface: interfaceName,
RxBytes: rxBytes,
TxBytes: txBytes,
RxPackets: rxPackets,
TxPackets: txPackets,
Timestamp: time.Now(),
}, nil
}
return nil, fmt.Errorf("interface %s not found in /proc/net/dev", interfaceName)
}
// InitializeRRD creates RRD database if it doesn't exist
func (r *RRDService) InitializeRRD(ctx context.Context) error {
// Ensure RRD directory exists
if err := os.MkdirAll(r.rrdDir, 0755); err != nil {
return fmt.Errorf("failed to create RRD directory: %w", err)
}
rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", r.interfaceName))
// Check if RRD file already exists
if _, err := os.Stat(rrdFile); err == nil {
r.logger.Info("RRD file already exists", "file", rrdFile)
return nil
}
// Create RRD database
// Use COUNTER type to track cumulative bytes, RRD will calculate rate automatically
// DS:inbound:COUNTER:20:0:U - inbound cumulative bytes, 20s heartbeat
// DS:outbound:COUNTER:20:0:U - outbound cumulative bytes, 20s heartbeat
// RRA:AVERAGE:0.5:1:600 - 1 sample per step, 600 steps (100 minutes at 10s interval)
// RRA:AVERAGE:0.5:6:700 - 6 samples per step, 700 steps (11.6 hours at 1min interval)
// RRA:AVERAGE:0.5:60:730 - 60 samples per step, 730 steps (5 days at 1hour interval)
// RRA:MAX:0.5:1:600 - Max values for same intervals
// RRA:MAX:0.5:6:700
// RRA:MAX:0.5:60:730
cmd := exec.CommandContext(ctx, "rrdtool", "create", rrdFile,
"--step", "10", // 10 second step
"DS:inbound:COUNTER:20:0:U", // Inbound cumulative bytes, 20s heartbeat
"DS:outbound:COUNTER:20:0:U", // Outbound cumulative bytes, 20s heartbeat
"RRA:AVERAGE:0.5:1:600", // 10s resolution, 100 minutes
"RRA:AVERAGE:0.5:6:700", // 1min resolution, 11.6 hours
"RRA:AVERAGE:0.5:60:730", // 1hour resolution, 5 days
"RRA:MAX:0.5:1:600", // Max values
"RRA:MAX:0.5:6:700",
"RRA:MAX:0.5:60:730",
)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("failed to create RRD: %s: %w", string(output), err)
}
r.logger.Info("RRD database created", "file", rrdFile)
return nil
}
// UpdateRRD updates RRD database with new network statistics
func (r *RRDService) UpdateRRD(ctx context.Context, stats *NetworkStats) error {
rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", stats.Interface))
// Update with cumulative byte counts (COUNTER type)
// RRD will automatically calculate the rate (bytes per second)
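// Example of the generated invocation (illustrative counter values):
//   rrdtool update /var/lib/calypso/rrd/network-ens18.rrd 1736400000:123456789:987654321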
cmd := exec.CommandContext(ctx, "rrdtool", "update", rrdFile,
fmt.Sprintf("%d:%d:%d", stats.Timestamp.Unix(), stats.RxBytes, stats.TxBytes),
)
output, err := cmd.CombinedOutput()
if err != nil {
return fmt.Errorf("failed to update RRD: %s: %w", string(output), err)
}
return nil
}
// FetchRRDData fetches data from RRD database for graphing
func (r *RRDService) FetchRRDData(ctx context.Context, startTime time.Time, endTime time.Time, resolution string) ([]NetworkDataPoint, error) {
rrdFile := filepath.Join(r.rrdDir, fmt.Sprintf("network-%s.rrd", r.interfaceName))
// Check if RRD file exists
if _, err := os.Stat(rrdFile); os.IsNotExist(err) {
return []NetworkDataPoint{}, nil
}
// Fetch data using rrdtool fetch
// Use AVERAGE consolidation with appropriate resolution
cmd := exec.CommandContext(ctx, "rrdtool", "fetch", rrdFile,
"AVERAGE",
"--start", fmt.Sprintf("%d", startTime.Unix()),
"--end", fmt.Sprintf("%d", endTime.Unix()),
"--resolution", resolution,
)
output, err := cmd.CombinedOutput()
if err != nil {
return nil, fmt.Errorf("failed to fetch RRD data: %s: %w", string(output), err)
}
// Parse rrdtool fetch output
// Format:
// inbound outbound
// 1234567890: 1.2345678901e+06 2.3456789012e+06
points := []NetworkDataPoint{}
lines := strings.Split(string(output), "\n")
// Skip header lines
dataStart := false
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Check if this is the data section
if strings.Contains(line, "inbound") && strings.Contains(line, "outbound") {
dataStart = true
continue
}
if !dataStart {
continue
}
// Parse data line: timestamp: inbound_value outbound_value
parts := strings.Fields(line)
if len(parts) < 3 {
continue
}
// Parse timestamp
timestampStr := strings.TrimSuffix(parts[0], ":")
timestamp, err := strconv.ParseInt(timestampStr, 10, 64)
if err != nil {
continue
}
// Parse inbound (bytes per second from COUNTER, convert to Mbps)
inboundStr := parts[1]
inbound, err := strconv.ParseFloat(inboundStr, 64)
if err != nil || inbound < 0 {
// Skip NaN or negative values
continue
}
// Convert bytes per second to Mbps (bytes/s * 8 / 1000000)
inboundMbps := inbound * 8 / 1000000
// Parse outbound
outboundStr := parts[2]
outbound, err := strconv.ParseFloat(outboundStr, 64)
if err != nil || outbound < 0 {
// Skip NaN or negative values
continue
}
outboundMbps := outbound * 8 / 1000000
// Format time as MM:SS
t := time.Unix(timestamp, 0)
timeStr := fmt.Sprintf("%02d:%02d", t.Minute(), t.Second())
points = append(points, NetworkDataPoint{
Time: timeStr,
Inbound: inboundMbps,
Outbound: outboundMbps,
})
}
return points, nil
}
// NetworkDataPoint represents a single data point for graphing
type NetworkDataPoint struct {
Time string `json:"time"`
Inbound float64 `json:"inbound"` // Mbps
Outbound float64 `json:"outbound"` // Mbps
}
// StartCollector starts a background goroutine to periodically collect and update RRD
func (r *RRDService) StartCollector(ctx context.Context, interval time.Duration) error {
// Initialize RRD if needed
if err := r.InitializeRRD(ctx); err != nil {
return fmt.Errorf("failed to initialize RRD: %w", err)
}
go func() {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
// Get current stats
stats, err := r.GetNetworkStats(ctx, r.interfaceName)
if err != nil {
r.logger.Warn("Failed to get network stats", "error", err)
continue
}
// Update RRD with cumulative byte counts
// RRD COUNTER type will automatically calculate rate
if err := r.UpdateRRD(ctx, stats); err != nil {
r.logger.Warn("Failed to update RRD", "error", err)
}
}
}
}()
return nil
}

View File

@@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"os"
"os/exec"
"strings"
"time"
@@ -12,98 +11,18 @@ import (
"github.com/atlasos/calypso/internal/common/logger"
)
// NTPSettings represents NTP configuration
type NTPSettings struct {
Timezone string `json:"timezone"`
NTPServers []string `json:"ntp_servers"`
}
// Service handles system management operations
type Service struct {
logger *logger.Logger
rrdService *RRDService
}
// detectPrimaryInterface detects the primary network interface (first non-loopback with IP)
func detectPrimaryInterface(ctx context.Context) string {
// Try to get default route interface
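// Typical output line: "default via 10.10.14.1 dev ens18"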
cmd := exec.CommandContext(ctx, "ip", "route", "show", "default")
output, err := cmd.Output()
if err == nil {
lines := strings.Split(string(output), "\n")
for _, line := range lines {
if strings.Contains(line, "dev ") {
parts := strings.Fields(line)
for i, part := range parts {
if part == "dev" && i+1 < len(parts) {
iface := parts[i+1]
if iface != "lo" {
return iface
}
}
}
}
}
}
// Fallback: get first non-loopback interface with IP
cmd = exec.CommandContext(ctx, "ip", "-4", "addr", "show")
output, err = cmd.Output()
if err == nil {
lines := strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
// Look for interface name line (e.g., "2: ens18: <BROADCAST...")
if len(line) > 0 && line[0] >= '0' && line[0] <= '9' && strings.Contains(line, ":") {
parts := strings.Fields(line)
if len(parts) >= 2 {
iface := strings.TrimSuffix(parts[1], ":")
if iface != "" && iface != "lo" {
// Check if this interface has an IP (next lines will have "inet")
// For simplicity, return first non-loopback interface
return iface
}
}
}
}
}
// Final fallback
return "eth0"
logger *logger.Logger
}
// NewService creates a new system service
func NewService(log *logger.Logger) *Service {
// Initialize RRD service for network monitoring
rrdDir := "/var/lib/calypso/rrd"
// Auto-detect primary interface
ctx := context.Background()
interfaceName := detectPrimaryInterface(ctx)
log.Info("Detected primary network interface", "interface", interfaceName)
rrdService := NewRRDService(log, rrdDir, interfaceName)
return &Service{
logger: log,
rrdService: rrdService,
logger: log,
}
}
// StartNetworkMonitoring starts the RRD collector for network monitoring
func (s *Service) StartNetworkMonitoring(ctx context.Context) error {
return s.rrdService.StartCollector(ctx, 10*time.Second)
}
// GetNetworkThroughput fetches network throughput data from RRD
func (s *Service) GetNetworkThroughput(ctx context.Context, duration time.Duration) ([]NetworkDataPoint, error) {
endTime := time.Now()
startTime := endTime.Add(-duration)
// Use 10 second resolution for recent data
return s.rrdService.FetchRRDData(ctx, startTime, endTime, "10")
}
// ServiceStatus represents a systemd service status
type ServiceStatus struct {
Name string `json:"name"`
@@ -116,37 +35,31 @@ type ServiceStatus struct {
// GetServiceStatus retrieves the status of a systemd service
func (s *Service) GetServiceStatus(ctx context.Context, serviceName string) (*ServiceStatus, error) {
status := &ServiceStatus{
Name: serviceName,
}
// Get each property individually to ensure correct parsing
properties := map[string]*string{
"ActiveState": &status.ActiveState,
"SubState": &status.SubState,
"LoadState": &status.LoadState,
"Description": &status.Description,
}
for prop, target := range properties {
cmd := exec.CommandContext(ctx, "systemctl", "show", serviceName, "--property", prop, "--value", "--no-pager")
output, err := cmd.Output()
if err != nil {
s.logger.Warn("Failed to get property", "service", serviceName, "property", prop, "error", err)
continue
}
*target = strings.TrimSpace(string(output))
}
// Get timestamp if available
cmd := exec.CommandContext(ctx, "systemctl", "show", serviceName, "--property", "ActiveEnterTimestamp", "--value", "--no-pager")
cmd := exec.CommandContext(ctx, "systemctl", "show", serviceName,
"--property=ActiveState,SubState,LoadState,Description,ActiveEnterTimestamp",
"--value", "--no-pager")
output, err := cmd.Output()
if err == nil {
timestamp := strings.TrimSpace(string(output))
if timestamp != "" {
if t, err := time.Parse("Mon 2006-01-02 15:04:05 MST", timestamp); err == nil {
status.Since = t
}
if err != nil {
return nil, fmt.Errorf("failed to get service status: %w", err)
}
lines := strings.Split(strings.TrimSpace(string(output)), "\n")
if len(lines) < 4 {
return nil, fmt.Errorf("invalid service status output")
}
status := &ServiceStatus{
Name: serviceName,
ActiveState: strings.TrimSpace(lines[0]),
SubState: strings.TrimSpace(lines[1]),
LoadState: strings.TrimSpace(lines[2]),
Description: strings.TrimSpace(lines[3]),
}
// Parse timestamp if available
if len(lines) > 4 && lines[4] != "" {
if t, err := time.Parse("Mon 2006-01-02 15:04:05 MST", strings.TrimSpace(lines[4])); err == nil {
status.Since = t
}
}
@@ -156,15 +69,10 @@ func (s *Service) GetServiceStatus(ctx context.Context, serviceName string) (*Se
// ListServices lists all Calypso-related services
func (s *Service) ListServices(ctx context.Context) ([]ServiceStatus, error) {
services := []string{
"ssh",
"sshd",
"smbd",
"iscsi-scst",
"nfs-server",
"nfs",
"mhvtl",
"calypso-api",
"scst",
"iscsi-scst",
"mhvtl",
"postgresql",
}
@@ -220,108 +128,6 @@ func (s *Service) GetJournalLogs(ctx context.Context, serviceName string, lines
return logs, nil
}
// SystemLogEntry represents a parsed system log entry
type SystemLogEntry struct {
Time string `json:"time"`
Level string `json:"level"`
Source string `json:"source"`
Message string `json:"message"`
}
// GetSystemLogs retrieves recent system logs from journalctl
func (s *Service) GetSystemLogs(ctx context.Context, limit int) ([]SystemLogEntry, error) {
if limit <= 0 || limit > 100 {
limit = 30 // Default to 30 logs
}
cmd := exec.CommandContext(ctx, "journalctl",
"-n", fmt.Sprintf("%d", limit),
"-o", "json",
"--no-pager",
"--since", "1 hour ago") // Only get logs from last hour
output, err := cmd.Output()
if err != nil {
return nil, fmt.Errorf("failed to get system logs: %w", err)
}
var logs []SystemLogEntry
linesOutput := strings.Split(strings.TrimSpace(string(output)), "\n")
for _, line := range linesOutput {
if line == "" {
continue
}
var logEntry map[string]interface{}
if err := json.Unmarshal([]byte(line), &logEntry); err != nil {
continue
}
// Parse timestamp (__REALTIME_TIMESTAMP is in microseconds)
var timeStr string
if timestamp, ok := logEntry["__REALTIME_TIMESTAMP"].(float64); ok {
// Convert microseconds to nanoseconds for time.Unix (1 microsecond = 1000 nanoseconds)
t := time.Unix(0, int64(timestamp)*1000)
timeStr = t.Format("15:04:05")
} else if timestamp, ok := logEntry["_SOURCE_REALTIME_TIMESTAMP"].(float64); ok {
t := time.Unix(0, int64(timestamp)*1000)
timeStr = t.Format("15:04:05")
} else {
timeStr = time.Now().Format("15:04:05")
}
// Parse log level (priority)
level := "INFO"
if priority, ok := logEntry["PRIORITY"].(float64); ok {
switch int(priority) {
case 0: // emerg
level = "EMERG"
case 1, 2, 3: // alert, crit, err
level = "ERROR"
case 4: // warning
level = "WARN"
case 5: // notice
level = "NOTICE"
case 6: // info
level = "INFO"
case 7: // debug
level = "DEBUG"
}
}
// Parse source (systemd unit or syslog identifier)
source := "system"
if unit, ok := logEntry["_SYSTEMD_UNIT"].(string); ok && unit != "" {
// Remove .service suffix if present
source = strings.TrimSuffix(unit, ".service")
} else if ident, ok := logEntry["SYSLOG_IDENTIFIER"].(string); ok && ident != "" {
source = ident
} else if comm, ok := logEntry["_COMM"].(string); ok && comm != "" {
source = comm
}
// Parse message
message := ""
if msg, ok := logEntry["MESSAGE"].(string); ok {
message = msg
}
if message != "" {
logs = append(logs, SystemLogEntry{
Time: timeStr,
Level: level,
Source: source,
Message: message,
})
}
}
// Reverse to get newest first
for i, j := 0, len(logs)-1; i < j; i, j = i+1, j-1 {
logs[i], logs[j] = logs[j], logs[i]
}
return logs, nil
}
// GenerateSupportBundle generates a diagnostic support bundle
func (s *Service) GenerateSupportBundle(ctx context.Context, outputPath string) error {
// Create bundle directory
@@ -377,9 +183,6 @@ type NetworkInterface struct {
Status string `json:"status"` // "Connected" or "Down"
Speed string `json:"speed"` // e.g., "10 Gbps", "1 Gbps"
Role string `json:"role"` // "Management", "ISCSI", or empty
Gateway string `json:"gateway,omitempty"`
DNS1 string `json:"dns1,omitempty"`
DNS2 string `json:"dns2,omitempty"`
}
// ListNetworkInterfaces lists all network interfaces
@@ -494,103 +297,6 @@ func (s *Service) ListNetworkInterfaces(ctx context.Context) ([]NetworkInterface
}
}
// Get default gateway for each interface
cmd = exec.CommandContext(ctx, "ip", "route", "show")
output, err = cmd.Output()
if err == nil {
lines = strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" {
continue
}
// Parse default route: "default via 10.10.14.1 dev ens18"
if strings.HasPrefix(line, "default via ") {
parts := strings.Fields(line)
// Find "via" and "dev" in the parts
var gateway string
var ifaceName string
for i, part := range parts {
if part == "via" && i+1 < len(parts) {
gateway = parts[i+1]
}
if part == "dev" && i+1 < len(parts) {
ifaceName = parts[i+1]
}
}
if gateway != "" && ifaceName != "" {
if iface, exists := interfaceMap[ifaceName]; exists {
iface.Gateway = gateway
s.logger.Info("Set default gateway for interface", "name", ifaceName, "gateway", gateway)
}
}
} else if strings.Contains(line, " via ") && strings.Contains(line, " dev ") {
// Parse network route: "10.10.14.0/24 via 10.10.14.1 dev ens18"
// Or: "192.168.1.0/24 via 192.168.1.1 dev eth0"
parts := strings.Fields(line)
var gateway string
var ifaceName string
for i, part := range parts {
if part == "via" && i+1 < len(parts) {
gateway = parts[i+1]
}
if part == "dev" && i+1 < len(parts) {
ifaceName = parts[i+1]
}
}
// Only set gateway if it's not already set (prefer default route)
if gateway != "" && ifaceName != "" {
if iface, exists := interfaceMap[ifaceName]; exists {
if iface.Gateway == "" {
iface.Gateway = gateway
s.logger.Info("Set gateway from network route for interface", "name", ifaceName, "gateway", gateway)
}
}
}
}
}
} else {
s.logger.Warn("Failed to get routes", "error", err)
}
// Get DNS servers from systemd-resolved or /etc/resolv.conf
// Try systemd-resolved first
cmd = exec.CommandContext(ctx, "systemd-resolve", "--status")
output, err = cmd.Output()
dnsServers := []string{}
if err == nil {
// Parse DNS from systemd-resolve output
lines = strings.Split(string(output), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "DNS Servers:") {
// Format: "DNS Servers: 8.8.8.8 8.8.4.4"
parts := strings.Fields(line)
if len(parts) >= 3 {
dnsServers = parts[2:]
}
break
}
}
} else {
// Fallback to /etc/resolv.conf
data, err := os.ReadFile("/etc/resolv.conf")
if err == nil {
lines = strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "nameserver ") {
dns := strings.TrimPrefix(line, "nameserver ")
dns = strings.TrimSpace(dns)
if dns != "" {
dnsServers = append(dnsServers, dns)
}
}
}
}
}
// Convert map to slice
var interfaces []NetworkInterface
s.logger.Debug("Converting interface map to slice", "map_size", len(interfaceMap))
@@ -613,14 +319,6 @@ func (s *Service) ListNetworkInterfaces(ctx context.Context) ([]NetworkInterface
}
}
// Set DNS servers (use first two if available)
if len(dnsServers) > 0 {
iface.DNS1 = dnsServers[0]
}
if len(dnsServers) > 1 {
iface.DNS2 = dnsServers[1]
}
// Determine role based on interface name or IP (simple heuristic)
// You can enhance this with configuration file or database lookup
if strings.Contains(iface.Name, "eth") || strings.Contains(iface.Name, "ens") {
@@ -647,401 +345,3 @@ func (s *Service) ListNetworkInterfaces(ctx context.Context) ([]NetworkInterface
s.logger.Info("Listed network interfaces", "count", len(interfaces))
return interfaces, nil
}
// GetManagementIPAddress returns the IP address of the management interface
func (s *Service) GetManagementIPAddress(ctx context.Context) (string, error) {
interfaces, err := s.ListNetworkInterfaces(ctx)
if err != nil {
return "", fmt.Errorf("failed to list network interfaces: %w", err)
}
// First, try to find interface with Role "Management"
for _, iface := range interfaces {
if iface.Role == "Management" && iface.IPAddress != "" && iface.Status == "Connected" {
s.logger.Info("Found management interface", "interface", iface.Name, "ip", iface.IPAddress)
return iface.IPAddress, nil
}
}
// Fallback: use interface with default route (primary interface)
for _, iface := range interfaces {
if iface.Gateway != "" && iface.IPAddress != "" && iface.Status == "Connected" {
s.logger.Info("Using primary interface as management", "interface", iface.Name, "ip", iface.IPAddress)
return iface.IPAddress, nil
}
}
// Final fallback: use first connected interface with IP
for _, iface := range interfaces {
if iface.IPAddress != "" && iface.Status == "Connected" && iface.Name != "lo" {
s.logger.Info("Using first connected interface as management", "interface", iface.Name, "ip", iface.IPAddress)
return iface.IPAddress, nil
}
}
return "", fmt.Errorf("no management interface found")
}
// UpdateNetworkInterfaceRequest represents the request to update a network interface
type UpdateNetworkInterfaceRequest struct {
IPAddress string `json:"ip_address"`
Subnet string `json:"subnet"`
Gateway string `json:"gateway,omitempty"`
DNS1 string `json:"dns1,omitempty"`
DNS2 string `json:"dns2,omitempty"`
Role string `json:"role,omitempty"`
}
// UpdateNetworkInterface updates network interface configuration
func (s *Service) UpdateNetworkInterface(ctx context.Context, ifaceName string, req UpdateNetworkInterfaceRequest) (*NetworkInterface, error) {
// Validate interface exists
cmd := exec.CommandContext(ctx, "ip", "link", "show", ifaceName)
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("interface %s not found: %w", ifaceName, err)
}
// Remove existing IP address if any
cmd = exec.CommandContext(ctx, "ip", "addr", "flush", "dev", ifaceName)
cmd.Run() // Ignore error, interface might not have IP
// Set new IP address and subnet
ipWithSubnet := fmt.Sprintf("%s/%s", req.IPAddress, req.Subnet)
cmd = exec.CommandContext(ctx, "ip", "addr", "add", ipWithSubnet, "dev", ifaceName)
output, err := cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to set IP address", "interface", ifaceName, "error", err, "output", string(output))
return nil, fmt.Errorf("failed to set IP address: %w", err)
}
// Remove existing default route if any
cmd = exec.CommandContext(ctx, "ip", "route", "del", "default")
cmd.Run() // Ignore error, might not exist
// Set gateway if provided
if req.Gateway != "" {
cmd = exec.CommandContext(ctx, "ip", "route", "add", "default", "via", req.Gateway, "dev", ifaceName)
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to set gateway", "interface", ifaceName, "error", err, "output", string(output))
return nil, fmt.Errorf("failed to set gateway: %w", err)
}
}
// Update DNS in systemd-resolved or /etc/resolv.conf
if req.DNS1 != "" || req.DNS2 != "" {
// Try using systemd-resolve first
cmd = exec.CommandContext(ctx, "systemd-resolve", "--status")
if cmd.Run() == nil {
// systemd-resolve is available, use it
dnsServers := []string{}
if req.DNS1 != "" {
dnsServers = append(dnsServers, req.DNS1)
}
if req.DNS2 != "" {
dnsServers = append(dnsServers, req.DNS2)
}
if len(dnsServers) > 0 {
// Use resolvectl to set DNS (newer systemd)
cmd = exec.CommandContext(ctx, "resolvectl", "dns", ifaceName, strings.Join(dnsServers, " "))
if cmd.Run() != nil {
// Fallback to systemd-resolve
cmd = exec.CommandContext(ctx, "systemd-resolve", "--interface", ifaceName, "--set-dns", strings.Join(dnsServers, " "))
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to set DNS via systemd-resolve", "error", err, "output", string(output))
}
}
}
} else {
// Fallback: update /etc/resolv.conf
resolvContent := "# Generated by Calypso\n"
if req.DNS1 != "" {
resolvContent += fmt.Sprintf("nameserver %s\n", req.DNS1)
}
if req.DNS2 != "" {
resolvContent += fmt.Sprintf("nameserver %s\n", req.DNS2)
}
tmpPath := "/tmp/resolv.conf." + fmt.Sprintf("%d", time.Now().Unix())
if err := os.WriteFile(tmpPath, []byte(resolvContent), 0644); err != nil {
s.logger.Warn("Failed to write temporary resolv.conf", "error", err)
} else {
cmd = exec.CommandContext(ctx, "sh", "-c", fmt.Sprintf("mv %s /etc/resolv.conf", tmpPath))
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to update /etc/resolv.conf", "error", err, "output", string(output))
}
}
}
}
// Bring interface up
cmd = exec.CommandContext(ctx, "ip", "link", "set", ifaceName, "up")
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Warn("Failed to bring interface up", "interface", ifaceName, "error", err, "output", string(output))
}
// Return updated interface
updatedIface := &NetworkInterface{
Name: ifaceName,
IPAddress: req.IPAddress,
Subnet: req.Subnet,
Gateway: req.Gateway,
DNS1: req.DNS1,
DNS2: req.DNS2,
Role: req.Role,
Status: "Connected",
Speed: "Unknown", // Will be updated on next list
}
s.logger.Info("Updated network interface", "interface", ifaceName, "ip", req.IPAddress, "subnet", req.Subnet)
return updatedIface, nil
}
// SaveNTPSettings saves NTP configuration to the OS
func (s *Service) SaveNTPSettings(ctx context.Context, settings NTPSettings) error {
// Set timezone using timedatectl
if settings.Timezone != "" {
cmd := exec.CommandContext(ctx, "timedatectl", "set-timezone", settings.Timezone)
output, err := cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to set timezone", "timezone", settings.Timezone, "error", err, "output", string(output))
return fmt.Errorf("failed to set timezone: %w", err)
}
s.logger.Info("Timezone set", "timezone", settings.Timezone)
}
// Configure NTP servers in systemd-timesyncd
if len(settings.NTPServers) > 0 {
configPath := "/etc/systemd/timesyncd.conf"
// Build config content
configContent := "[Time]\n"
configContent += "NTP="
for i, server := range settings.NTPServers {
if i > 0 {
configContent += " "
}
configContent += server
}
configContent += "\n"
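// Resulting file content, e.g.:
//   [Time]
//   NTP=pool.ntp.org time.google.com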
// Write to temporary file first, then move to final location (requires root)
tmpPath := "/tmp/timesyncd.conf." + fmt.Sprintf("%d", time.Now().Unix())
if err := os.WriteFile(tmpPath, []byte(configContent), 0644); err != nil {
s.logger.Error("Failed to write temporary NTP config", "error", err)
return fmt.Errorf("failed to write temporary NTP configuration: %w", err)
}
// Move to final location (the API process must run with root privileges)
cmd := exec.CommandContext(ctx, "sh", "-c", fmt.Sprintf("mv %s %s", tmpPath, configPath))
output, err := cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to move NTP config", "error", err, "output", string(output))
os.Remove(tmpPath) // Clean up temp file
return fmt.Errorf("failed to move NTP configuration: %w", err)
}
// Restart systemd-timesyncd to apply changes
cmd = exec.CommandContext(ctx, "systemctl", "restart", "systemd-timesyncd")
output, err = cmd.CombinedOutput()
if err != nil {
s.logger.Error("Failed to restart systemd-timesyncd", "error", err, "output", string(output))
return fmt.Errorf("failed to restart systemd-timesyncd: %w", err)
}
s.logger.Info("NTP servers configured", "servers", settings.NTPServers)
}
return nil
}
// GetNTPSettings retrieves current NTP configuration from the OS
func (s *Service) GetNTPSettings(ctx context.Context) (*NTPSettings, error) {
settings := &NTPSettings{
NTPServers: []string{},
}
// Get current timezone using timedatectl
cmd := exec.CommandContext(ctx, "timedatectl", "show", "--property=Timezone", "--value")
output, err := cmd.Output()
if err != nil {
s.logger.Warn("Failed to get timezone", "error", err)
settings.Timezone = "Etc/UTC" // Default fallback
} else {
settings.Timezone = strings.TrimSpace(string(output))
if settings.Timezone == "" {
settings.Timezone = "Etc/UTC"
}
}
// Read NTP servers from systemd-timesyncd config
configPath := "/etc/systemd/timesyncd.conf"
data, err := os.ReadFile(configPath)
if err != nil {
s.logger.Warn("Failed to read NTP config", "error", err)
// Default NTP servers if config file doesn't exist
settings.NTPServers = []string{"pool.ntp.org", "time.google.com"}
} else {
// Parse NTP servers from config file
lines := strings.Split(string(data), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "NTP=") {
ntpLine := strings.TrimPrefix(line, "NTP=")
if ntpLine != "" {
servers := strings.Fields(ntpLine)
settings.NTPServers = servers
break
}
}
}
// If no NTP servers found in config, use defaults
if len(settings.NTPServers) == 0 {
settings.NTPServers = []string{"pool.ntp.org", "time.google.com"}
}
}
return settings, nil
}
// ExecuteCommand executes a shell command and returns the output
// service parameter is optional and can be: system, scst, storage, backup, tape
func (s *Service) ExecuteCommand(ctx context.Context, command string, service string) (string, error) {
// Sanitize command - basic security check
command = strings.TrimSpace(command)
if command == "" {
return "", fmt.Errorf("command cannot be empty")
}
// Block dangerous commands that could harm the system
dangerousCommands := []string{
"rm -rf /",
"dd if=",
":(){ :|:& };:",
"mkfs",
"fdisk",
"parted",
"format",
"> /dev/sd",
"mkfs.ext",
"mkfs.xfs",
"mkfs.btrfs",
"wipefs",
}
commandLower := strings.ToLower(command)
for _, dangerous := range dangerousCommands {
if strings.Contains(commandLower, dangerous) {
return "", fmt.Errorf("command blocked for security reasons")
}
}
// Service-specific command handling
switch service {
case "scst":
// Allow SCST admin commands
if strings.HasPrefix(command, "scstadmin") {
// SCST commands are safe
break
}
case "backup":
// Allow bconsole commands
if strings.HasPrefix(command, "bconsole") {
// Backup console commands are safe
break
}
case "storage":
// Allow ZFS and storage commands
if strings.HasPrefix(command, "zfs") || strings.HasPrefix(command, "zpool") || strings.HasPrefix(command, "lsblk") {
// Storage commands are safe
break
}
case "tape":
// Allow tape library commands
if strings.HasPrefix(command, "mtx") || strings.HasPrefix(command, "lsscsi") || strings.HasPrefix(command, "sg_") {
// Tape commands are safe
break
}
}
// Execute command with timeout (30 seconds)
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
// Check if command already has sudo (reuse commandLower from above)
hasSudo := strings.HasPrefix(commandLower, "sudo ")
// Determine if command needs sudo based on service and command type
needsSudo := false
if !hasSudo {
// Commands that typically need sudo
sudoCommands := []string{
"scstadmin",
"systemctl",
"zfs",
"zpool",
"mount",
"umount",
"ip link",
"ip addr",
"iptables",
"journalctl",
}
for _, sudoCmd := range sudoCommands {
if strings.HasPrefix(commandLower, sudoCmd) {
needsSudo = true
break
}
}
// Service-specific sudo requirements
switch service {
case "scst":
// All SCST admin commands need sudo
if strings.HasPrefix(commandLower, "scstadmin") {
needsSudo = true
}
case "storage":
// ZFS commands typically need sudo
if strings.HasPrefix(commandLower, "zfs") || strings.HasPrefix(commandLower, "zpool") {
needsSudo = true
}
case "system":
// System commands like systemctl need sudo
if strings.HasPrefix(commandLower, "systemctl") || strings.HasPrefix(commandLower, "journalctl") {
needsSudo = true
}
}
}
// Build command with or without sudo
var cmd *exec.Cmd
if needsSudo && !hasSudo {
// Use sudo for privileged commands (if not already present)
cmd = exec.CommandContext(ctx, "sudo", "sh", "-c", command)
} else {
// Regular command (or already has sudo)
cmd = exec.CommandContext(ctx, "sh", "-c", command)
}
cmd.Env = append(os.Environ(), "TERM=xterm-256color")
output, err := cmd.CombinedOutput()
if err != nil {
// Return output even if there's an error (some commands return non-zero exit codes)
outputStr := string(output)
if len(outputStr) > 0 {
return outputStr, nil
}
return "", fmt.Errorf("command execution failed: %w", err)
}
return string(output), nil
}

View File

@@ -1,328 +0,0 @@
package system
import (
"encoding/json"
"io"
"net/http"
"os"
"os/exec"
"os/user"
"sync"
"syscall"
"time"
"github.com/atlasos/calypso/internal/common/logger"
"github.com/creack/pty"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
)
const (
// WebSocket timeouts
writeWait = 10 * time.Second
pongWait = 60 * time.Second
pingPeriod = (pongWait * 9) / 10
)
var upgrader = websocket.Upgrader{
ReadBufferSize: 4096,
WriteBufferSize: 4096,
CheckOrigin: func(r *http.Request) bool {
// Allow all origins - in production, validate against allowed domains
return true
},
}
// TerminalSession manages a single terminal session
type TerminalSession struct {
conn *websocket.Conn
pty *os.File
cmd *exec.Cmd
logger *logger.Logger
mu sync.RWMutex
closed bool
username string
done chan struct{}
}
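// Wire protocol: binary frames carry raw terminal I/O in both directions,
// while text frames carry JSON control messages, e.g.
//   {"type":"input","data":"ls\n"}
//   {"type":"resize","cols":120,"rows":40}
//   {"type":"ping"} (answered with {"type":"pong"})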
// HandleTerminalWebSocket handles WebSocket connection for terminal
func HandleTerminalWebSocket(c *gin.Context, log *logger.Logger) {
// Verify authentication
userID, exists := c.Get("user_id")
if !exists {
log.Warn("Terminal WebSocket: unauthorized access", "ip", c.ClientIP())
c.JSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
return
}
username, _ := c.Get("username")
if username == nil {
username = userID
}
log.Info("Terminal WebSocket: connection attempt", "username", username, "ip", c.ClientIP())
// Upgrade connection
conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
log.Error("Terminal WebSocket: upgrade failed", "error", err)
return
}
log.Info("Terminal WebSocket: connection upgraded", "username", username)
// Create session
session := &TerminalSession{
conn: conn,
logger: log,
username: username.(string),
done: make(chan struct{}),
}
// Start terminal
if err := session.startPTY(); err != nil {
log.Error("Terminal WebSocket: failed to start PTY", "error", err, "username", username)
session.sendError(err.Error())
session.close()
return
}
// Handle messages and PTY output
go session.handleRead()
go session.handleWrite()
}
// startPTY starts the PTY session
func (s *TerminalSession) startPTY() error {
// Get user info
currentUser, err := user.Lookup(s.username)
if err != nil {
// Fallback to current user
currentUser, err = user.Current()
if err != nil {
return err
}
}
// Determine shell
shell := os.Getenv("SHELL")
if shell == "" {
shell = "/bin/bash"
}
// Create command
s.cmd = exec.Command(shell)
s.cmd.Env = append(os.Environ(),
"TERM=xterm-256color",
"HOME="+currentUser.HomeDir,
"USER="+currentUser.Username,
"USERNAME="+currentUser.Username,
)
s.cmd.Dir = currentUser.HomeDir
// Start PTY
ptyFile, err := pty.Start(s.cmd)
if err != nil {
return err
}
s.pty = ptyFile
// Set initial size
pty.Setsize(ptyFile, &pty.Winsize{
Rows: 24,
Cols: 80,
})
return nil
}
// handleRead handles incoming WebSocket messages
func (s *TerminalSession) handleRead() {
defer s.close()
// Set read deadline and pong handler
s.conn.SetReadDeadline(time.Now().Add(pongWait))
s.conn.SetPongHandler(func(string) error {
s.conn.SetReadDeadline(time.Now().Add(pongWait))
return nil
})
for {
select {
case <-s.done:
return
default:
messageType, data, err := s.conn.ReadMessage()
if err != nil {
if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway, websocket.CloseAbnormalClosure) {
s.logger.Error("Terminal WebSocket: read error", "error", err)
}
return
}
// Handle binary messages (raw input)
if messageType == websocket.BinaryMessage {
s.writeToPTY(data)
continue
}
// Handle text messages (JSON commands)
if messageType == websocket.TextMessage {
var msg map[string]interface{}
if err := json.Unmarshal(data, &msg); err != nil {
continue
}
switch msg["type"] {
case "input":
if data, ok := msg["data"].(string); ok {
s.writeToPTY([]byte(data))
}
case "resize":
if cols, ok1 := msg["cols"].(float64); ok1 {
if rows, ok2 := msg["rows"].(float64); ok2 {
s.resizePTY(uint16(cols), uint16(rows))
}
}
case "ping":
s.writeWS(websocket.TextMessage, []byte(`{"type":"pong"}`))
}
}
}
}
}
// handleWrite handles PTY output to WebSocket
func (s *TerminalSession) handleWrite() {
defer s.close()
ticker := time.NewTicker(pingPeriod)
defer ticker.Stop()
// Read from PTY and write to WebSocket
buffer := make([]byte, 4096)
for {
select {
case <-s.done:
return
case <-ticker.C:
// Send ping
if err := s.writeWS(websocket.PingMessage, nil); err != nil {
return
}
default:
// Read from PTY
if s.pty != nil {
n, err := s.pty.Read(buffer)
if err != nil {
if err != io.EOF {
s.logger.Error("Terminal WebSocket: PTY read error", "error", err)
}
return
}
if n > 0 {
// Write binary data to WebSocket
if err := s.writeWS(websocket.BinaryMessage, buffer[:n]); err != nil {
return
}
}
}
}
}
}
// writeToPTY writes data to PTY
func (s *TerminalSession) writeToPTY(data []byte) {
s.mu.RLock()
closed := s.closed
pty := s.pty
s.mu.RUnlock()
if closed || pty == nil {
return
}
if _, err := pty.Write(data); err != nil {
s.logger.Error("Terminal WebSocket: PTY write error", "error", err)
}
}
// resizePTY resizes the PTY
func (s *TerminalSession) resizePTY(cols, rows uint16) {
s.mu.RLock()
closed := s.closed
ptyFile := s.pty
s.mu.RUnlock()
if closed || ptyFile == nil {
return
}
// Use the package-level pty.Setsize helper to apply the new dimensions
pty.Setsize(ptyFile, &pty.Winsize{
Cols: cols,
Rows: rows,
})
}
// writeWS writes message to WebSocket
func (s *TerminalSession) writeWS(messageType int, data []byte) error {
s.mu.RLock()
closed := s.closed
conn := s.conn
s.mu.RUnlock()
if closed || conn == nil {
return io.ErrClosedPipe
}
conn.SetWriteDeadline(time.Now().Add(writeWait))
return conn.WriteMessage(messageType, data)
}
// sendError sends error message
func (s *TerminalSession) sendError(errMsg string) {
msg := map[string]interface{}{
"type": "error",
"error": errMsg,
}
data, _ := json.Marshal(msg)
s.writeWS(websocket.TextMessage, data)
}
// close closes the terminal session
func (s *TerminalSession) close() {
s.mu.Lock()
defer s.mu.Unlock()
if s.closed {
return
}
s.closed = true
close(s.done)
// Close PTY
if s.pty != nil {
s.pty.Close()
}
// Kill process
if s.cmd != nil && s.cmd.Process != nil {
s.cmd.Process.Signal(syscall.SIGTERM)
time.Sleep(100 * time.Millisecond)
if s.cmd.ProcessState == nil || !s.cmd.ProcessState.Exited() {
s.cmd.Process.Kill()
}
}
// Close WebSocket
if s.conn != nil {
s.conn.Close()
}
s.logger.Info("Terminal WebSocket: session closed", "username", s.username)
}

View File

@@ -1 +0,0 @@
/opt/calypso/conf/bacula

View File

@@ -1,788 +0,0 @@
# Coding Standards
## AtlasOS - Calypso Backup Appliance
**Version:** 1.0.0-alpha
**Date:** 2025-01-XX
**Status:** Active
---
## 1. Overview
This document defines the coding standards and best practices for the Calypso project. All code must adhere to these standards to ensure consistency, maintainability, and quality.
## 2. General Principles
### 2.1 Code Quality
- **Readability**: Code should be self-documenting and easy to understand
- **Maintainability**: Code should be easy to modify and extend
- **Consistency**: Follow consistent patterns across the codebase
- **Simplicity**: Prefer simple solutions over complex ones
- **DRY**: Don't Repeat Yourself - avoid code duplication
### 2.2 Code Review
- All code must be reviewed before merging
- Reviewers should check for adherence to these standards
- Address review comments before merging
### 2.3 Documentation
- Document complex logic and algorithms
- Keep comments up-to-date with code changes
- Write clear commit messages
---
## 3. Backend (Go) Standards
### 3.1 Code Formatting
#### 3.1.1 Use gofmt
- Always run `gofmt` before committing
- Use `goimports` for import organization
- Configure IDE to format on save
#### 3.1.2 Line Length
- Maximum line length: 100 characters
- Break long lines for readability
#### 3.1.3 Indentation
- Use tabs for indentation (not spaces)
- Tab width: 4 spaces equivalent
### 3.2 Naming Conventions
#### 3.2.1 Packages
```go
// Good: lowercase, single word, descriptive
package storage
package auth
package monitoring
// Bad: mixed case, abbreviations
package Storage
package Auth
package Mon
```
#### 3.2.2 Functions
```go
// Good: camelCase, descriptive
func createZFSPool(name string) error
func listNetworkInterfaces() ([]Interface, error)
func validateUserInput(input string) error
// Bad: unclear names, abbreviations
func create(name string) error
func list() ([]Interface, error)
func val(input string) error
```
#### 3.2.3 Variables
```go
// Good: camelCase, descriptive
var poolName string
var networkInterfaces []Interface
var isActive bool
// Bad: single letters, unclear
var n string
var ifs []Interface
var a bool
```
#### 3.2.4 Constants
```go
// Good: PascalCase for exported, camelCase for unexported
const DefaultPort = 8080
const maxRetries = 3
// Bad: inconsistent casing
const defaultPort = 8080
const MAX_RETRIES = 3
```
#### 3.2.5 Types and Structs
```go
// Good: PascalCase, descriptive
type ZFSPool struct {
ID string
Name string
Status string
}
// Bad: unclear names
type Pool struct {
I string
N string
S string
}
```
### 3.3 File Organization
#### 3.3.1 File Structure
```go
// 1. Package declaration
package storage
// 2. Imports (standard, third-party, local)
import (
"context"
"fmt"
"github.com/gin-gonic/gin"
"github.com/atlasos/calypso/internal/common/database"
)
// 3. Constants
const (
defaultTimeout = 30 * time.Second
)
// 4. Types
type Service struct {
db *database.DB
}
// 5. Functions
func NewService(db *database.DB) *Service {
return &Service{db: db}
}
```
#### 3.3.2 File Naming
- Use lowercase with underscores: `handler.go`, `service.go`
- Test files: `handler_test.go`
- One main type per file when possible
### 3.4 Error Handling
#### 3.4.1 Error Return
```go
// Good: always return error as last value
func createPool(name string) (*Pool, error) {
if name == "" {
return nil, fmt.Errorf("pool name cannot be empty")
}
// ...
}
// Bad: panic, no error return
func createPool(name string) *Pool {
if name == "" {
panic("pool name cannot be empty")
}
}
```
#### 3.4.2 Error Wrapping
```go
// Good: wrap errors with context
if err != nil {
return fmt.Errorf("failed to create pool %s: %w", name, err)
}
// Bad: lose error context
if err != nil {
return err
}
```
#### 3.4.3 Error Messages
```go
// Good: clear, actionable error messages
return fmt.Errorf("pool '%s' already exists", name)
return fmt.Errorf("insufficient disk space: need %d bytes, have %d bytes", needed, available)
// Bad: unclear error messages
return fmt.Errorf("error")
return fmt.Errorf("failed")
```
### 3.5 Comments
#### 3.5.1 Package Comments
```go
// Package storage provides storage management functionality including
// ZFS pool and dataset operations, disk discovery, and storage repository management.
package storage
```
#### 3.5.2 Function Comments
```go
// CreateZFSPool creates a new ZFS pool with the specified configuration.
// It validates the pool name, checks disk availability, and creates the pool.
// Returns an error if the pool cannot be created.
func CreateZFSPool(ctx context.Context, name string, disks []string) error {
// ...
}
```
#### 3.5.3 Inline Comments
```go
// Good: explain why, not what
// Retry up to 3 times to handle transient network errors
for i := 0; i < 3; i++ {
// ...
}
// Bad: obvious comments
// Loop 3 times
for i := 0; i < 3; i++ {
// ...
}
```
### 3.6 Testing
#### 3.6.1 Test File Naming
- Test files: `*_test.go`
- Test functions: `TestFunctionName`
- Benchmark functions: `BenchmarkFunctionName`
#### 3.6.2 Test Structure
```go
func TestCreateZFSPool(t *testing.T) {
tests := []struct {
name string
input string
wantErr bool
}{
{
name: "valid pool name",
input: "tank",
wantErr: false,
},
{
name: "empty pool name",
input: "",
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := createPool(tt.input)
if (err != nil) != tt.wantErr {
t.Errorf("createPool() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
```
### 3.7 Concurrency
#### 3.7.1 Context Usage
```go
// Good: always accept context as first parameter
func (s *Service) CreatePool(ctx context.Context, name string) error {
// Use context for cancellation and timeout
ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
// ...
}
// Bad: no context
func (s *Service) CreatePool(name string) error {
// ...
}
```
#### 3.7.2 Goroutines
```go
// Good: use context for cancellation
go func() {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
// ...
}()
// Bad: no cancellation mechanism
go func() {
// ...
}()
```
### 3.8 Database Operations
#### 3.8.1 Query Context
```go
// Good: use context for queries
rows, err := s.db.QueryContext(ctx, query, args...)
// Bad: no context
rows, err := s.db.Query(query, args...)
```
#### 3.8.2 Transactions
```go
// Good: use transactions for multiple operations
tx, err := s.db.BeginTx(ctx, nil)
if err != nil {
return err
}
defer tx.Rollback()
// ... operations ...
if err := tx.Commit(); err != nil {
return err
}
```
---
## 4. Frontend (TypeScript/React) Standards
### 4.1 Code Formatting
#### 4.1.1 Use Prettier
- Configure Prettier for consistent formatting
- Format on save enabled
- Maximum line length: 100 characters
#### 4.1.2 Indentation
- Use 2 spaces for indentation
- Consistent spacing in JSX
### 4.2 Naming Conventions
#### 4.2.1 Components
```typescript
// Good: PascalCase, descriptive
function StoragePage() { }
function CreatePoolModal() { }
function NetworkInterfaceCard() { }
// Bad: unclear names
function Page() { }
function Modal() { }
function Card() { }
```
#### 4.2.2 Functions
```typescript
// Good: camelCase, descriptive
function createZFSPool(name: string): Promise<ZFSPool> { }
function handleSubmit(event: React.FormEvent): void { }
function formatBytes(bytes: number): string { }
// Bad: unclear names
function create(name: string) { }
function handle(e: any) { }
function fmt(b: number) { }
```
#### 4.2.3 Variables
```typescript
// Good: camelCase, descriptive
const poolName = 'tank'
const networkInterfaces: NetworkInterface[] = []
const isActive = true
// Bad: unclear names
const n = 'tank'
const ifs: any[] = []
const a = true
```
#### 4.2.4 Constants
```typescript
// Good: UPPER_SNAKE_CASE for constants
const DEFAULT_PORT = 8080
const MAX_RETRIES = 3
const API_BASE_URL = '/api/v1'
// Bad: inconsistent casing
const defaultPort = 8080
const maxRetries = 3
```
#### 4.2.5 Types and Interfaces
```typescript
// Good: PascalCase, descriptive
interface ZFSPool {
id: string
name: string
status: string
}
type PoolStatus = 'online' | 'offline' | 'degraded'
// Bad: unclear names
interface Pool {
i: string
n: string
s: string
}
```
### 4.3 File Organization
#### 4.3.1 Component Structure
```typescript
// 1. Imports (React, third-party, local)
import { useState } from 'react'
import { useQuery } from '@tanstack/react-query'
import { zfsApi } from '@/api/storage'
// 2. Types/Interfaces
interface Props {
poolId: string
}
// 3. Component
export default function PoolDetail({ poolId }: Props) {
// 4. Hooks
const [isLoading, setIsLoading] = useState(false)
// 5. Queries
const { data: pool } = useQuery({
queryKey: ['pool', poolId],
queryFn: () => zfsApi.getPool(poolId),
})
// 6. Handlers
const handleDelete = () => {
// ...
}
// 7. Effects
useEffect(() => {
// ...
}, [poolId])
// 8. Render
return (
// JSX
)
}
```
#### 4.3.2 File Naming
- Components: `PascalCase.tsx` (e.g., `StoragePage.tsx`)
- Utilities: `camelCase.ts` (e.g., `formatBytes.ts`)
- Types: `camelCase.ts` or `types.ts`
- Hooks: `useCamelCase.ts` (e.g., `useStorage.ts`)
### 4.4 TypeScript
#### 4.4.1 Type Safety
```typescript
// Good: explicit types
function createPool(name: string): Promise<ZFSPool> {
// ...
}
// Bad: any types
function createPool(name: any): any {
// ...
}
```
#### 4.4.2 Interface Definitions
```typescript
// Good: clear interface definitions
interface ZFSPool {
id: string
name: string
status: 'online' | 'offline' | 'degraded'
totalCapacityBytes: number
usedCapacityBytes: number
}
// Bad: unclear or missing types
interface Pool {
id: any
name: any
status: any
}
```
### 4.5 React Patterns
#### 4.5.1 Hooks
```typescript
// Good: custom hooks for reusable logic
function useZFSPool(poolId: string) {
return useQuery({
queryKey: ['pool', poolId],
queryFn: () => zfsApi.getPool(poolId),
})
}
// Usage
const { data: pool } = useZFSPool(poolId)
```
#### 4.5.2 Component Composition
```typescript
// Good: small, focused components
function PoolCard({ pool }: { pool: ZFSPool }) {
return (
<div>
<PoolHeader pool={pool} />
<PoolStats pool={pool} />
<PoolActions pool={pool} />
</div>
)
}
// Bad: large, monolithic components
function PoolCard({ pool }: { pool: ZFSPool }) {
// 500+ lines of JSX
}
```
#### 4.5.3 State Management
```typescript
// Good: use React Query for server state
const { data, isLoading } = useQuery({
queryKey: ['pools'],
queryFn: zfsApi.listPools,
})
// Good: use local state for UI state
const [isModalOpen, setIsModalOpen] = useState(false)
// Good: use Zustand for global UI state
const { user, setUser } = useAuthStore()
```
### 4.6 Error Handling
#### 4.6.1 Error Boundaries
```typescript
// Good: use error boundaries
function ErrorBoundary({ children }: { children: React.ReactNode }) {
// ...
}
// Usage
<ErrorBoundary>
<App />
</ErrorBoundary>
```
#### 4.6.2 Error Handling in Queries
```typescript
// Good: handle errors in queries
const { data, error, isLoading } = useQuery({
queryKey: ['pools'],
queryFn: zfsApi.listPools,
onError: (error) => {
console.error('Failed to load pools:', error)
// Show user-friendly error message
},
})
```
### 4.7 Styling
#### 4.7.1 TailwindCSS
```typescript
// Good: use Tailwind classes
<div className="flex items-center gap-4 p-6 bg-card-dark rounded-lg border border-border-dark">
<h2 className="text-lg font-bold text-white">Storage Pools</h2>
</div>
// Bad: inline styles
<div style={{ display: 'flex', padding: '24px', backgroundColor: '#18232e' }}>
<h2 style={{ fontSize: '18px', fontWeight: 'bold', color: 'white' }}>Storage Pools</h2>
</div>
```
#### 4.7.2 Class Organization
```typescript
// Good: logical grouping
className="flex items-center gap-4 p-6 bg-card-dark rounded-lg border border-border-dark hover:bg-border-dark transition-colors"
// Bad: random order
className="p-6 flex border rounded-lg items-center gap-4 bg-card-dark border-border-dark"
```
### 4.8 Testing
#### 4.8.1 Component Testing
```typescript
// Good: test component behavior
describe('StoragePage', () => {
it('displays pools when loaded', () => {
render(<StoragePage />)
expect(screen.getByText('tank')).toBeInTheDocument()
})
it('shows loading state', () => {
render(<StoragePage />)
expect(screen.getByText('Loading...')).toBeInTheDocument()
})
})
```
---
## 5. Git Commit Standards
### 5.1 Commit Message Format
```
<type>(<scope>): <subject>

<body>

<footer>
```
### 5.2 Commit Types
- **feat**: New feature
- **fix**: Bug fix
- **docs**: Documentation changes
- **style**: Code style changes (formatting, etc.)
- **refactor**: Code refactoring
- **test**: Test additions or changes
- **chore**: Build process or auxiliary tool changes
### 5.3 Commit Examples
```
feat(storage): add ZFS pool creation endpoint

Add POST /api/v1/storage/zfs/pools endpoint with validation
and error handling.

Closes #123

fix(shares): correct dataset_id field in create share

The frontend was sending dataset_name instead of dataset_id.
Updated to use UUID from dataset selection.

docs: update API documentation for snapshot endpoints

refactor(auth): simplify JWT token validation logic
```
### 5.4 Branch Naming
- **feature/**: New features (e.g., `feature/object-storage`)
- **fix/**: Bug fixes (e.g., `fix/share-creation-error`)
- **docs/**: Documentation (e.g., `docs/api-documentation`)
- **refactor/**: Refactoring (e.g., `refactor/storage-service`)
---
## 6. Code Review Guidelines
### 6.1 Review Checklist
- [ ] Code follows naming conventions
- [ ] Code is properly formatted
- [ ] Error handling is appropriate
- [ ] Tests are included for new features
- [ ] Documentation is updated
- [ ] No security vulnerabilities
- [ ] Performance considerations addressed
- [ ] No commented-out code
- [ ] No console.log statements (use proper logging)
### 6.2 Review Comments
- Be constructive and respectful
- Explain why, not just what
- Suggest improvements, not just point out issues
- Approve when code meets standards
---
## 7. Documentation Standards
### 7.1 Code Comments
- Document complex logic
- Explain "why" not "what"
- Keep comments up-to-date
### 7.2 API Documentation
- Document all public APIs
- Include parameter descriptions
- Include return value descriptions
- Include error conditions
### 7.3 README Files
- Keep README files updated
- Include setup instructions
- Include usage examples
- Include troubleshooting tips
---
## 8. Performance Standards
### 8.1 Backend
- Database queries should be optimized
- Use indexes appropriately
- Avoid N+1 query problems (see the sketch below)
- Use connection pooling
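To make the N+1 point concrete, here is a minimal sketch against a hypothetical `pools`/`datasets` schema (table and column names are illustrative, not the actual Calypso schema):
```go
// Bad: one datasets query per pool (N+1 round trips)
for _, pool := range pools {
	rows, err := db.QueryContext(ctx,
		"SELECT name FROM datasets WHERE pool_id = $1", pool.ID)
	// ... scan rows for this single pool ...
}

// Good: one query for all pools, grouped in memory afterwards
rows, err := db.QueryContext(ctx, "SELECT pool_id, name FROM datasets")
if err != nil {
	return err
}
defer rows.Close()
datasetsByPool := make(map[string][]string)
for rows.Next() {
	var poolID, name string
	if err := rows.Scan(&poolID, &name); err != nil {
		return err
	}
	datasetsByPool[poolID] = append(datasetsByPool[poolID], name)
}
```
Grouping in memory trades one round trip per pool for a single query plus a map build, which is almost always cheaper.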
### 8.2 Frontend
- Minimize re-renders
- Use React.memo for expensive components
- Lazy load routes
- Optimize bundle size
---
## 9. Security Standards
### 9.1 Input Validation
- Validate all user inputs
- Sanitize inputs before use
- Use parameterized queries (see the sketch below)
- Escape output
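A minimal sketch of the parameterized-query rule, using a hypothetical `users` table:
```go
// Bad: concatenating user input into SQL invites injection
query := "SELECT id FROM users WHERE name = '" + name + "'"
row := db.QueryRowContext(ctx, query)

// Good: pass the value as a bound parameter; the driver handles escaping
row := db.QueryRowContext(ctx,
	"SELECT id FROM users WHERE name = $1", name)
```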
### 9.2 Authentication
- Never store passwords in plaintext
- Use secure token storage
- Implement proper session management
- Handle token expiration
### 9.3 Authorization
- Check permissions on every request
- Use principle of least privilege
- Log security events
- Handle authorization errors properly
---
## 10. Tools and Configuration
### 10.1 Backend Tools
- **gofmt**: Code formatting
- **goimports**: Import organization
- **golint**: Linting
- **go vet**: Static analysis
### 10.2 Frontend Tools
- **Prettier**: Code formatting
- **ESLint**: Linting
- **TypeScript**: Type checking
- **Vite**: Build tool
---
## 11. Exceptions
### 11.1 When to Deviate
- Performance-critical code may require optimization
- Legacy code integration may require different patterns
- Third-party library constraints
### 11.2 Documenting Exceptions
- Document why standards are not followed
- Include comments explaining deviations
- Review exceptions during code review
---
## Document History
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0.0-alpha | 2025-01-XX | Development Team | Initial coding standards document |

View File

@@ -1,102 +0,0 @@
# Calypso System Architecture Document
Adastra Storage & Backup Appliance
Version: 1.0 (Dev Release V1)
Status: Baseline Architecture
## 1. Purpose & Scope
This document describes the system architecture of Calypso as an integrated storage and backup appliance. It aligns with the System Requirements Specification (SRS) and System Design Specification (SDS), and serves as a reference for architects, engineers, operators, and auditors.
## 2. Architectural Principles
- Appliance-first design
- Clear separation of binaries, configuration, and data
- ZFS-native storage architecture
- Upgrade and rollback safety
- Minimal external dependencies
## 3. High-Level Architecture
Calypso operates as a single-node appliance where the control plane orchestrates storage, backup, object storage, tape, and iSCSI subsystems through a unified API and UI.
## 4. Deployment Model
- Single-node deployment
- Bare metal or VM (bare metal recommended)
- Linux-based OS (LTS)
## 5. Centralized Filesystem Architecture
### 5.1 Domain Separation
| Domain | Location |
|------|---------|
| Binaries | /opt/adastra/calypso |
| Configuration | /etc/calypso |
| Data (ZFS) | /srv/calypso |
| Logs | /var/log/calypso |
| Runtime | /var/lib/calypso, /run/calypso |
### 5.2 Binary Layout
```
/opt/adastra/calypso/
releases/
1.0.0/
bin/
web/
migrations/
scripts/
current -> releases/1.0.0
third_party/
```
### 5.3 Configuration Layout
```
/etc/calypso/
calypso.yaml
secrets.env
tls/
integrations/
system/
```
### 5.4 ZFS Data Layout
```
/srv/calypso/
db/
backups/
object/
shares/
vtl/
iscsi/
uploads/
cache/
_system/
```
## 6. Component Architecture
- Calypso Control Plane (Go-based API)
- ZFS (core storage)
- Bacula (backup)
- MinIO (object storage)
- SCST (iSCSI)
- MHVTL (virtual tape library)
## 7. Data Flow
- User actions handled by Calypso API
- Operations executed on ZFS datasets
- Metadata stored centrally in ZFS
## 8. Security Baseline
- Service isolation
- Permission-based filesystem access
- Secrets separation
- Controlled subsystem access
## 9. Upgrade & Rollback
- Versioned releases
- Atomic switch via symlink (sketched below)
- Data preserved independently in ZFS
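A minimal sketch of the atomic switch, assuming the binary layout from section 5.2 (the target version is illustrative):
```
// Stage a symlink to the new release, then atomically replace
// "current" via rename(2); readers never observe a partial release.
newRelease := "/opt/adastra/calypso/releases/1.0.1"
tmpLink := "/opt/adastra/calypso/current.tmp"
if err := os.Symlink(newRelease, tmpLink); err != nil {
	return err
}
if err := os.Rename(tmpLink, "/opt/adastra/calypso/current"); err != nil {
	os.Remove(tmpLink)
	return err
}
```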
## 10. Non-Goals (V1)
- Multi-node clustering
- Kubernetes orchestration
- Inline malware scanning
## 11. Summary
Calypso provides a clean, upgrade-safe, and enterprise-grade appliance architecture, forming a strong foundation for future HA and immutable designs.

View File

@@ -1,468 +0,0 @@
# Infrastructure & Environment Review
## AtlasOS - Calypso Backup Appliance
**Review Date:** 2025-01-XX
**Reviewer:** Development Team
**Status:** In Progress
---
## Executive Summary
This document reviews the current infrastructure and environment implementation against the `Calypso_System_Architecture.md` specification. The review identifies alignment, gaps, and recommendations for improvement.
**Overall Status:** **Mostly Aligned** with minor deviations
---
## 1. Architecture Alignment Review
### 1.1 High-Level Architecture ✅ **ALIGNED**
**Documentation Spec:**
- Single-node appliance
- Control plane orchestrates storage, backup, object storage, tape, and iSCSI
- Unified API and UI
**Current Implementation:**
- ✅ Single-node deployment model
- ✅ Go-based API (Calypso Control Plane)
- ✅ React-based UI
- ✅ Unified API endpoints for all subsystems
**Status:** ✅ **FULLY ALIGNED**
---
### 1.2 Deployment Model ✅ **ALIGNED**
**Documentation Spec:**
- Single-node deployment
- Bare metal or VM (bare metal recommended)
- Linux-based OS (LTS)
**Current Implementation:**
- ✅ Single-node deployment
- ✅ Ubuntu 24.04 LTS (as per install script)
- ✅ Systemd service management
- ✅ Supports both bare metal and VM
**Status:** ✅ **FULLY ALIGNED**
---
## 2. Filesystem Architecture Review
### 2.1 Domain Separation ⚠️ **PARTIALLY ALIGNED**
**Documentation Spec:**
```
Domain          | Location
----------------|-------------------------------
Binaries        | /opt/adastra/calypso
Configuration   | /etc/calypso
Data (ZFS)      | /srv/calypso
Logs            | /var/log/calypso
Runtime         | /var/lib/calypso, /run/calypso
```
**Current Implementation:**
- ⚠️ **Binaries**: Currently in `/development/calypso/backend/bin/` (development) or the path referenced by the systemd unit
- ✅ **Configuration**: Uses `/etc/calypso/config.yaml` (as per the main.go flag default)
- ⚠️ **Data**: Not explicitly organized under the `/srv/calypso/` structure
- ⚠️ **Logs**: Not explicitly organized under `/var/log/calypso/`
- ⚠️ **Runtime**: Not explicitly organized under `/var/lib/calypso/` or `/run/calypso/`
**Gaps Identified:**
1. Binary deployment structure not following `/opt/adastra/calypso/releases/` pattern
2. Data directory structure not organized per spec
3. Log directory structure not organized per spec
4. Runtime directory structure not organized per spec
**Recommendations:**
- [ ] Create deployment script to organize binaries per spec
- [ ] Create data directory structure under `/srv/calypso/`
- [ ] Configure logging to use `/var/log/calypso/`
- [ ] Configure runtime directories
**Status:** ⚠️ **PARTIALLY ALIGNED** - Structure exists but not fully organized per spec
---
### 2.2 Binary Layout ⚠️ **NOT ALIGNED**
**Documentation Spec:**
```
/opt/adastra/calypso/
  releases/
    1.0.0/
      bin/
      web/
      migrations/
      scripts/
  current -> releases/1.0.0
  third_party/
```
**Current Implementation:**
- ❌ Binaries in `backend/bin/calypso-api` (development)
- ❌ No versioned release structure
- ❌ No symlink to current version
- ❌ Frontend built to `frontend/dist/` (not organized per spec)
**Gaps Identified:**
1. No versioned release structure
2. No symlink mechanism for atomic upgrades
3. Frontend assets not organized per spec
**Recommendations:**
- [ ] Create release packaging script
- [ ] Implement versioned release structure
- [ ] Create symlink mechanism for atomic upgrades
- [ ] Organize frontend assets per spec
**Status:** ❌ **NOT ALIGNED** - Needs implementation
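As a starting point for the recommendations above, a hypothetical packaging script might stage the artifacts like this (paths and version are assumptions, not existing project tooling):
```bash
#!/usr/bin/env bash
# Hypothetical: stage build artifacts into a versioned release tree.
set -euo pipefail
VERSION=1.0.0
DEST="/opt/adastra/calypso/releases/${VERSION}"

sudo mkdir -p "${DEST}"/{bin,web,migrations,scripts}
sudo cp backend/bin/calypso-api "${DEST}/bin/"
sudo cp -r frontend/dist/. "${DEST}/web/"

# First install only: point `current` at this release.
# Subsequent upgrades should switch atomically (ln + mv -T).
sudo ln -sfn "${DEST}" /opt/adastra/calypso/current
```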
---
### 2.3 Configuration Layout ✅ **ALIGNED**
**Documentation Spec:**
```
/etc/calypso/
  calypso.yaml
  secrets.env
  tls/
  integrations/
  system/
```
**Current Implementation:**
- ✅ Configuration file path: `/etc/calypso/config.yaml` (as per main.go)
-`config.yaml.example` exists in repository
- ⚠️ Other directories (secrets.env, tls/, integrations/, system/) not explicitly created
**Status:** ✅ **MOSTLY ALIGNED** - Main config path correct, subdirectories can be added
---
### 2.4 ZFS Data Layout ⚠️ **NOT IMPLEMENTED**
**Documentation Spec:**
```
/srv/calypso/
  db/
  backups/
  object/
  shares/
  vtl/
  iscsi/
  uploads/
  cache/
  _system/
```
**Current Implementation:**
- ❌ No explicit `/srv/calypso/` directory structure
- ⚠️ ZFS datasets may be created but not organized per this structure
- ⚠️ Data stored in various locations (database in PostgreSQL default, etc.)
**Gaps Identified:**
1. No centralized data directory structure
2. ZFS datasets not organized per spec
3. Data scattered across system
**Recommendations:**
- [ ] Create `/srv/calypso/` directory structure
- [ ] Organize ZFS datasets per spec
- [ ] Update services to use centralized data locations
**Status:** ❌ **NOT IMPLEMENTED** - Needs implementation
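A sketch of how the datasets could be organized, assuming a pool named `tank` (pool name and layout are illustrative):
```bash
# One dataset per data domain; children inherit the mountpoint prefix,
# so tank/calypso/db mounts at /srv/calypso/db, and so on.
sudo zfs create -o mountpoint=/srv/calypso tank/calypso
for ds in db backups object shares vtl iscsi uploads cache _system; do
  sudo zfs create "tank/calypso/${ds}"
done
zfs list -r tank/calypso
```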
---
## 3. Component Architecture Review
### 3.1 Core Components ✅ **ALIGNED**
**Documentation Spec:**
- Calypso Control Plane (Go-based API) ✅
- ZFS (core storage) ✅
- Bacula (backup) ✅
- MinIO (object storage) ⚠️
- SCST (iSCSI) ✅
- MHVTL (virtual tape library) ✅
**Current Implementation:**
- ✅ Go-based API implemented
- ✅ ZFS integration implemented
- ✅ Bacula/Bareos integration implemented
- ⚠️ Object storage: UI exists but backend integration not confirmed
- ✅ SCST integration implemented
- ✅ MHVTL integration implemented
**Status:** ✅ **MOSTLY ALIGNED** - Object storage backend needs verification
---
## 4. Technology Stack Review
### 4.1 Backend Stack ✅ **ALIGNED**
**Documentation Spec:**
- Go-based API
- PostgreSQL database
- Systemd service management
**Current Implementation:**
- ✅ Go 1.21+ (go.mod confirms)
- ✅ PostgreSQL (database package confirms)
- ✅ Systemd services (deploy/systemd/ confirms)
- ✅ Gin web framework
- ✅ Structured logging (zerolog)
**Status:** ✅ **FULLY ALIGNED**
---
### 4.2 Frontend Stack ✅ **ALIGNED**
**Documentation Spec:**
- React-based UI
- Modern build tooling
**Current Implementation:**
- ✅ React 18 with TypeScript
- ✅ Vite build tool
- ✅ TailwindCSS styling
- ✅ TanStack Query for data fetching
- ✅ React Router for navigation
**Status:** ✅ **FULLY ALIGNED**
---
### 4.3 External Dependencies ✅ **ALIGNED**
**Documentation Spec:**
- ZFS tools
- SCST
- Bacula/Bareos
- MHVTL
- System utilities
**Current Implementation:**
- ✅ ZFS integration (storage/zfs.go)
- ✅ SCST integration (scst/ package)
- ✅ Bacula/Bareos integration (backup/ package)
- ✅ MHVTL integration (tape_vtl/ package)
- ✅ System utilities (system/ package)
**Status:** ✅ **FULLY ALIGNED**
---
## 5. Security Architecture Review
### 5.1 Service Isolation ✅ **ALIGNED**
**Documentation Spec:**
- Service isolation
- Permission-based filesystem access
- Secrets separation
- Controlled subsystem access
**Current Implementation:**
- ✅ Systemd service isolation
- ✅ RBAC permission system (IAM package)
- ✅ JWT authentication
- ✅ Permission middleware
- ✅ Audit logging
**Status:** ✅ **FULLY ALIGNED**
---
## 6. Upgrade & Rollback Review
### 6.1 Version Management ❌ **NOT IMPLEMENTED**
**Documentation Spec:**
- Versioned releases
- Atomic switch via symlink
- Data preserved independently in ZFS
**Current Implementation:**
- ❌ No versioned release structure
- ❌ No symlink mechanism
- ⚠️ Data preservation depends on database backups
**Gaps Identified:**
1. No release versioning system
2. No atomic upgrade mechanism
3. No rollback capability
**Recommendations:**
- [ ] Implement release versioning
- [ ] Create symlink-based upgrade mechanism
- [ ] Document rollback procedures
**Status:** ❌ **NOT IMPLEMENTED** - Needs implementation
---
## 7. Data Flow Review
### 7.1 Request Flow ✅ **ALIGNED**
**Documentation Spec:**
- User actions handled by Calypso API
- Operations executed on ZFS datasets
- Metadata stored centrally in ZFS
**Current Implementation:**
- ✅ User actions via API
- ✅ ZFS operations via storage service
- ⚠️ Metadata stored in PostgreSQL (not ZFS)
**Note:** Current implementation uses PostgreSQL for metadata, which is acceptable but differs from spec. This is actually a better practice for metadata management.
**Status:** ✅ **FUNCTIONALLY ALIGNED** (with improvement)
---
## 8. Environment Configuration Review
### 8.1 Development Environment ✅ **ALIGNED**
**Current Implementation:**
- ✅ Development setup in `/development/calypso/`
- ✅ Separate dev and production configs
- ✅ Development systemd service
- ✅ Build scripts
**Status:** ✅ **ALIGNED**
---
### 8.2 Production Environment ⚠️ **NEEDS IMPROVEMENT**
**Gaps Identified:**
1. No production deployment script
2. No production directory structure setup
3. No production configuration templates
**Recommendations:**
- [ ] Create production deployment script
- [ ] Set up production directory structure
- [ ] Create production configuration templates
**Status:** ⚠️ **NEEDS IMPROVEMENT**
---
## 9. Summary of Findings
### 9.1 Fully Aligned ✅
- High-level architecture
- Deployment model
- Component architecture
- Technology stack
- Security architecture
- Request/data flow
- Development environment
### 9.2 Partially Aligned ⚠️
- Filesystem domain separation (structure exists but not fully organized)
- Configuration layout (main path correct, subdirectories can be added)
### 9.3 Not Aligned ❌
- Binary layout (no versioned releases)
- ZFS data layout (not organized per spec)
- Upgrade & rollback (not implemented)
---
## 10. Recommendations
### 10.1 High Priority
1. **Implement Binary Layout Structure**
- Create `/opt/adastra/calypso/releases/` structure
- Implement versioned releases
- Create symlink mechanism
2. **Organize Data Directory Structure**
- Create `/srv/calypso/` with subdirectories
- Organize ZFS datasets per spec
- Update services to use centralized locations
3. **Implement Upgrade & Rollback**
- Version management system
- Atomic upgrade mechanism
- Rollback procedures
### 10.2 Medium Priority
1. **Complete Configuration Layout**
- Create subdirectories (tls/, integrations/, system/)
- Organize secrets.env
2. **Production Deployment**
- Production deployment script
- Production directory setup
- Production configuration templates
### 10.3 Low Priority
1. **Log Directory Organization**
- Configure logging to `/var/log/calypso/`
- Log rotation configuration
2. **Runtime Directory Organization**
- Configure runtime directories
- PID file management
---
## 11. Action Items
### Immediate Actions
- [ ] Review and approve this assessment
- [ ] Prioritize gaps based on business needs
- [ ] Create implementation plan for high-priority items
### Short-term (1-2 weeks)
- [ ] Implement binary layout structure
- [ ] Organize data directory structure
- [ ] Create production deployment script
### Medium-term (1 month)
- [ ] Implement upgrade & rollback mechanism
- [ ] Complete configuration layout
- [ ] Organize log and runtime directories
---
## 12. Conclusion
The current infrastructure and environment implementation is **functionally aligned** with the architecture specification in terms of core functionality and component integration. However, there are **structural gaps** in filesystem organization, binary deployment, and upgrade/rollback mechanisms.
**Key Strengths:**
- ✅ Solid component architecture
- ✅ Good security implementation
- ✅ Proper technology stack
- ✅ Functional data flow
**Key Gaps:**
- ❌ Filesystem organization per spec
- ❌ Versioned release structure
- ❌ Upgrade/rollback mechanism
**Overall Assessment:** The system is **production-ready for functionality** but needs **structural improvements** for enterprise-grade deployment and maintenance.
---
## Document History
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | 2025-01-XX | Development Team | Initial infrastructure review |

View File

@@ -1,82 +0,0 @@
# AtlasOS - Calypso Documentation
## Alpha Release
This directory contains the Software Requirements Specification (SRS) and Software Design Specification (SDS) documentation for the Calypso backup appliance management system.
## Documentation Structure
### Software Requirements Specification (SRS)
Located in `srs/` directory:
- **SRS-00-Overview.md**: Overview and introduction
- **SRS-01-Storage-Management.md**: ZFS storage management requirements
- **SRS-02-File-Sharing.md**: SMB/NFS share management requirements
- **SRS-03-iSCSI-Management.md**: iSCSI target management requirements
- **SRS-04-Tape-Library-Management.md**: Physical and VTL management requirements
- **SRS-05-Backup-Management.md**: Bacula/Bareos integration requirements
- **SRS-06-Object-Storage.md**: S3-compatible object storage requirements
- **SRS-07-Snapshot-Replication.md**: ZFS snapshot and replication requirements
- **SRS-08-System-Management.md**: System configuration and management requirements
- **SRS-09-Monitoring-Alerting.md**: Monitoring and alerting requirements
- **SRS-10-IAM.md**: Identity and access management requirements
- **SRS-11-User-Interface.md**: User interface and experience requirements
### Software Design Specification (SDS)
Located in `sds/` directory:
- **SDS-00-Overview.md**: Design overview and introduction
- **SDS-01-System-Architecture.md**: System architecture and component design
- **SDS-02-Database-Design.md**: Database schema and data models
- **SDS-03-API-Design.md**: REST API design and endpoints
- **SDS-04-Security-Design.md**: Security architecture and implementation
- **SDS-05-Integration-Design.md**: External system integration patterns
### Coding Standards
- **CODING-STANDARDS.md**: Code style, naming conventions, and best practices for Go and TypeScript/React
### Infrastructure Review
- **INFRASTRUCTURE-REVIEW.md**: Review of current infrastructure and environment against architecture specification
- **Calypso_System_Architecture.md**: System architecture specification document
### Technology Stack
- **TECHNOLOGY-STACK.md**: Comprehensive list of all technologies, frameworks, libraries, and tools used in Calypso
## Quick Reference
### Features Implemented
1. ✅ Storage Management (ZFS pools, datasets, disks)
2. ✅ File Sharing (SMB/CIFS, NFS)
3. ✅ iSCSI Management (SCST integration)
4. ✅ Tape Library Management (Physical & VTL)
5. ✅ Backup Management (Bacula/Bareos integration)
6. ✅ Object Storage (S3-compatible)
7. ✅ Snapshot & Replication
8. ✅ System Management (Network, Services, NTP, SNMP, License)
9. ✅ Monitoring & Alerting
10. ✅ Identity & Access Management (IAM)
11. ✅ User Interface (React SPA)
### Technology Stack
- **Backend**: Go 1.21+, Gin, PostgreSQL
- **Frontend**: React 18, TypeScript, Vite, TailwindCSS
- **External**: ZFS, SCST, Bacula/Bareos, MHVTL
## Document Status
**Version**: 1.0.0-alpha
**Last Updated**: 2025-01-XX
**Status**: In Development
## Contributing
When updating documentation:
1. Update the relevant SRS or SDS document
2. Update the version and date in the document
3. Update this README if structure changes
4. Maintain consistency across documents
## Related Documentation
- Implementation guides: `../on-progress/`
- Technical specifications: `../../src/srs-technical-spec-documents/`

View File

@@ -1,152 +0,0 @@
# Technology Stack Summary
## AtlasOS - Calypso Backup Appliance
Quick reference for all technologies used in Calypso.
---
## 🖥️ Operating System
- **Ubuntu Server 24.04 LTS**
- **Linux Kernel 6.8+**
- **systemd** - Service management
---
## 🔧 Backend Stack
### Core
- **Go 1.24+** - Programming language
- **Gin** - Web framework
- **PostgreSQL 14+** - Database
### Libraries
- **JWT (golang-jwt/jwt/v5)** - Authentication
- **UUID (google/uuid)** - UUID generation
- **WebSocket (gorilla/websocket)** - Real-time communication
- **Zap (uber.org/zap)** - Structured logging
- **YAML (gopkg.in/yaml.v3)** - Configuration parsing
- **lib/pq** - PostgreSQL driver (for arrays)
---
## 🎨 Frontend Stack
### Core
- **React 19** - UI framework
- **TypeScript** - Type-safe JavaScript
- **Vite** - Build tool
### Libraries
- **React Router DOM** - Routing
- **TanStack Query** - Data fetching & caching
- **Zustand** - State management
- **Axios** - HTTP client
- **TailwindCSS** - Styling
- **Lucide React** - Icons
- **Recharts** - Charts
- **xterm.js** - Terminal emulator
---
## 💾 Storage Technologies
### File Systems
- **ZFS** - Primary storage filesystem
- **LVM2** - Logical volume management
- **XFS** - High-performance filesystem
- **ext4** - Alternative filesystem
### Tools
- **parted, gdisk** - Partition management
- **smartmontools** - Disk monitoring
- **nvme-cli** - NVMe management
---
## 🌐 Network & File Sharing
### Protocols
- **NFS** - Network File System (nfs-kernel-server)
- **Samba** - SMB/CIFS file sharing
- **iSCSI** - Block storage (SCST)
### Tools
- **SCST** - iSCSI target subsystem
- **open-iscsi** - iSCSI initiator
---
## 💿 Backup & Tape
### Software
- **Bacula/Bareos** - Backup software
- **MHVTL** - Virtual Tape Library
### Tools
- **lsscsi** - SCSI device listing
- **sg3-utils** - SCSI generic utilities
- **mt-st** - Tape utilities
- **mtx** - Media changer control
---
## 🛡️ Security
### Antivirus
- **ClamAV** - Antivirus engine
- clamav-daemon
- clamav-freshclam
### Authentication
- **JWT** - Token-based auth
- **bcrypt/Argon2** - Password hashing
- **RBAC** - Role-based access control
---
## 📊 Monitoring
### Built-in
- **Custom Metrics Service** - System metrics
- **Health Checks** - Service health
- **Audit Logging** - Database-backed audit
---
## 🔄 Reverse Proxy (Optional)
- **Nginx** - Web server
- **Caddy** - Web server with auto-HTTPS
---
## 📦 Package Count
- **Backend Go Dependencies:** ~50 packages
- **Frontend npm Dependencies:** ~300+ packages
- **System Packages:** ~50+ packages
---
## 🏗️ Architecture
```
┌─────────────────────────────────┐
│ React 19 + TypeScript + Vite    │  Frontend
└──────────────┬──────────────────┘
               │ HTTP/REST
┌──────────────▼──────────────────┐
│ Go 1.24 + Gin + PostgreSQL      │  Backend
└──────────────┬──────────────────┘
               │
┌──────────────▼──────────────────┐
│ ZFS + SCST + Bacula + ClamAV    │  System Services
└─────────────────────────────────┘
```
---
## 📚 Full Documentation
See `TECHNOLOGY-STACK.md` for complete details.

View File

@@ -1,501 +0,0 @@
# Technology Stack
## AtlasOS - Calypso Backup Appliance
**Version:** 1.0.0-alpha
**Last Updated:** 2025-01-XX
---
## Overview
This document provides a comprehensive list of all technologies, frameworks, libraries, and tools used in the Calypso backup appliance.
---
## 1. Operating System & Base Platform
### 1.1 Operating System
- **Ubuntu Server 24.04 LTS** (Primary target)
- **Linux Kernel** 6.8+ (with ZFS, SCST support)
- **systemd** - Service management
- **journald** - System logging
### 1.2 Base System Tools
- **chrony** - NTP time synchronization
- **ufw** - Firewall management
- **nftables** - Network filtering (alternative)
- **udev** - Device management
- **lsb-release** - Linux Standard Base
---
## 2. Backend Technology Stack
### 2.1 Programming Language
- **Go 1.22+** (golang)
- Version: 1.22.0 or later
- Architecture: linux-amd64
### 2.2 Web Framework
- **Gin** - HTTP web framework
- `github.com/gin-gonic/gin`
- RESTful API implementation
- Middleware support
### 2.3 Database
- **PostgreSQL 14+**
- Primary database for metadata
- Connection pooling (pgxpool)
- Migration system
### 2.4 Database Drivers
- **pgx/v5** - PostgreSQL driver
- `github.com/jackc/pgx/v5`
- `github.com/jackc/pgx/v5/pgxpool`
- **lib/pq** - PostgreSQL driver (for array types)
- `github.com/lib/pq`
### 2.5 Authentication & Security
- **JWT** - JSON Web Tokens
- `github.com/golang-jwt/jwt/v5`
- **bcrypt** - Password hashing
- `golang.org/x/crypto/bcrypt`
- **Argon2** - Password hashing (alternative)
### 2.6 Configuration Management
- **Viper** - Configuration management
- `github.com/spf13/viper`
- **YAML** - Configuration format
### 2.7 Logging
- **Zerolog** - Structured logging
- `github.com/rs/zerolog`
- **JSON** - Log format
### 2.8 HTTP Client & Utilities
- **HTTP Client** - Standard library
- **Context** - Request context management
- **Time** - Time handling
### 2.9 Additional Go Libraries
- **UUID** - UUID generation
- `github.com/google/uuid`
- **Errors** - Error handling
- `github.com/pkg/errors`
- **Sync** - Concurrency primitives
- `golang.org/x/sync/errgroup`
---
## 3. Frontend Technology Stack
### 3.1 Core Framework
- **React 18** - UI library
- Version: 18.x
- TypeScript support
### 3.2 Build Tool
- **Vite** - Build tool and dev server
- Fast HMR (Hot Module Replacement)
- Optimized production builds
### 3.3 Programming Language
- **TypeScript** - Type-safe JavaScript
- Type checking
- Modern ES6+ features
### 3.4 Routing
- **React Router DOM** - Client-side routing
- `react-router-dom`
- Version: 6.x
### 3.5 State Management
- **Zustand** - Lightweight state management
- `zustand`
- Global state (auth, UI state)
### 3.6 Data Fetching
- **TanStack Query (React Query)** - Server state management
- `@tanstack/react-query`
- Caching, refetching, mutations
### 3.7 HTTP Client
- **Axios** - HTTP client
- `axios`
- Request/response interceptors
### 3.8 Styling
- **TailwindCSS** - Utility-first CSS framework
- `tailwindcss`
- PostCSS integration
- Dark theme support
### 3.9 Icons
- **Lucide React** - Icon library
- `lucide-react`
- Modern icon set
### 3.10 UI Components
- **shadcn/ui** - UI component library (planned)
- **Custom Components** - Built with TailwindCSS
### 3.11 Charts & Visualization
- **Recharts** - Chart library
- `recharts`
- Line, bar, pie charts
### 3.12 Notifications
- **Sonner** - Toast notifications
- `sonner`
- Success, error, warning toasts
---
## 4. Storage Technologies
### 4.1 File System
- **ZFS** - Zettabyte File System
- `zfsutils-linux`
- `zfs-dkms`
- Pool and dataset management
- Snapshots and replication
### 4.2 Block Storage
- **LVM2** - Logical Volume Manager
- Volume group management
- Thin provisioning
### 4.3 File Systems
- **XFS** - High-performance filesystem (primary)
- **ext4** - Alternative filesystem
### 4.4 Disk Management
- **parted** - Partition management
- **gdisk** - GPT partition editor
- **smartmontools** - SMART disk monitoring
- **nvme-cli** - NVMe device management
---
## 5. Network & File Sharing
### 5.1 File Sharing Protocols
- **NFS** - Network File System
- `nfs-kernel-server`
- `nfs-common`
- NFSv4 support
- **Samba** - SMB/CIFS protocol
- `samba`
- `samba-common-bin`
- Windows file sharing compatibility
### 5.2 iSCSI
- **SCST** - SCSI Target Subsystem
- Kernel module
- `iscsi-scst`
- `scstadmin` - Management tool
### 5.3 Network Tools
- **open-iscsi** - iSCSI initiator
- **iscsiadm** - iSCSI administration
---
## 6. Backup & Tape Technologies
### 6.1 Backup Software
- **Bacula** - Backup software
- `bacula-common`
- `bacula-sd` - Storage daemon
- `bacula-client`
- `bacula-console` - Management console
- **Bareos** - Bacula fork (alternative)
### 6.2 Virtual Tape Library
- **MHVTL** - Virtual Tape Library
- `mhvtl`
- `mhvtl-utils`
- `vtlcmd` - Management tool
### 6.3 Physical Tape
- **lsscsi** - List SCSI devices
- **sg3-utils** - SCSI generic utilities
- **mt-st** - Magnetic tape utilities
- **mtx** - Media changer control
---
## 7. Security & Antivirus
### 7.1 Antivirus
- **ClamAV** - Antivirus engine
- `clamav`
- `clamav-daemon`
- `clamav-freshclam` - Virus definition updates
- `clamav-unofficial-sigs` - Unofficial signatures
---
## 8. Object Storage
### 8.1 S3-Compatible Storage
- **MinIO** - Object storage (planned/integration)
- S3-compatible API
- Bucket management
---
## 9. Development Tools
### 9.1 Build Tools
- **Make** - Build automation
- **Go Build** - Go compiler
- **npm/pnpm** - Node.js package manager
### 9.2 Version Control
- **Git** - Version control system
### 9.3 Code Quality
- **gofmt** - Go code formatter
- **goimports** - Go import organizer
- **golint** - Go linter (optional)
- **go vet** - Go static analysis
### 9.4 Frontend Tools
- **Prettier** - Code formatter
- **ESLint** - JavaScript/TypeScript linter
- **TypeScript Compiler** - Type checking
---
## 10. System Services
### 10.1 Service Management
- **systemd** - Service manager
- **journalctl** - Log viewing
### 10.2 Reverse Proxy (Optional)
- **Nginx** - Web server and reverse proxy
- **Caddy** - Web server with automatic HTTPS
---
## 11. Monitoring & Observability
### 11.1 Metrics Collection
- **Custom Metrics Service** - Built-in metrics
- **System Metrics** - CPU, memory, disk, network
### 11.2 Logging
- **Structured Logging** - JSON format
- **Audit Logging** - Database-backed audit trail
### 11.3 Health Checks
- **Health Endpoint** - `/api/v1/health`
- **Service Status** - Component health monitoring
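For example, the health endpoint can be probed with curl (the host and port are assumptions; adjust to your config):
```bash
# Assumes the API listens on localhost:8080
curl -s http://localhost:8080/api/v1/health
```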
---
## 12. Database Technologies
### 12.1 Primary Database
- **PostgreSQL 14+**
- Metadata storage
- User management
- Audit logs
- Task tracking
- Alert management
### 12.2 Database Tools
- **psql** - PostgreSQL client
- **pg_dump** - Database backup
- **pg_restore** - Database restore
---
## 13. Web Technologies
### 13.1 Protocols
- **HTTP/1.1** - Web protocol
- **HTTPS** - Secure HTTP (with TLS)
- **WebSocket** - Real-time communication (planned)
### 13.2 API
- **RESTful API** - Resource-based API
- **JSON** - Data interchange format
---
## 14. Container & Virtualization (Future)
### 14.1 Container Technologies (Not in V1)
- **Docker** - Containerization (future)
- **Kubernetes** - Orchestration (future)
---
## 15. Package Management
### 15.1 Backend
- **Go Modules** - Dependency management
- `go.mod`
- `go.sum`
### 15.2 Frontend
- **npm** - Node.js package manager
- **pnpm** - Fast, disk space efficient package manager
---
## 16. Testing Tools (Development)
### 16.1 Backend Testing
- **Go Testing** - Built-in testing framework
- **Testify** - Testing toolkit (if used)
### 16.2 Frontend Testing
- **Vitest** - Unit testing (with Vite)
- **React Testing Library** - Component testing
---
## 17. Documentation Tools
### 17.1 Documentation
- **Markdown** - Documentation format
- **Mermaid** - Diagram generation (in docs)
---
## 18. Security Tools
### 18.1 Encryption
- **TLS/SSL** - Transport layer security
- **bcrypt** - Password hashing
- **Argon2** - Password hashing (alternative)
### 18.2 Access Control
- **JWT** - Token-based authentication
- **RBAC** - Role-Based Access Control
- **Permission System** - Resource-based permissions
---
## 19. Version Information
### 19.1 Backend Dependencies
See `backend/go.mod` for complete list of Go dependencies.
### 19.2 Frontend Dependencies
See `frontend/package.json` for complete list of npm dependencies.
---
## 20. External Integrations
### 20.1 System Integrations
- **ZFS Commands** - `zpool`, `zfs`
- **SCST Commands** - `scstadmin`
- **Bacula Commands** - `bconsole`
- **MHVTL Commands** - `vtlcmd`
- **Systemd Commands** - `systemctl`
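Representative read-only invocations of these CLIs (the flags shown are common defaults, not Calypso-specific wrappers):
```bash
zpool list -H -o name,size,health   # pool inventory
zfs list -t filesystem              # dataset inventory
sudo scstadmin -list_target         # configured iSCSI targets
echo "status dir" | sudo bconsole   # Bacula Director status
systemctl status calypso-api        # service state via systemd
```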
### 20.2 File System Integrations
- **NFS Exports** - `/etc/exports`
- **Samba Config** - `/etc/samba/smb.conf`
- **ClamAV Config** - `/etc/clamav/`
---
## 21. Build & Deployment
### 21.1 Build Process
- **Go Build** - Compile Go binary
- **Vite Build** - Build frontend assets
- **Makefile** - Build automation
### 21.2 Deployment
- **Systemd Services** - Service deployment
- **Installer Scripts** - Automated installation
- **Configuration Management** - YAML-based config
---
## Summary
### Core Stack
- **Backend:** Go 1.22+ + Gin + PostgreSQL
- **Frontend:** React 18 + TypeScript + Vite + TailwindCSS
- **Storage:** ZFS + LVM2 + XFS
- **File Sharing:** NFS + Samba
- **iSCSI:** SCST
- **Backup:** Bacula/Bareos
- **Tape:** MHVTL + Physical tape tools
- **Antivirus:** ClamAV
- **Security:** JWT + bcrypt + RBAC
### Technology Categories
- **Languages:** Go, TypeScript, JavaScript, SQL, Bash
- **Frameworks:** Gin, React, TailwindCSS
- **Databases:** PostgreSQL
- **Storage:** ZFS, LVM2, XFS
- **Networking:** NFS, SMB/CIFS, iSCSI
- **Security:** JWT, bcrypt, ClamAV
- **Tools:** Git, Make, systemd, journald
---
## Version Matrix
| Component | Version | Purpose |
|-----------|---------|---------|
| Go | 1.22+ | Backend language |
| Node.js | 20.x LTS | Frontend runtime |
| React | 18.x | Frontend framework |
| PostgreSQL | 14+ | Database |
| ZFS | Latest | Storage filesystem |
| SCST | Latest | iSCSI target |
| Bacula | Latest | Backup software |
| ClamAV | Latest | Antivirus |
| NFS | Latest | Network file sharing |
| Samba | Latest | SMB/CIFS file sharing |
---
## Dependencies Count
- **Backend Go Dependencies:** ~30+ packages
- **Frontend npm Dependencies:** ~300+ packages
- **System Packages:** ~50+ packages
---
## License Information
Most components use open-source licenses:
- **Go:** BSD-style license
- **React:** MIT License
- **PostgreSQL:** PostgreSQL License
- **ZFS:** CDDL License
- **ClamAV:** GPL License
- **Samba:** GPL License
---
## References
- Backend dependencies: `backend/go.mod`
- Frontend dependencies: `frontend/package.json`
- System requirements: `scripts/install-requirements.sh`
- Architecture: `docs/alpha/Calypso_System_Architecture.md`
---
**Document History**
| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | 2025-01-XX | Development Team | Initial technology stack documentation |

View File

@@ -1,153 +0,0 @@
# Bacula Installation and Configuration Guide for Ubuntu 24.04
## 1. Introduction
This guide provides step-by-step instructions for installing and configuring Bacula on Ubuntu 24.04. The configuration files will be moved to a custom directory: `/opt/calypso/conf/bacula`.
## 2. Installation
First, update the package lists and install the Bacula components and a PostgreSQL database backend.
```bash
sudo apt-get update
sudo apt-get install -y bacula-director bacula-sd bacula-fd postgresql
```
During the installation, you may be prompted to configure a mail server. You can choose "No configuration" for now.
### 2.1. Install Bacula Console
Install the Bacula console, which provides the `bconsole` command-line utility for interacting with the Bacula Director.
```bash
sudo apt-get install -y bacula-console
```
## 3. Database Configuration
Create the Bacula database and user.
```bash
sudo -u postgres createuser -P bacula
sudo -u postgres createdb -O bacula bacula
```
When prompted, enter a password for the `bacula` user. You will need this password later.
Now, grant privileges to the `bacula` user on the `bacula` database.
```bash
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE bacula TO bacula;"
```
Bacula provides scripts to create the necessary tables in the database.
```bash
sudo -u postgres psql bacula < /usr/share/bacula-director/make_postgresql_tables.sql
```
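To confirm the schema was created, list the tables as a quick sanity check:
```bash
sudo -u postgres psql bacula -c '\dt'
```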
## 4. Configuration File Migration
Create the new configuration directory and copy the default configuration files.
```bash
sudo mkdir -p /opt/calypso/conf/bacula
sudo cp /etc/bacula/* /opt/calypso/conf/bacula/
sudo chown -R bacula:bacula /opt/calypso/conf/bacula
```
## 5. Systemd Service Configuration
Create override files for the `bacula-director` and `bacula-sd` services to point to the new configuration file locations.
### 5.1. Bacula Director
```bash
sudo mkdir -p /etc/systemd/system/bacula-director.service.d
sudo bash -c 'cat > /etc/systemd/system/bacula-director.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/sbin/bacula-dir -f -c /opt/calypso/conf/bacula/bacula-dir.conf
EOF'
```
### 5.2. Bacula Storage Daemon
```bash
sudo mkdir -p /etc/systemd/system/bacula-sd.service.d
sudo bash -c 'cat > /etc/systemd/system/bacula-sd.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/sbin/bacula-sd -f -c /opt/calypso/conf/bacula/bacula-sd.conf
EOF'
```
### 5.3. Bacula File Daemon
```bash
sudo mkdir -p /etc/systemd/system/bacula-fd.service.d
sudo bash -c 'cat > /etc/systemd/system/bacula-fd.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/sbin/bacula-fd -f -c /opt/calypso/conf/bacula/bacula-fd.conf
EOF'
```
Reload the systemd daemon to apply the changes.
```bash
sudo systemctl daemon-reload
```
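You can confirm that each override took effect with `systemctl cat`, which prints the merged unit definitions:
```bash
systemctl cat bacula-director bacula-sd bacula-fd | grep ExecStart
```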
## 6. Bacula Configuration
Update the `bacula-dir.conf` and `bacula-sd.conf` files to use the new paths and settings.
### 6.1. Bacula Director Configuration
Edit `/opt/calypso/conf/bacula/bacula-dir.conf` and make the following changes:
* In the `Storage` resource, update the `address` to point to the correct IP address or hostname.
* In the `Catalog` resource, update the `dbuser` and `dbpassword` with the values you set in step 3.
* Update any other paths as necessary.
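For example, the `Catalog` resource typically looks like the following (the password is a placeholder for the value chosen in step 3):
```
Catalog {
  Name = MyCatalog
  dbname = "bacula"; dbuser = "bacula"; dbpassword = "your-password"
}
```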
### 6.2. Bacula Storage Daemon Configuration
Edit `/opt/calypso/conf/bacula/bacula-sd.conf` and make the following changes:
* In the `Storage` resource, update the `SDAddress` to point to the correct IP address or hostname.
* Create a directory for the storage device and set the correct permissions.
```bash
sudo mkdir -p /var/lib/bacula/storage
sudo chown -R bacula:tape /var/lib/bacula/storage
```
* In the `Device` resource, update the `Archive Device` to point to the storage directory you just created. For example:
```
Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /var/lib/bacula/storage
  LabelMedia = yes;
  Random Access = Yes;
  AutomaticMount = yes;
  RemovableMedia = no;
  AlwaysOpen = no;
}
```
## 7. Starting and Verifying Services
Start the Bacula services and check their status.
```bash
sudo systemctl start bacula-director bacula-sd bacula-fd
sudo systemctl status bacula-director bacula-sd bacula-fd
```
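Once the daemons are running, a quick end-to-end check is to query the Director through bconsole (step 4 copied `bconsole.conf` into the new directory along with the other files):
```bash
echo "status dir" | sudo bconsole -c /opt/calypso/conf/bacula/bconsole.conf
```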
## 8. SELinux/AppArmor
If you are using SELinux or AppArmor, you may need to adjust the security policies to allow Bacula to access the new configuration directory and storage directory. The specific steps will depend on your security policy.

View File

@@ -1,86 +0,0 @@
# Bacula Integration Documentation
## For Calypso Backup Appliance
This directory contains documentation for installing, configuring, and integrating Bacula backup software with the Calypso appliance.
---
## Documents
### Installation
- **BACULA-INSTALLATION.md** - Complete installation guide for Bacula Community edition
- Manual installation steps
- Repository configuration
- Package installation
- Post-installation setup
- Integration with Calypso
### Configuration
- **BACULA-CONFIGURATION.md** - Advanced configuration guide
- Director configuration
- Storage Daemon configuration
- File Daemon configuration
- Job scheduling
- Integration with Calypso storage
---
## Quick Start
### Installation via Calypso Installer
```bash
# Bacula is included in Calypso installer
sudo ./installer/alpha/install.sh
```
### Manual Installation
See `BACULA-INSTALLATION.md` for detailed steps.
### Basic Configuration
1. Edit `/opt/bacula/etc/bacula-dir.conf`
2. Configure Director, Catalog, Storage, Pool resources
3. Test configuration: `sudo /opt/bacula/bin/bacula-dir -t`
4. Reload: `sudo systemctl restart bacula-dir`
---
## Integration Points
### Database
- Bacula uses PostgreSQL database (can share with Calypso or separate)
- Calypso can query Bacula database directly
- Database name: `bacula` (default)
### Storage
- Bacula can use Calypso-managed ZFS datasets
- Storage location: `/srv/calypso/backups/`
- Integration via Calypso Storage API
### Management
- Calypso API executes bconsole commands
- Job monitoring via Calypso dashboard
- Configuration management via Calypso UI
---
## References
- **Official Bacula Documentation:** https://www.bacula.org/documentation/
- **Bacula Community Installation Guide:** https://www.bacula.org/whitepapers/CommunityInstallationGuide.pdf
- **Bacula Concept Guide:** https://www.bacula.org/whitepapers/ConceptGuide.pdf
---
## Support
For Bacula-specific issues:
- Bacula Community Support: https://www.bacula.org/support
- Bacula Mailing Lists: https://www.bacula.org/community/mailing-lists/
For Calypso integration issues:
- See main Calypso documentation: `docs/alpha/`
- Check Calypso logs: `sudo journalctl -u calypso-api`

View File

@@ -1,102 +0,0 @@
# ClamAV Installation and Configuration Guide for Ubuntu 24.04
## 1. Introduction
This guide provides step-by-step instructions for installing and configuring ClamAV on Ubuntu 24.04. The configuration files will be moved to a custom directory: `/opt/calypso/conf/clamav`.
## 2. Installation
First, update the package lists and install the `clamav` and `clamav-daemon` packages.
```bash
sudo apt-get update
sudo apt-get install -y clamav clamav-daemon
```
## 3. Configuration File Migration
Create the new configuration directory and copy the default configuration files.
```bash
sudo mkdir -p /opt/calypso/conf/clamav
sudo cp /etc/clamav/clamd.conf /opt/calypso/conf/clamav/clamd.conf
sudo cp /etc/clamav/freshclam.conf /opt/calypso/conf/clamav/freshclam.conf
```
Change the ownership of the new directory to the `clamav` user and group.
```bash
sudo chown -R clamav:clamav /opt/calypso/conf/clamav
```
## 4. Systemd Service Configuration
Create override files for the `clamav-daemon` and `clamav-freshclam` services to point to the new configuration file locations.
### 4.1. clamav-daemon Service
```bash
sudo mkdir -p /etc/systemd/system/clamav-daemon.service.d
sudo bash -c 'cat > /etc/systemd/system/clamav-daemon.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/sbin/clamd --foreground=true --config-file=/opt/calypso/conf/clamav/clamd.conf
EOF'
```
### 4.2. clamav-freshclam Service
```bash
sudo mkdir -p /etc/systemd/system/clamav-freshclam.service.d
sudo bash -c 'cat > /etc/systemd/system/clamav-freshclam.service.d/override.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/freshclam -d --foreground=true --config-file=/opt/calypso/conf/clamav/freshclam.conf
EOF'
```
Reload the systemd daemon to apply the changes.
```bash
sudo systemctl daemon-reload
```
## 5. AppArmor Configuration
By default, AppArmor restricts ClamAV from accessing files outside of its default directories. You need to create local AppArmor override files to allow access to the new configuration directory.
### 5.1. freshclam AppArmor Profile
```bash
sudo echo "/opt/calypso/conf/clamav/freshclam.conf r," > /etc/apparmor.d/local/usr.bin.freshclam
```
### 5.2. clamd AppArmor Profile
```bash
sudo echo "/opt/calypso/conf/clamav/clamd.conf r," > /etc/apparmor.d/local/usr.sbin.clamd
```
You also need to grant execute permission on the parent directory so the `clamav` user can traverse it.
```bash
sudo chmod o+x /opt/calypso/conf
```
Reload the AppArmor profiles to apply the changes.
```bash
sudo systemctl reload apparmor
```
## 6. Starting and Verifying Services
Restart the ClamAV services and check their status to ensure they are using the new configuration file.
```bash
sudo systemctl restart clamav-daemon clamav-freshclam
sudo systemctl status clamav-daemon clamav-freshclam
```
You should see that both services are `active (running)`.
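As an optional smoke test, scan the harmless EICAR test string through the daemon; it should be reported as FOUND (note that clamd may take a minute after a restart to finish loading signatures):
```bash
cat > /tmp/eicar.txt <<'EOF'
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
EOF
clamdscan --config-file=/opt/calypso/conf/clamav/clamd.conf /tmp/eicar.txt
rm /tmp/eicar.txt
```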

View File

@@ -1,90 +0,0 @@
# mhvtl Installation and Configuration Guide
This guide details the steps to install and configure the `mhvtl` (Virtual Tape Library) on this system, including compiling from source and setting up custom paths.
## 1. Prerequisites
Ensure the necessary build tools are installed on the system.
```bash
sudo apt-get update
sudo apt-get install -y git make gcc
```
## 2. Download and Compile Source Code
First, clone the `mhvtl` source code from the official repository and then compile and install both the kernel module and the user-space utilities.
```bash
# Create a directory for the build process
mkdir -p /src/calypso/mhvtl_build
# Clone the source code
git clone https://github.com/markh794/mhvtl.git /src/calypso/mhvtl_build
# Compile and install the kernel module
cd /src/calypso/mhvtl_build/kernel
make
sudo make install
# Compile and install user-space daemons and utilities
cd /src/calypso/mhvtl_build
make
sudo make install
```
## 3. Configure Custom Paths
By default, `mhvtl` uses `/etc/mhvtl` for configuration and `/opt/mhvtl` for media. The following steps reconfigure the installation to use custom paths located in `/opt/calypso/`.
### a. Create Custom Directories
Create the directories for the custom configuration and media paths.
```bash
sudo mkdir -p /opt/calypso/conf/vtl/ /opt/calypso/data/vtl/media/
```
### b. Relocate Configuration Files
Copy the default configuration files generated during installation to the new location. Then, update the `device.conf` file to point to the new media directory. Finally, replace the original configuration directory with a symbolic link.
```bash
# Copy default config files to the new directory
sudo cp -a /etc/mhvtl/* /opt/calypso/conf/vtl/
# Update the Home directory path in the new device.conf
sudo sed -i 's|Home directory: /opt/mhvtl|Home directory: /opt/calypso/data/vtl/media|g' /opt/calypso/conf/vtl/device.conf
# Replace the original config directory with a symlink
sudo rm -rf /etc/mhvtl
sudo ln -s /opt/calypso/conf/vtl /etc/mhvtl
```
### c. Relocate Media Data
Move the default media files to the new location and replace the original data directory with a symbolic link.
```bash
# Move the media contents to the new directory
sudo mv /opt/mhvtl/* /opt/calypso/data/vtl/media/
# Replace the original media directory with a symlink
sudo rmdir /opt/mhvtl
sudo ln -s /opt/calypso/data/vtl/media /opt/mhvtl
```
## 4. Start and Verify Services
With the installation and configuration complete, start the `mhvtl` services and verify that they are running correctly.
```bash
# Load the kernel module (this service should now work)
sudo systemctl start mhvtl-load-modules.service
# Start the main mhvtl target, which starts all related daemons
sudo systemctl start mhvtl.target
# Verify the status of the main services
systemctl status mhvtl.target vtllibrary@10.service vtltape@11.service
```
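If everything started correctly, the virtual library and drives appear as ordinary SCSI devices (the `/dev/sg*` node varies per system):
```bash
# The changer is reported as type "mediumx", the drives as "tape".
lsscsi -g | grep -Ei 'mediumx|tape'
# Query the changer through its sg node (replace /dev/sg5 with yours).
sudo mtx -f /dev/sg5 status
```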

Some files were not shown because too many files have changed in this diff.