This commit is contained in:
2025-12-23 07:50:08 +00:00
parent 4c3ea0059d
commit 7826c6ed24
12 changed files with 2008 additions and 98 deletions

View File

@@ -1,19 +1,20 @@
 SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
-PlutoOS Storage Controller Operating System (v1)
+AtlasOS Storage Controller Operating System (v1)
 ==================================================
 1. INTRODUCTION
 --------------------------------------------------
 1.1 Purpose
-This document defines the functional and non-functional requirements for PlutoOS v1,
+This document defines the functional and non-functional requirements for AtlasOS v1,
 a storage controller operating system built on Linux with ZFS as the core storage engine.
 It serves as the authoritative reference for development scope, validation, and acceptance.
 1.2 Scope
-PlutoOS v1 provides:
+AtlasOS v1 provides:
 - ZFS pool, dataset, and ZVOL management
 - Storage services: SMB, NFS, iSCSI (ZVOL-backed)
+- Virtual Tape Library (VTL) with mhvtl for tape emulation
 - Automated snapshot management
 - Role-Based Access Control (RBAC) and audit logging
 - Web-based GUI and local TUI
@@ -36,7 +37,7 @@ Desired State : Configuration stored in DB and applied atomically to system
 2. SYSTEM OVERVIEW
 --------------------------------------------------
-PlutoOS consists of:
+AtlasOS consists of:
 - Base OS      : Minimal Linux (Ubuntu/Debian)
 - Data Plane   : ZFS and storage services
 - Control Plane: Go backend with HTMX-based UI
@@ -93,6 +94,18 @@ Viewer : Read-only access
 - System SHALL configure initiator ACLs
 - System SHALL expose connection instructions
+4.6.1 Virtual Tape Library (VTL)
+- System SHALL manage mhvtl service (start, stop, restart)
+- System SHALL create and manage virtual tape libraries (media changers)
+- System SHALL create and manage virtual tape drives (LTO-5 through LTO-8)
+- System SHALL create and manage virtual tape cartridges
+- System SHALL support tape operations (load, eject, read, write)
+- System SHALL manage library_contents files for tape inventory
+- System SHALL validate drive ID conflicts to prevent device path collisions
+- System SHALL automatically restart mhvtl service after configuration changes
+- System SHALL support multiple vendors (IBM, HP, Quantum, Tandberg, Overland)
+- System SHALL enforce RBAC for VTL operations (Administrator and Operator only)
 4.7 Job Management
 - System SHALL execute long-running operations as jobs
 - System SHALL track job status and progress
@@ -161,7 +174,7 @@ Viewer : Read-only access
 7. ACCEPTANCE CRITERIA (v1)
 --------------------------------------------------
-PlutoOS v1 is accepted when:
+AtlasOS v1 is accepted when:
 - ZFS pool, dataset, share, and LUN lifecycle works end-to-end
 - Snapshot policies are active and observable
 - RBAC and audit logging are enforced
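
Requirement 4.6.1 above has the system manage mhvtl `library_contents` files for tape inventory. For orientation, a minimal sketch of what such a file (e.g. `/etc/mhvtl/library_contents.10`) typically looks like — the slot barcodes and counts here are illustrative, so verify against the file shipped by your mhvtl version:

```
VERSION: 2
Drive 1:
Picker 1:
MAP 1:
Slot 1: A00001L8
Slot 2: A00002L8
Slot 3:
```

Each populated `Slot` line carries a cartridge barcode (the trailing `L8` suffix conventionally marking LTO-8 media); empty slots are left bare.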

View File

@@ -5,6 +5,7 @@ AtlasOS is an appliance-style storage controller built by Adastra
 **v1 Focus**
 - ZFS storage engine
 - SMB / NFS / iSCSI (ZVOL)
+- Virtual Tape Library (VTL) with mhvtl
 - Auto snapshots (sanoid)
 - RBAC + audit
 - TUI (Bubble Tea) + Web GUI (HTMX)
@@ -30,3 +31,50 @@ sudo ./installer/install.sh --offline-bundle /path/to/atlas-bundle
 ```
 See `installer/README.md` and `docs/INSTALLATION.md` for detailed instructions.
+## Features
+### Storage Management
+- **ZFS**: Pool, dataset, and ZVOL management with health monitoring
+- **SMB/CIFS**: Windows file sharing with permission management
+- **NFS**: Network file sharing with client access control
+- **iSCSI**: Block storage with target and LUN management
+### Virtual Tape Library (VTL)
+- **Media Changers**: Create and manage virtual tape libraries
+- **Tape Drives**: Configure virtual drives (LTO-5 through LTO-8)
+- **Tape Cartridges**: Create and manage virtual tapes
+- **Tape Operations**: Load, eject, and manage tape media
+- **Multi-Vendor Support**: IBM, HP, Quantum, Tandberg, Overland
+- **Automatic Service Management**: Auto-restart mhvtl after configuration changes
+### Security & Access Control
+- **RBAC**: Role-based access control (Administrator, Operator, Viewer)
+- **Audit Logging**: Immutable audit trail for all operations
+- **Authentication**: JWT-based authentication
+### Monitoring
+- **Prometheus Metrics**: System and storage metrics
+- **Health Monitoring**: Pool health and capacity tracking
+- **Job Management**: Track long-running operations
+## Installation Directory
+Atlas is installed to `/opt/atlas` by default. The installer script will:
+1. Install all required dependencies (ZFS, SMB, NFS, iSCSI, mhvtl)
+2. Build Atlas binaries
+3. Set up systemd services
+4. Configure directories and permissions
+## Pushing Changes to Repository
+Use the provided script to commit and push changes:
+```bash
+./scripts/push-to-repo.sh "Your commit message"
+```
+Or skip version update:
+```bash
+./scripts/push-to-repo.sh "Your commit message" --skip-version
+```

View File

@@ -440,6 +440,23 @@ install_dependencies() {
     }
     fi
+    # Install mhvtl (Virtual Tape Library) for VTL functionality
+    echo " Installing mhvtl (Virtual Tape Library)..."
+    DEBIAN_FRONTEND=noninteractive apt-get install -y -qq \
+        mhvtl \
+        mhvtl-utils \
+        mtx \
+        sg3-utils || {
+        echo -e "${YELLOW}Warning: mhvtl installation failed, VTL features may not be available${NC}"
+        echo " You may need to install mhvtl manually or from source"
+    }
+    # Create mhvtl directories if they don't exist
+    mkdir -p /etc/mhvtl
+    mkdir -p /opt/mhvtl
+    chown root:root /etc/mhvtl
+    chown root:root /opt/mhvtl
     # Install databases (SQLite for compatibility, PostgreSQL as default)
     echo " Installing database packages..."
     DEBIAN_FRONTEND=noninteractive apt-get install -y -qq \

View File

@@ -1,11 +1,14 @@
 package httpapp
 import (
+	"fmt"
 	"net/http"
 	"net/url"
+	"strconv"
 	"strings"
 	"gitea.avt.data-center.id/othman.suseno/atlas/internal/errors"
+	"gitea.avt.data-center.id/othman.suseno/atlas/internal/models"
 )
 // methodHandler routes requests based on HTTP method
@@ -85,8 +88,9 @@ func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
 	}
 	if strings.HasSuffix(r.URL.Path, "/scrub") {
+		storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 		if r.Method == http.MethodPost {
-			a.handleScrubPool(w, r)
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleScrubPool)).ServeHTTP(w, r)
 		} else if r.Method == http.MethodGet {
 			a.handleGetScrubStatus(w, r)
 		} else {
@@ -96,8 +100,9 @@ func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
 	}
 	if strings.HasSuffix(r.URL.Path, "/export") {
+		storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 		if r.Method == http.MethodPost {
-			a.handleExportPool(w, r)
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleExportPool)).ServeHTTP(w, r)
 		} else {
 			writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
 		}
@@ -106,50 +111,67 @@ func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
 	if strings.HasSuffix(r.URL.Path, "/spare") {
 		if r.Method == http.MethodPost {
-			a.handleAddSpareDisk(w, r)
+			storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleAddSpareDisk)).ServeHTTP(w, r)
 		} else {
 			writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
 		}
 		return
 	}
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetPool(w, r) },
 		nil,
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeletePool(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeletePool)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleDatasetOps routes dataset operations by method
 func (a *App) handleDatasetOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetDataset(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateDataset(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleUpdateDataset(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteDataset(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateDataset)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateDataset)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteDataset)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleZVOLOps routes ZVOL operations by method
 func (a *App) handleZVOLOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetZVOL(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateZVOL(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateZVOL)).ServeHTTP(w, r)
+		},
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteZVOL(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteZVOL)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleSnapshotOps routes snapshot operations by method
 func (a *App) handleSnapshotOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	// Check if it's a restore operation
 	if strings.HasSuffix(r.URL.Path, "/restore") {
 		if r.Method == http.MethodPost {
-			a.handleRestoreSnapshot(w, r)
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleRestoreSnapshot)).ServeHTTP(w, r)
 		} else {
 			writeError(w, errors.ErrBadRequest("method not allowed"))
 		}
@@ -158,42 +180,67 @@ func (a *App) handleSnapshotOps(w http.ResponseWriter, r *http.Request) {
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetSnapshot(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshot(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshot)).ServeHTTP(w, r)
+		},
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteSnapshot(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteSnapshot)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleSnapshotPolicyOps routes snapshot policy operations by method
 func (a *App) handleSnapshotPolicyOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetSnapshotPolicy(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshotPolicy(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleUpdateSnapshotPolicy(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteSnapshotPolicy(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshotPolicy)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateSnapshotPolicy)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteSnapshotPolicy)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleSMBShareOps routes SMB share operations by method
 func (a *App) handleSMBShareOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetSMBShare(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateSMBShare(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleUpdateSMBShare(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteSMBShare(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSMBShare)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateSMBShare)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteSMBShare)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleNFSExportOps routes NFS export operations by method
 func (a *App) handleNFSExportOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetNFSExport(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateNFSExport(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleUpdateNFSExport(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteNFSExport(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateNFSExport)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateNFSExport)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteNFSExport)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
@@ -206,6 +253,7 @@ func (a *App) handleBackupOps(w http.ResponseWriter, r *http.Request) {
 		return
 	}
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	switch r.Method {
 	case http.MethodGet:
 		// Check if it's a verify request
@@ -217,12 +265,12 @@ func (a *App) handleBackupOps(w http.ResponseWriter, r *http.Request) {
 	case http.MethodPost:
 		// Restore backup (POST /api/v1/backups/{id}/restore)
 		if strings.HasSuffix(r.URL.Path, "/restore") {
-			a.handleRestoreBackup(w, r)
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleRestoreBackup)).ServeHTTP(w, r)
 		} else {
 			writeError(w, errors.ErrBadRequest("invalid backup operation"))
 		}
 	case http.MethodDelete:
-		a.handleDeleteBackup(w, r)
+		a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteBackup)).ServeHTTP(w, r)
 	default:
 		writeError(w, errors.ErrBadRequest("method not allowed"))
 	}
@@ -244,9 +292,10 @@ func (a *App) handleISCSITargetOps(w http.ResponseWriter, r *http.Request) {
 		return
 	}
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	if strings.HasSuffix(r.URL.Path, "/luns") {
 		if r.Method == http.MethodPost {
-			a.handleAddLUN(w, r)
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleAddLUN)).ServeHTTP(w, r)
 			return
 		}
 		writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
@@ -255,7 +304,7 @@ func (a *App) handleISCSITargetOps(w http.ResponseWriter, r *http.Request) {
 	if strings.HasSuffix(r.URL.Path, "/luns/remove") {
 		if r.Method == http.MethodPost {
-			a.handleRemoveLUN(w, r)
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleRemoveLUN)).ServeHTTP(w, r)
 			return
 		}
 		writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
@@ -265,8 +314,12 @@ func (a *App) handleISCSITargetOps(w http.ResponseWriter, r *http.Request) {
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetISCSITarget(w, r) },
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleUpdateISCSITarget(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteISCSITarget(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateISCSITarget)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteISCSITarget)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
@@ -304,22 +357,68 @@ func (a *App) handleUserOps(w http.ResponseWriter, r *http.Request) {
 // handleVTLDriveOps routes VTL drive operations by method
 func (a *App) handleVTLDriveOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetVTLDrive(w, r) },
 		nil,
-		nil,
-		nil,
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateVTLDrive)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteVTLDrive)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }
 // handleVTLTapeOps routes VTL tape operations by method
 func (a *App) handleVTLTapeOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleGetVTLTape(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateVTLTape(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateVTLTape)).ServeHTTP(w, r)
+		},
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleDeleteVTLTape(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteVTLTape)).ServeHTTP(w, r)
+		},
+		nil,
+	)(w, r)
+}
+// handleMediaChangerOps routes media changer operations by method
+func (a *App) handleMediaChangerOps(w http.ResponseWriter, r *http.Request) {
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
+	methodHandler(
+		func(w http.ResponseWriter, r *http.Request) {
+			// Get single changer by ID
+			libraryIDStr := pathParam(r, "/api/v1/vtl/changers/")
+			libraryID, err := strconv.Atoi(libraryIDStr)
+			if err != nil || libraryID <= 0 {
+				writeError(w, errors.ErrValidation("invalid library_id"))
+				return
+			}
+			changers, err := a.vtlService.ListMediaChangers()
+			if err != nil {
+				writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list changers: %v", err)))
+				return
+			}
+			for _, changer := range changers {
+				if changer.LibraryID == libraryID {
+					writeJSON(w, http.StatusOK, changer)
+					return
+				}
+			}
+			writeError(w, errors.ErrNotFound(fmt.Sprintf("media changer %d not found", libraryID)))
+		},
+		nil,
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateMediaChanger)).ServeHTTP(w, r)
+		},
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteMediaChanger)).ServeHTTP(w, r)
+		},
 		nil,
 	)(w, r)
 }

View File

@@ -65,9 +65,14 @@ func (a *App) routes() {
 	a.mux.HandleFunc("/api/openapi.yaml", a.handleOpenAPISpec)
 	// Backup & Restore
+	// Define allowed roles for storage operations (Administrator and Operator, not Viewer)
+	storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
 	a.mux.HandleFunc("/api/v1/backups", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListBackups(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateBackup(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateBackup)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/backups/", a.handleBackupOps)
@@ -85,7 +90,9 @@ func (a *App) routes() {
 	))
 	a.mux.HandleFunc("/api/v1/pools", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListPools(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreatePool(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreatePool)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/pools/available", methodHandler(
@@ -94,21 +101,27 @@ func (a *App) routes() {
 	))
 	a.mux.HandleFunc("/api/v1/pools/import", methodHandler(
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleImportPool(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleImportPool)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/pools/", a.handlePoolOps)
 	a.mux.HandleFunc("/api/v1/datasets", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListDatasets(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateDataset(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateDataset)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/datasets/", a.handleDatasetOps)
 	a.mux.HandleFunc("/api/v1/zvols", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListZVOLs(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateZVOL(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateZVOL)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/zvols/", a.handleZVOLOps)
@@ -116,13 +129,17 @@ func (a *App) routes() {
 	// Snapshot Management
 	a.mux.HandleFunc("/api/v1/snapshots", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListSnapshots(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshot(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshot)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/snapshots/", a.handleSnapshotOps)
 	a.mux.HandleFunc("/api/v1/snapshot-policies", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListSnapshotPolicies(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshotPolicy(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshotPolicy)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/snapshot-policies/", a.handleSnapshotPolicyOps)
@@ -130,7 +147,9 @@ func (a *App) routes() {
 	// Storage Services - SMB
 	a.mux.HandleFunc("/api/v1/shares/smb", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListSMBShares(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateSMBShare(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSMBShare)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/shares/smb/", a.handleSMBShareOps)
@@ -138,7 +157,9 @@ func (a *App) routes() {
 	// Storage Services - NFS
 	a.mux.HandleFunc("/api/v1/exports/nfs", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListNFSExports(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateNFSExport(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateNFSExport)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/exports/nfs/", a.handleNFSExportOps)
@@ -146,7 +167,9 @@ func (a *App) routes() {
 	// Storage Services - iSCSI
 	a.mux.HandleFunc("/api/v1/iscsi/targets", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListISCSITargets(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateISCSITarget(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateISCSITarget)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/iscsi/targets/", a.handleISCSITargetOps)
@@ -158,24 +181,36 @@ func (a *App) routes() {
 	))
 	a.mux.HandleFunc("/api/v1/vtl/drives", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListVTLDrives(w, r) },
-		nil, nil, nil, nil,
+		func(w http.ResponseWriter, r *http.Request) {
+			storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateVTLDrive)).ServeHTTP(w, r)
+		},
+		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/vtl/drives/", a.handleVTLDriveOps)
 	a.mux.HandleFunc("/api/v1/vtl/tapes", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListVTLTapes(w, r) },
-		func(w http.ResponseWriter, r *http.Request) { a.handleCreateVTLTape(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateVTLTape)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/vtl/tapes/", a.handleVTLTapeOps)
 	a.mux.HandleFunc("/api/v1/vtl/service", methodHandler(
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleVTLServiceControl(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleVTLServiceControl)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/vtl/changers", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListVTLMediaChangers(w, r) },
-		nil, nil, nil, nil,
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateMediaChanger)).ServeHTTP(w, r)
+		},
+		nil, nil, nil,
 	))
+	a.mux.HandleFunc("/api/v1/vtl/changers/", a.handleMediaChangerOps)
 	a.mux.HandleFunc("/api/v1/vtl/devices/iscsi", methodHandler(
 		func(w http.ResponseWriter, r *http.Request) { a.handleListVTLDevicesForISCSI(w, r) },
 		nil, nil, nil, nil,
@@ -186,12 +221,16 @@ func (a *App) routes() {
 	))
 	a.mux.HandleFunc("/api/v1/vtl/tape/load", methodHandler(
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleLoadTape(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleLoadTape)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))
 	a.mux.HandleFunc("/api/v1/vtl/tape/eject", methodHandler(
 		nil,
-		func(w http.ResponseWriter, r *http.Request) { a.handleEjectTape(w, r) },
+		func(w http.ResponseWriter, r *http.Request) {
+			a.requireRole(storageRoles...)(http.HandlerFunc(a.handleEjectTape)).ServeHTTP(w, r)
+		},
 		nil, nil, nil,
 	))

View File

@@ -286,6 +286,193 @@ func (a *App) handleEjectTape(w http.ResponseWriter, r *http.Request) {
	})
}
// handleCreateMediaChanger creates a new media changer/library
func (a *App) handleCreateMediaChanger(w http.ResponseWriter, r *http.Request) {
var req struct {
LibraryID int `json:"library_id"`
Vendor string `json:"vendor"`
Product string `json:"product"`
Serial string `json:"serial"`
NumSlots int `json:"num_slots"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
return
}
if req.LibraryID <= 0 {
writeError(w, errors.ErrValidation("library_id must be greater than 0"))
return
}
if req.NumSlots <= 0 {
req.NumSlots = 10 // Default number of slots
}
if err := a.vtlService.AddMediaChanger(req.LibraryID, req.Vendor, req.Product, req.Serial, req.NumSlots); err != nil {
log.Printf("create media changer error: %v", err)
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to create media changer: %v", err)))
return
}
writeJSON(w, http.StatusCreated, map[string]interface{}{
"message": "Media changer created successfully",
"library_id": req.LibraryID,
})
}
// handleUpdateMediaChanger updates a media changer/library configuration
func (a *App) handleUpdateMediaChanger(w http.ResponseWriter, r *http.Request) {
libraryIDStr := pathParam(r, "/api/v1/vtl/changers/")
libraryID, err := strconv.Atoi(libraryIDStr)
if err != nil || libraryID <= 0 {
writeError(w, errors.ErrValidation("invalid library_id"))
return
}
var req struct {
Vendor string `json:"vendor"`
Product string `json:"product"`
Serial string `json:"serial"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
return
}
if err := a.vtlService.UpdateMediaChanger(libraryID, req.Vendor, req.Product, req.Serial); err != nil {
log.Printf("update media changer error: %v", err)
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to update media changer: %v", err)))
return
}
writeJSON(w, http.StatusOK, map[string]interface{}{
"message": "Media changer updated successfully",
"library_id": libraryID,
})
}
// handleDeleteMediaChanger removes a media changer/library
func (a *App) handleDeleteMediaChanger(w http.ResponseWriter, r *http.Request) {
libraryIDStr := pathParam(r, "/api/v1/vtl/changers/")
libraryID, err := strconv.Atoi(libraryIDStr)
if err != nil || libraryID <= 0 {
writeError(w, errors.ErrValidation("invalid library_id"))
return
}
if err := a.vtlService.RemoveMediaChanger(libraryID); err != nil {
log.Printf("delete media changer error: %v", err)
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to delete media changer: %v", err)))
return
}
writeJSON(w, http.StatusOK, map[string]interface{}{
"message": "Media changer deleted successfully",
"library_id": libraryID,
})
}
// handleCreateVTLDrive creates a new drive
func (a *App) handleCreateVTLDrive(w http.ResponseWriter, r *http.Request) {
var req struct {
DriveID int `json:"drive_id"`
LibraryID int `json:"library_id"`
SlotID int `json:"slot_id"`
Vendor string `json:"vendor"`
Product string `json:"product"`
Serial string `json:"serial"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
return
}
if req.DriveID <= 0 {
writeError(w, errors.ErrValidation("drive_id must be greater than 0"))
return
}
if req.LibraryID <= 0 {
writeError(w, errors.ErrValidation("library_id must be greater than 0"))
return
}
if req.SlotID <= 0 {
writeError(w, errors.ErrValidation("slot_id must be greater than 0"))
return
}
if err := a.vtlService.AddDrive(req.DriveID, req.LibraryID, req.SlotID, req.Vendor, req.Product, req.Serial); err != nil {
log.Printf("create VTL drive error: %v", err)
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to create VTL drive: %v", err)))
return
}
writeJSON(w, http.StatusCreated, map[string]interface{}{
"message": "Drive created successfully",
"drive_id": req.DriveID,
})
}
// handleUpdateVTLDrive updates a drive configuration
func (a *App) handleUpdateVTLDrive(w http.ResponseWriter, r *http.Request) {
driveIDStr := pathParam(r, "id")
driveID, err := strconv.Atoi(driveIDStr)
if err != nil || driveID <= 0 {
writeError(w, errors.ErrValidation("invalid drive_id"))
return
}
var req struct {
LibraryID int `json:"library_id"`
SlotID int `json:"slot_id"`
Vendor string `json:"vendor"`
Product string `json:"product"`
Serial string `json:"serial"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
return
}
if err := a.vtlService.UpdateDrive(driveID, req.LibraryID, req.SlotID, req.Vendor, req.Product, req.Serial); err != nil {
log.Printf("update VTL drive error: %v", err)
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to update VTL drive: %v", err)))
return
}
writeJSON(w, http.StatusOK, map[string]string{
"message": "Drive updated successfully",
"drive_id": fmt.Sprintf("%d", driveID),
})
}
// handleDeleteVTLDrive removes a drive
func (a *App) handleDeleteVTLDrive(w http.ResponseWriter, r *http.Request) {
driveIDStr := pathParam(r, "id")
driveID, err := strconv.Atoi(driveIDStr)
if err != nil || driveID <= 0 {
writeError(w, errors.ErrValidation("invalid drive_id"))
return
}
if err := a.vtlService.RemoveDrive(driveID); err != nil {
log.Printf("delete VTL drive error: %v", err)
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to delete VTL drive: %v", err)))
return
}
writeJSON(w, http.StatusOK, map[string]string{
"message": "Drive deleted successfully",
"drive_id": fmt.Sprintf("%d", driveID),
})
}
// handleListVTLDevicesForISCSI returns all tape devices (drives and medium changers) for iSCSI passthrough
func (a *App) handleListVTLDevicesForISCSI(w http.ResponseWriter, r *http.Request) {
	devices := []map[string]interface{}{}

View File

@@ -740,7 +740,12 @@ func (s *VTLService) ListMediaChangers() ([]models.VTLMediaChanger, error) {
 	// Parse device.conf to get libraries
 	deviceConfig, err := s.parseDeviceConfig()
+	if err != nil {
+		log.Printf("Warning: failed to parse device.conf: %v", err)
+	}
 	if err == nil && len(deviceConfig.Libraries) > 0 {
+		log.Printf("Found %d libraries in device.conf", len(deviceConfig.Libraries))
 		// Count drives per library
 		drivesPerLibrary := make(map[int]int)
 		for _, drive := range deviceConfig.Drives {
@@ -750,11 +755,37 @@ func (s *VTLService) ListMediaChangers() ([]models.VTLMediaChanger, error) {
 		// Count slots per library from library_contents
 		slotsPerLibrary := make(map[int]int)
 		for _, lib := range deviceConfig.Libraries {
-			slotMap, err := s.parseLibraryContents(lib.ID)
-			if err == nil {
-				slotsPerLibrary[lib.ID] = len(slotMap)
+			// Count all slots (including empty ones) by reading file directly
+			contentsPath := fmt.Sprintf("/etc/mhvtl/library_contents.%d", lib.ID)
+			if file, err := os.Open(contentsPath); err == nil {
+				scanner := bufio.NewScanner(file)
+				slotRegex := regexp.MustCompile(`^Slot\s+(\d+):`)
+				maxSlot := 0
+				for scanner.Scan() {
+					line := strings.TrimSpace(scanner.Text())
+					if line == "" || strings.HasPrefix(line, "#") {
+						continue
+					}
+					if matches := slotRegex.FindStringSubmatch(line); len(matches) >= 2 {
+						if slotID, err := strconv.Atoi(matches[1]); err == nil && slotID > maxSlot {
+							maxSlot = slotID
+						}
+					}
+				}
+				file.Close()
+				if maxSlot > 0 {
+					slotsPerLibrary[lib.ID] = maxSlot
+				} else {
+					// Fallback: try parseLibraryContents
+					slotMap, _ := s.parseLibraryContents(lib.ID)
+					slotsPerLibrary[lib.ID] = len(slotMap)
+					if slotsPerLibrary[lib.ID] == 0 {
+						slotsPerLibrary[lib.ID] = 10 // Default
+					}
+				}
 			} else {
-				slotsPerLibrary[lib.ID] = 10 // Default
+				// File doesn't exist, use default
+				slotsPerLibrary[lib.ID] = 10
 			}
 		}
@@ -1026,6 +1057,542 @@ func (s *VTLService) parseDeviceConfig() (*DeviceConfig, error) {
	return config, scanner.Err()
}
// writeDeviceConfig writes device.conf from DeviceConfig struct
func (s *VTLService) writeDeviceConfig(config *DeviceConfig) error {
// Ensure filesystem is writable (remount if needed)
parentDir := filepath.Dir(s.deviceConfigPath)
if err := s.ensureWritableFilesystem(parentDir); err != nil {
log.Printf("Warning: failed to ensure writable filesystem for %s: %v", parentDir, err)
// Continue anyway, might still work
}
// Create backup of existing config
backupPath := s.deviceConfigPath + ".backup"
if _, err := os.Stat(s.deviceConfigPath); err == nil {
// File exists, create backup
if err := exec.Command("cp", s.deviceConfigPath, backupPath).Run(); err != nil {
log.Printf("Warning: failed to create backup of device.conf: %v", err)
}
}
// Try to create file, if it fails due to read-only, try remount again
file, err := os.Create(s.deviceConfigPath)
if err != nil {
// Check if it's a read-only filesystem error
if strings.Contains(err.Error(), "read-only") || strings.Contains(err.Error(), "read only") {
log.Printf("Filesystem is read-only, attempting remount for %s", parentDir)
if remountErr := s.remountReadWrite(parentDir); remountErr != nil {
return fmt.Errorf("failed to remount filesystem as read-write: %v", remountErr)
}
// Try again after remount
file, err = os.Create(s.deviceConfigPath)
if err != nil {
return fmt.Errorf("failed to create device.conf after remount: %v", err)
}
} else {
return fmt.Errorf("failed to create device.conf: %v", err)
}
}
defer file.Close()
writer := bufio.NewWriter(file)
// Write libraries and their drives
for _, lib := range config.Libraries {
// Write library header
writer.WriteString(fmt.Sprintf("Library: %d CHANNEL: 00 TARGET: 00 LUN: 00\n", lib.ID))
if lib.Vendor != "" {
writer.WriteString(fmt.Sprintf("Vendor identification: %s\n", lib.Vendor))
} else {
writer.WriteString("Vendor identification: STK\n")
}
if lib.Product != "" {
writer.WriteString(fmt.Sprintf("Product identification: %s\n", lib.Product))
} else {
// Default product based on library ID
if lib.ID == 30 {
writer.WriteString("Product identification: L80\n")
} else {
writer.WriteString("Product identification: L700\n")
}
}
if lib.Serial != "" {
writer.WriteString(fmt.Sprintf("Unit serial number: %s\n", lib.Serial))
} else {
writer.WriteString(fmt.Sprintf("Unit serial number: %08d\n", lib.ID))
}
writer.WriteString("\n")
// Write drives for this library
for _, drive := range config.Drives {
if drive.LibraryID == lib.ID {
				// Calculate CHANNEL, TARGET, LUN from drive ID
				// Drive ID format: libraryID + slot (e.g., 11 = library 10, slot 1)
channel := "00"
target := fmt.Sprintf("%02d", (drive.ID%100)/10)
lun := "00"
writer.WriteString(fmt.Sprintf("Drive: %d CHANNEL: %s TARGET: %s LUN: %s\n", drive.ID, channel, target, lun))
writer.WriteString(fmt.Sprintf("Library ID: %d\n", drive.LibraryID))
writer.WriteString(fmt.Sprintf("Slot: %d\n", drive.SlotID))
if drive.Vendor != "" {
writer.WriteString(fmt.Sprintf("Vendor identification: %s\n", drive.Vendor))
} else {
writer.WriteString("Vendor identification: IBM\n")
}
if drive.Product != "" {
writer.WriteString(fmt.Sprintf("Product identification: %s\n", drive.Product))
} else {
writer.WriteString("Product identification: ULT3580-TD5\n")
}
if drive.Serial != "" {
writer.WriteString(fmt.Sprintf("Unit serial number: %s\n", drive.Serial))
} else {
writer.WriteString(fmt.Sprintf("Unit serial number: %08d\n", drive.ID))
}
writer.WriteString("\n")
}
}
}
// Flush and sync to ensure data is written to disk
if err := writer.Flush(); err != nil {
return fmt.Errorf("failed to flush device.conf: %v", err)
}
if err := file.Sync(); err != nil {
log.Printf("Warning: failed to sync device.conf: %v", err)
}
log.Printf("Successfully wrote device.conf with %d libraries and %d drives", len(config.Libraries), len(config.Drives))
return nil
}
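Given the default branches in `writeDeviceConfig` above, a config holding one library (ID 10) and one drive (ID 11, slot 1) with all identity fields left empty serializes to a device.conf of roughly this shape (TARGET 01 follows from `(11 % 100) / 10`, serials from the `%08d` defaults):

```
Library: 10 CHANNEL: 00 TARGET: 00 LUN: 00
Vendor identification: STK
Product identification: L700
Unit serial number: 00000010

Drive: 11 CHANNEL: 00 TARGET: 01 LUN: 00
Library ID: 10
Slot: 1
Vendor identification: IBM
Product identification: ULT3580-TD5
Unit serial number: 00000011
```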
// AddMediaChanger adds a new media changer/library to device.conf
func (s *VTLService) AddMediaChanger(libraryID int, vendor, product, serial string, numSlots int) error {
if libraryID <= 0 {
return fmt.Errorf("library ID must be greater than 0")
}
// Parse existing config
config, err := s.parseDeviceConfig()
if err != nil {
// If file doesn't exist, create new config
config = &DeviceConfig{
Libraries: []LibraryConfig{},
Drives: []DriveConfig{},
}
}
// Check if library already exists
for _, lib := range config.Libraries {
if lib.ID == libraryID {
return fmt.Errorf("library %d already exists", libraryID)
}
}
// Set defaults
if vendor == "" {
vendor = "STK"
}
if product == "" {
if libraryID == 30 {
product = "L80"
} else {
product = "L700"
}
}
if serial == "" {
serial = fmt.Sprintf("%08d", libraryID)
}
// Add new library
newLib := LibraryConfig{
ID: libraryID,
Vendor: vendor,
Product: product,
Serial: serial,
}
config.Libraries = append(config.Libraries, newLib)
// Write updated config
if err := s.writeDeviceConfig(config); err != nil {
return fmt.Errorf("failed to write device.conf: %v", err)
}
// Create library_contents file if it doesn't exist
contentsPath := fmt.Sprintf("/etc/mhvtl/library_contents.%d", libraryID)
if _, err := os.Stat(contentsPath); os.IsNotExist(err) {
// Ensure filesystem is writable
parentDir := filepath.Dir(contentsPath)
if err := s.ensureWritableFilesystem(parentDir); err != nil {
log.Printf("Warning: failed to ensure writable filesystem for %s: %v", parentDir, err)
}
file, err := os.Create(contentsPath)
if err != nil {
// If read-only, try remount
if strings.Contains(err.Error(), "read-only") || strings.Contains(err.Error(), "read only") {
if remountErr := s.remountReadWrite(parentDir); remountErr != nil {
log.Printf("Warning: failed to remount for library_contents: %v", remountErr)
} else {
// Retry after remount
file, err = os.Create(contentsPath)
}
}
if err != nil {
log.Printf("Warning: failed to create library_contents.%d: %v", libraryID, err)
} else {
defer file.Close()
// Write library_contents in correct format
file.WriteString("VERSION: 2\n\n")
// Drives will be added when drives are assigned to this library
file.WriteString("Picker 1:\n\n")
				// Initialize with empty storage slots ("Slot N:" is what the
				// slot-counting regex in ListMediaChangers matches)
				for i := 1; i <= numSlots; i++ {
					file.WriteString(fmt.Sprintf("Slot %d:\n", i))
				}
log.Printf("Created library_contents.%d with %d slots", libraryID, numSlots)
}
} else {
defer file.Close()
// Write library_contents in correct format
file.WriteString("VERSION: 2\n\n")
// Drives will be added when drives are assigned to this library
file.WriteString("Picker 1:\n\n")
			// Initialize with empty storage slots ("Slot N:" is what the
			// slot-counting regex in ListMediaChangers matches)
			for i := 1; i <= numSlots; i++ {
				file.WriteString(fmt.Sprintf("Slot %d:\n", i))
			}
log.Printf("Created library_contents.%d with %d slots", libraryID, numSlots)
}
}
// Restart mhvtl service to reflect changes (new library needs to be detected)
if err := s.RestartService(); err != nil {
log.Printf("Warning: failed to restart mhvtl service after adding media changer: %v", err)
// Continue even if restart fails - library is added to config
}
log.Printf("Added media changer: Library %d (%s %s)", libraryID, vendor, product)
return nil
}
// RemoveMediaChanger removes a media changer/library from device.conf
func (s *VTLService) RemoveMediaChanger(libraryID int) error {
if libraryID <= 0 {
return fmt.Errorf("library ID must be greater than 0")
}
// Parse existing config
config, err := s.parseDeviceConfig()
if err != nil {
return fmt.Errorf("failed to parse device.conf: %v", err)
}
// Find and remove library
found := false
newLibraries := []LibraryConfig{}
for _, lib := range config.Libraries {
if lib.ID != libraryID {
newLibraries = append(newLibraries, lib)
} else {
found = true
}
}
if !found {
return fmt.Errorf("library %d not found", libraryID)
}
// Remove all drives associated with this library
newDrives := []DriveConfig{}
for _, drive := range config.Drives {
if drive.LibraryID != libraryID {
newDrives = append(newDrives, drive)
}
}
config.Libraries = newLibraries
config.Drives = newDrives
// Write updated config
if err := s.writeDeviceConfig(config); err != nil {
return fmt.Errorf("failed to write device.conf: %v", err)
}
// Optionally remove library_contents file (but keep it for safety)
// contentsPath := fmt.Sprintf("/etc/mhvtl/library_contents.%d", libraryID)
// os.Remove(contentsPath)
// Restart mhvtl service to reflect changes (library removal)
if err := s.RestartService(); err != nil {
log.Printf("Warning: failed to restart mhvtl service after removing media changer: %v", err)
// Continue even if restart fails - library is removed from config
}
log.Printf("Removed media changer: Library %d", libraryID)
return nil
}
// UpdateMediaChanger updates a media changer/library configuration
func (s *VTLService) UpdateMediaChanger(libraryID int, vendor, product, serial string) error {
if libraryID <= 0 {
return fmt.Errorf("library ID must be greater than 0")
}
// Parse existing config
config, err := s.parseDeviceConfig()
if err != nil {
return fmt.Errorf("failed to parse device.conf: %v", err)
}
// Find and update library
found := false
for i := range config.Libraries {
if config.Libraries[i].ID == libraryID {
if vendor != "" {
config.Libraries[i].Vendor = vendor
}
if product != "" {
config.Libraries[i].Product = product
}
if serial != "" {
config.Libraries[i].Serial = serial
}
found = true
break
}
}
if !found {
return fmt.Errorf("library %d not found", libraryID)
}
// Write updated config
if err := s.writeDeviceConfig(config); err != nil {
return fmt.Errorf("failed to write device.conf: %v", err)
}
// Restart mhvtl service to reflect changes (library config update)
if err := s.RestartService(); err != nil {
log.Printf("Warning: failed to restart mhvtl service after updating media changer: %v", err)
// Continue even if restart fails - library is updated in config
}
log.Printf("Updated media changer: Library %d", libraryID)
return nil
}
// AddDrive adds a new drive to device.conf
func (s *VTLService) AddDrive(driveID, libraryID, slotID int, vendor, product, serial string) error {
if driveID <= 0 {
return fmt.Errorf("drive ID must be greater than 0")
}
if libraryID <= 0 {
return fmt.Errorf("library ID must be greater than 0")
}
if slotID <= 0 {
return fmt.Errorf("slot ID must be greater than 0")
}
// Parse existing config
config, err := s.parseDeviceConfig()
if err != nil {
// If file doesn't exist, create new config
config = &DeviceConfig{
Libraries: []LibraryConfig{},
Drives: []DriveConfig{},
}
}
// Check if library exists
libraryExists := false
for _, lib := range config.Libraries {
if lib.ID == libraryID {
libraryExists = true
break
}
}
if !libraryExists {
return fmt.Errorf("library %d does not exist", libraryID)
}
// Check if drive already exists
for _, drive := range config.Drives {
if drive.ID == driveID {
return fmt.Errorf("drive %d already exists", driveID)
}
}
// Calculate TARGET for this drive (used to determine device path)
// TARGET = (driveID % 100) / 10
// This ensures each drive gets a unique TARGET
newTarget := (driveID % 100) / 10
// Check for TARGET conflict with existing drives
// Each TARGET maps to a unique device path (/dev/stX), so we need to ensure uniqueness
for _, drive := range config.Drives {
existingTarget := (drive.ID % 100) / 10
if existingTarget == newTarget {
// TARGET maps to device: TARGET 01 -> /dev/st0, TARGET 02 -> /dev/st1, etc.
deviceNum := newTarget - 1
return fmt.Errorf("drive ID %d would conflict with drive %d (both use TARGET %02d, device /dev/st%d). Please use a different drive ID", driveID, drive.ID, existingTarget, deviceNum)
}
}
// Set defaults
if vendor == "" {
vendor = "IBM"
}
if product == "" {
product = "ULT3580-TD5"
}
if serial == "" {
serial = fmt.Sprintf("%08d", driveID)
}
// Add new drive
newDrive := DriveConfig{
ID: driveID,
LibraryID: libraryID,
SlotID: slotID,
Vendor: vendor,
Product: product,
Serial: serial,
}
config.Drives = append(config.Drives, newDrive)
// Write updated config
if err := s.writeDeviceConfig(config); err != nil {
return fmt.Errorf("failed to write device.conf: %v", err)
}
// Update library_contents to add Drive entry if not exists
if err := s.updateLibraryContentsForDrive(libraryID, driveID); err != nil {
log.Printf("Warning: failed to update library_contents.%d for drive %d: %v", libraryID, driveID, err)
// Continue even if library_contents update fails
}
// Restart mhvtl service to reflect changes (new drive needs to be detected)
if err := s.RestartService(); err != nil {
log.Printf("Warning: failed to restart mhvtl service after adding drive: %v", err)
// Continue even if restart fails - drive is added to config
}
log.Printf("Added drive: Drive %d (Library %d, Slot %d)", driveID, libraryID, slotID)
return nil
}
// RemoveDrive removes a drive from device.conf
func (s *VTLService) RemoveDrive(driveID int) error {
if driveID <= 0 {
return fmt.Errorf("drive ID must be greater than 0")
}
// Parse existing config
config, err := s.parseDeviceConfig()
if err != nil {
return fmt.Errorf("failed to parse device.conf: %v", err)
}
// Find and remove drive
found := false
newDrives := []DriveConfig{}
for _, drive := range config.Drives {
if drive.ID != driveID {
newDrives = append(newDrives, drive)
} else {
found = true
}
}
if !found {
return fmt.Errorf("drive %d not found", driveID)
}
config.Drives = newDrives
// Write updated config
if err := s.writeDeviceConfig(config); err != nil {
return fmt.Errorf("failed to write device.conf: %v", err)
}
// Restart mhvtl service to reflect changes (drive removal)
if err := s.RestartService(); err != nil {
log.Printf("Warning: failed to restart mhvtl service after removing drive: %v", err)
// Continue even if restart fails - drive is removed from config
}
log.Printf("Removed drive: Drive %d", driveID)
return nil
}
// UpdateDrive updates a drive configuration
func (s *VTLService) UpdateDrive(driveID, libraryID, slotID int, vendor, product, serial string) error {
if driveID <= 0 {
return fmt.Errorf("drive ID must be greater than 0")
}
// Parse existing config
config, err := s.parseDeviceConfig()
if err != nil {
return fmt.Errorf("failed to parse device.conf: %v", err)
}
// Find and update drive
found := false
for i := range config.Drives {
if config.Drives[i].ID == driveID {
if libraryID > 0 {
// Check if library exists
libraryExists := false
for _, lib := range config.Libraries {
if lib.ID == libraryID {
libraryExists = true
break
}
}
if !libraryExists {
return fmt.Errorf("library %d does not exist", libraryID)
}
config.Drives[i].LibraryID = libraryID
}
if slotID > 0 {
config.Drives[i].SlotID = slotID
}
if vendor != "" {
config.Drives[i].Vendor = vendor
}
if product != "" {
config.Drives[i].Product = product
}
if serial != "" {
config.Drives[i].Serial = serial
}
found = true
break
}
}
if !found {
return fmt.Errorf("drive %d not found", driveID)
}
// Write updated config
if err := s.writeDeviceConfig(config); err != nil {
return fmt.Errorf("failed to write device.conf: %v", err)
}
// Restart mhvtl service to reflect changes (drive config update)
if err := s.RestartService(); err != nil {
log.Printf("Warning: failed to restart mhvtl service after updating drive: %v", err)
// Continue even if restart fails - drive is updated in config
}
log.Printf("Updated drive: Drive %d", driveID)
return nil
}
// parseLibraryContents parses library_contents.X file
func (s *VTLService) parseLibraryContents(libraryID int) (map[int]string, error) {
	// Map slot ID to barcode
@@ -1067,32 +1634,31 @@ func (s *VTLService) parseLibraryContents(libraryID int) (map[int]string, error)
}
 // findDeviceForDrive finds device path for a drive ID
+// In mhvtl, device path is determined by the order drives appear in device.conf
+// TARGET number in device.conf maps to device: TARGET 01 -> /dev/st0, TARGET 02 -> /dev/st1, etc.
 func (s *VTLService) findDeviceForDrive(driveID int) string {
-	// Try to find device in /sys/class/scsi_tape/
-	tapePath := "/sys/class/scsi_tape"
-	entries, err := os.ReadDir(tapePath)
-	if err != nil {
-		return fmt.Sprintf("/dev/st%d", driveID%10) // Fallback
-	}
-	for _, entry := range entries {
-		if !entry.IsDir() {
-			continue
-		}
-		deviceName := entry.Name()
-		if !strings.HasPrefix(deviceName, "st") && !strings.HasPrefix(deviceName, "nst") {
-			continue
-		}
-		// Check if this device matches the drive ID
-		// This is a simplified check - in real implementation, we'd need to map device to drive ID
-		deviceID := s.getDriveIDFromDevice(deviceName)
-		if deviceID == driveID {
-			return fmt.Sprintf("/dev/%s", deviceName)
-		}
+	// Parse device.conf to get TARGET for this drive
+	deviceConfig, err := s.parseDeviceConfig()
+	if err == nil {
+		// Find the drive and calculate its TARGET
+		for _, drive := range deviceConfig.Drives {
+			if drive.ID == driveID {
+				// Calculate TARGET from drive ID (same as in writeDeviceConfig)
+				target := (driveID % 100) / 10
+				// In mhvtl: TARGET 01 -> /dev/st0, TARGET 02 -> /dev/st1, TARGET 03 -> /dev/st2
+				// Device numbering is 0-based, TARGET is effectively 1-based for the ones digit
+				// So: TARGET 01 (ones=1) -> st0, TARGET 02 (ones=2) -> st1, etc.
+				return fmt.Sprintf("/dev/st%d", target-1)
+			}
+		}
 	}
-	return fmt.Sprintf("/dev/st%d", driveID%10) // Fallback
+	// Fallback: calculate TARGET from drive ID
+	target := (driveID % 100) / 10
+	if target > 0 {
+		return fmt.Sprintf("/dev/st%d", target-1) // TARGET 01 -> st0, TARGET 02 -> st1
+	}
+	return "/dev/st0" // Default fallback
 }
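The TARGET arithmetic shared by `writeDeviceConfig` and `findDeviceForDrive` can be checked in isolation; the helper name below is illustrative, the formula is the one from the diff.

```go
package main

import "fmt"

// devicePathForDriveID isolates the mapping used above:
// TARGET = (driveID % 100) / 10, and the st device index is TARGET - 1
// (e.g. drive 11 -> TARGET 01 -> /dev/st0).
func devicePathForDriveID(driveID int) string {
	target := (driveID % 100) / 10
	if target > 0 {
		return fmt.Sprintf("/dev/st%d", target-1)
	}
	return "/dev/st0" // Default fallback, as in findDeviceForDrive
}

func main() {
	// mhvtl-style IDs: drive 11 in library 10, 21 in library 20, 31 in library 30
	for _, id := range []int{11, 21, 31} {
		fmt.Printf("drive %d -> %s\n", id, devicePathForDriveID(id))
	}
}
```

Note that drives 11 and 12 would both compute TARGET 1 and collide on /dev/st0 — exactly the conflict that the TARGET-uniqueness check in `AddDrive` rejects.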
// checkMediaLoadedByDriveID checks if media is loaded in a drive by drive ID
@@ -1477,6 +2043,200 @@ func (s *VTLService) addTapeToLibraryContents(libraryID int, slotID int, barcode
	return nil
}
// updateLibraryContentsForDrive adds or updates Drive entry in library_contents file
func (s *VTLService) updateLibraryContentsForDrive(libraryID, driveID int) error {
contentsPath := fmt.Sprintf("/etc/mhvtl/library_contents.%d", libraryID)
// Ensure filesystem is writable
parentDir := filepath.Dir(contentsPath)
if err := s.ensureWritableFilesystem(parentDir); err != nil {
log.Printf("Warning: failed to ensure writable filesystem for %s: %v", parentDir, err)
}
// Check if file exists, if not create it with proper format
file, err := os.OpenFile(contentsPath, os.O_RDWR|os.O_CREATE, 0644)
if err != nil {
// If read-only, try remount
if strings.Contains(err.Error(), "read-only") || strings.Contains(err.Error(), "read only") {
if remountErr := s.remountReadWrite(parentDir); remountErr != nil {
return fmt.Errorf("failed to remount filesystem: %v", remountErr)
}
// Retry after remount
file, err = os.OpenFile(contentsPath, os.O_RDWR|os.O_CREATE, 0644)
if err != nil {
return fmt.Errorf("failed to open library_contents file after remount: %v", err)
}
} else {
return fmt.Errorf("failed to open library_contents file: %v", err)
}
}
defer file.Close()
// Read all lines
var lines []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
if err := scanner.Err(); err != nil {
return fmt.Errorf("failed to read library_contents file: %v", err)
}
// If file is empty or doesn't have VERSION, create new format
if len(lines) == 0 {
lines = []string{
"VERSION: 2",
"",
"Picker 1:",
"",
}
} else {
// Check if VERSION exists
hasVersion := false
for _, line := range lines {
if strings.Contains(strings.TrimSpace(line), "VERSION:") {
hasVersion = true
break
}
}
if !hasVersion {
// Prepend VERSION
lines = append([]string{"VERSION: 2", ""}, lines...)
}
}
// Find drive number (sequential number for drives in this library)
// Count existing drives
driveRegex := regexp.MustCompile(`^Drive\s+(\d+):`)
maxDriveNum := 0
driveExists := false
driveNum := 0
for _, line := range lines {
trimmed := strings.TrimSpace(line)
if matches := driveRegex.FindStringSubmatch(trimmed); len(matches) >= 2 {
if num, err := strconv.Atoi(matches[1]); err == nil {
if num > maxDriveNum {
maxDriveNum = num
}
}
}
}
// Check if this drive already exists (by checking device.conf drive ID mapping)
// For now, we'll use sequential numbering
// Find if drive entry already exists by checking all drives in device.conf for this library
deviceConfig, err := s.parseDeviceConfig()
if err == nil {
driveIndex := 1
for _, drive := range deviceConfig.Drives {
if drive.LibraryID == libraryID {
if drive.ID == driveID {
driveNum = driveIndex
break
}
driveIndex++
}
}
if driveNum == 0 {
driveNum = maxDriveNum + 1
}
} else {
driveNum = maxDriveNum + 1
}
// Check if Drive entry already exists
drivePattern := fmt.Sprintf("Drive %d:", driveNum)
for _, line := range lines {
if strings.Contains(strings.TrimSpace(line), drivePattern) {
driveExists = true
break
}
}
// If drive already exists, no need to update
if driveExists {
return nil
}
// Build new lines
var newLines []string
insertedDrive := false
afterVersion := false
pickerFound := false
for _, line := range lines {
trimmed := strings.TrimSpace(line)
// Track position after VERSION
if strings.Contains(trimmed, "VERSION:") {
afterVersion = true
newLines = append(newLines, line)
continue
}
// Insert Drive entries after VERSION and before Picker
if afterVersion && !insertedDrive {
if strings.HasPrefix(trimmed, "Picker") {
pickerFound = true
// Insert drive before Picker
newLines = append(newLines, fmt.Sprintf("Drive %d:", driveNum))
newLines = append(newLines, "") // Empty line
insertedDrive = true
} else if strings.HasPrefix(trimmed, "MAP") && !pickerFound {
// If no Picker found, insert before MAP
newLines = append(newLines, fmt.Sprintf("Drive %d:", driveNum))
newLines = append(newLines, "") // Empty line
insertedDrive = true
}
}
newLines = append(newLines, line)
}
// If drive wasn't inserted, add it at appropriate position
if !insertedDrive {
// Find position to insert (after VERSION, before Picker or MAP)
insertPos := -1
for idx, line := range newLines {
trimmed := strings.TrimSpace(line)
if strings.HasPrefix(trimmed, "Picker") || strings.HasPrefix(trimmed, "MAP") {
insertPos = idx
break
}
}
if insertPos == -1 {
// Add at end
newLines = append(newLines, fmt.Sprintf("Drive %d:", driveNum))
} else {
// Insert at position
newLines = append(newLines[:insertPos], append([]string{fmt.Sprintf("Drive %d:", driveNum), ""}, newLines[insertPos:]...)...)
}
}
// Write back to file
if err := file.Truncate(0); err != nil {
return fmt.Errorf("failed to truncate library_contents file: %v", err)
}
if _, err := file.Seek(0, 0); err != nil {
return fmt.Errorf("failed to seek library_contents file: %v", err)
}
writer := bufio.NewWriter(file)
for _, line := range newLines {
writer.WriteString(line + "\n")
}
if err := writer.Flush(); err != nil {
return fmt.Errorf("failed to flush library_contents file: %v", err)
}
if err := file.Sync(); err != nil {
log.Printf("Warning: failed to sync library_contents file: %v", err)
}
log.Printf("Updated library_contents.%d with Drive %d entry", libraryID, driveNum)
return nil
}
// ensureWritableFilesystem checks if filesystem is writable and remounts if needed
func (s *VTLService) ensureWritableFilesystem(path string) error {
// Try to create a test file to check if writable
@@ -1498,9 +2258,12 @@ func (s *VTLService) remountReadWrite(path string) error {
// Try /opt first, then /, then use findmnt
var mountPoint string
// Check if path is under /opt or /etc
if strings.HasPrefix(path, "/opt") {
mountPoint = "/opt"
} else if strings.HasPrefix(path, "/etc") {
// For /etc, we need to remount root filesystem
mountPoint = "/"
} else {
mountPoint = "/"
}

scripts/push-to-repo.sh Executable file

@@ -0,0 +1,113 @@
#!/bin/bash
#
# Script to push Atlas changes to repository
# This script commits all changes, updates version, and pushes to remote
#
# Usage: ./scripts/push-to-repo.sh [commit message] [--skip-version]
#
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
cd "$REPO_ROOT"
# Check if git is available
if ! command -v git &>/dev/null; then
echo -e "${RED}Error: git is not installed${NC}"
exit 1
fi
# Check if we're in a git repository
if ! git rev-parse --git-dir &>/dev/null; then
echo -e "${RED}Error: Not in a git repository${NC}"
exit 1
fi
# Get commit message from argument or use default
COMMIT_MSG="${1:-Update Atlas with VTL features and improvements}"
# Check if --skip-version flag is set
SKIP_VERSION=false
if [[ "$*" == *"--skip-version"* ]]; then
SKIP_VERSION=true
fi
echo -e "${GREEN}Preparing to push changes to repository...${NC}"
# Check for uncommitted changes. `git diff` alone misses untracked files,
# which we do stage later with `git add -A`, so use porcelain status instead.
if [[ -z "$(git status --porcelain)" ]]; then
echo -e "${YELLOW}No changes to commit${NC}"
exit 0
fi
# Show status
echo -e "${GREEN}Current git status:${NC}"
git status --short
# Ask for confirmation
read -p "Continue with commit and push? (y/n) " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo -e "${YELLOW}Aborted${NC}"
exit 0
fi
# Update version if not skipped
if [[ "$SKIP_VERSION" == false ]]; then
echo -e "${GREEN}Updating version...${NC}"
# You can add version update logic here if needed
# For example, update a VERSION file or tag
fi
# Stage all changes
echo -e "${GREEN}Staging all changes...${NC}"
git add -A
# Commit changes
echo -e "${GREEN}Committing changes...${NC}"
git commit -m "$COMMIT_MSG" || {
echo -e "${YELLOW}No changes to commit${NC}"
exit 0
}
# Get current branch
CURRENT_BRANCH=$(git branch --show-current)
echo -e "${GREEN}Current branch: $CURRENT_BRANCH${NC}"
# Check if remote exists
if ! git remote | grep -q origin; then
echo -e "${YELLOW}Warning: No 'origin' remote found${NC}"
read -p "Do you want to set up a remote? (y/n) " -n 1 -r
echo ""
if [[ $REPLY =~ ^[Yy]$ ]]; then
read -p "Enter remote URL: " REMOTE_URL
git remote add origin "$REMOTE_URL"
else
echo -e "${YELLOW}Skipping push (no remote configured)${NC}"
exit 0
fi
fi
# Push to remote
echo -e "${GREEN}Pushing to remote repository...${NC}"
if git push origin "$CURRENT_BRANCH"; then
echo -e "${GREEN}✓ Successfully pushed to repository${NC}"
else
echo -e "${RED}✗ Push failed${NC}"
echo "You may need to:"
echo " 1. Set upstream: git push -u origin $CURRENT_BRANCH"
echo " 2. Pull first: git pull origin $CURRENT_BRANCH"
exit 1
fi
echo -e "${GREEN}Done!${NC}"


@@ -143,8 +143,45 @@
window.location.href = '/login?return=' + encodeURIComponent(window.location.pathname);
return;
}
// Hide create/update/delete buttons for viewer role
hideButtonsForViewer();
})();
// Get current user role
function getCurrentUserRole() {
try {
const userStr = localStorage.getItem('atlas_user');
if (userStr) {
const user = JSON.parse(userStr);
return (user.role || '').toLowerCase();
}
} catch (e) {
console.error('Error parsing user data:', e);
}
return '';
}
// Check if current user is viewer
function isViewer() {
return getCurrentUserRole() === 'viewer';
}
// Hide create/update/delete buttons for viewer role
function hideButtonsForViewer() {
if (isViewer()) {
// Hide create buttons
document.querySelectorAll('button').forEach(btn => {
const text = btn.textContent || '';
const onclick = btn.getAttribute('onclick') || '';
if (text.includes('Create') || text.includes('Add LUN') || text.includes('Delete') ||
onclick.includes('showCreate') || onclick.includes('addLUN') || onclick.includes('deleteISCSITarget')) {
btn.style.display = 'none';
}
});
}
}
function getAuthHeaders() {
const token = localStorage.getItem('atlas_token');
return {


@@ -88,7 +88,8 @@
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Password *</label>
<input type="password" name="password" required minlength="8" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Minimum 8 characters required</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Role *</label>
@@ -534,13 +535,50 @@ async function loadUsers(forceRefresh = false) {
}
function showCreateUserModal() {
// Reset form when opening modal
const form = document.getElementById('create-user-form');
if (form) {
form.reset();
}
document.getElementById('create-user-modal').classList.remove('hidden');
}
// Flag to prevent double submission
let isCreatingUser = false;
async function createUser(e) {
e.preventDefault();
// Prevent double submission
if (isCreatingUser) {
console.log('User creation already in progress, ignoring duplicate submission');
return;
}
isCreatingUser = true;
const submitButton = e.target.querySelector('button[type="submit"]');
const originalButtonText = submitButton ? submitButton.textContent : '';
// Disable submit button to prevent double clicks
if (submitButton) {
submitButton.disabled = true;
submitButton.textContent = 'Creating...';
}
const formData = new FormData(e.target);
// Frontend validation
const password = formData.get('password');
if (password && password.length < 8) {
alert('Error: Password must be at least 8 characters long');
isCreatingUser = false;
if (submitButton) {
submitButton.disabled = false;
submitButton.textContent = originalButtonText;
}
return;
}
try {
const res = await fetch('/api/v1/users', {
method: 'POST',
@@ -567,18 +605,41 @@ async function createUser(e) {
if (res.ok || res.status === 201) {
console.log('User created successfully, refreshing list...');
// Reset form before closing modal
e.target.reset();
closeModal('create-user-modal');
// Force reload users list - add cache busting
await loadUsers(true);
alert('User created successfully');
} else {
// Extract error message from different possible formats
let errorMsg = 'Failed to create user';
if (data) {
// Try structured error format: {code, message, details}
if (data.message) {
errorMsg = data.message;
// Append details if available
if (data.details) {
errorMsg += ': ' + data.details;
}
}
// Fallback to simple error format: {error}
else if (data.error) {
errorMsg = data.error;
}
}
console.error('Create user failed:', errorMsg, data);
alert(`Error: ${errorMsg}`);
}
} catch (err) {
alert(`Error: ${err.message}`);
} finally {
// Re-enable submit button
isCreatingUser = false;
if (submitButton) {
submitButton.disabled = false;
submitButton.textContent = originalButtonText;
}
}
}


@@ -335,8 +335,50 @@
window.location.href = '/login?return=' + encodeURIComponent(window.location.pathname);
return;
}
// Hide create/update/delete buttons for viewer role
hideButtonsForViewer();
})();
// Get current user role
function getCurrentUserRole() {
try {
const userStr = localStorage.getItem('atlas_user');
if (userStr) {
const user = JSON.parse(userStr);
return (user.role || '').toLowerCase();
}
} catch (e) {
console.error('Error parsing user data:', e);
}
return '';
}
// Check if current user is viewer
function isViewer() {
return getCurrentUserRole() === 'viewer';
}
// Hide create/update/delete buttons for viewer role
function hideButtonsForViewer() {
if (isViewer()) {
// Hide create buttons
const createButtons = document.querySelectorAll('button[onclick*="showCreate"], button[onclick*="Create"], button[onclick*="Import"]');
createButtons.forEach(btn => {
if (btn.onclick && (btn.onclick.toString().includes('Create') || btn.onclick.toString().includes('Import'))) {
btn.style.display = 'none';
}
});
// Also hide by text content
document.querySelectorAll('button').forEach(btn => {
const text = btn.textContent || '';
if (text.includes('Create') || text.includes('Import')) {
btn.style.display = 'none';
}
});
}
}
let currentTab = 'pools';
function switchTab(tab) {


@@ -76,7 +76,10 @@
<h2 class="text-lg font-semibold text-white">Virtual Tape Drives</h2> <h2 class="text-lg font-semibold text-white">Virtual Tape Drives</h2>
<p class="text-xs text-slate-400 mt-1">Manage tape drives and loaded media</p> <p class="text-xs text-slate-400 mt-1">Manage tape drives and loaded media</p>
</div> </div>
<button onclick="loadVTLDrives()" class="text-sm text-slate-400 hover:text-white">Refresh</button> <div class="flex gap-2">
<button onclick="showCreateDriveModal()" class="px-3 py-1.5 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm">Add Drive</button>
<button onclick="loadVTLDrives()" class="text-sm text-slate-400 hover:text-white">Refresh</button>
</div>
</div>
<div id="vtl-drives-list" class="p-4">
<p class="text-slate-400 text-sm">Loading...</p>
@@ -112,7 +115,12 @@
<h2 class="text-lg font-semibold text-white">Media Changer</h2> <h2 class="text-lg font-semibold text-white">Media Changer</h2>
<p class="text-xs text-slate-400 mt-1">Manage tape library slots and operations</p> <p class="text-xs text-slate-400 mt-1">Manage tape library slots and operations</p>
</div> </div>
<button onclick="loadMediaChanger()" class="text-sm text-slate-400 hover:text-white">Refresh</button> <div class="flex gap-2">
<button onclick="showCreateChangerModal()" id="create-changer-btn" class="px-3 py-1.5 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm">
Add Media Changer
</button>
<button onclick="loadMediaChanger()" class="text-sm text-slate-400 hover:text-white">Refresh</button>
</div>
</div>
<div id="vtl-changer-content" class="p-4">
<p class="text-slate-400 text-sm">Loading...</p>
@@ -228,6 +236,107 @@
</div>
</div>
<!-- Create/Edit Media Changer Modal -->
<div id="changer-modal" class="hidden fixed inset-0 bg-black/50 flex items-center justify-center z-50">
<div class="bg-slate-800 rounded-lg border border-slate-700 p-6 max-w-md w-full mx-4">
<h3 id="changer-modal-title" class="text-xl font-semibold text-white mb-4">Add Media Changer</h3>
<form id="changer-form" onsubmit="saveChanger(event)" class="space-y-4">
<input type="hidden" id="changer-library-id">
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Library ID *</label>
<input type="number" id="changer-id-input" name="library_id" min="1" required class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Unique library identifier (e.g., 10, 20, 30)</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Vendor</label>
<input type="text" id="changer-vendor-input" name="vendor" placeholder="STK" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Default: STK</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Product</label>
<input type="text" id="changer-product-input" name="product" placeholder="L700 or L80" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Default: L700 (or L80 for library 30)</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Serial Number</label>
<input type="text" id="changer-serial-input" name="serial" placeholder="Auto-generated" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Leave empty for auto-generation</p>
</div>
<div id="changer-slots-div">
<label class="block text-sm font-medium text-slate-300 mb-1">Number of Slots</label>
<input type="number" id="changer-slots-input" name="num_slots" min="1" value="10" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Default: 10 slots</p>
</div>
<div class="flex gap-2 justify-end">
<button type="button" onclick="closeModal('changer-modal')" class="px-4 py-2 bg-slate-700 hover:bg-slate-600 text-white rounded text-sm">
Cancel
</button>
<button type="submit" class="px-4 py-2 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm">
Save
</button>
</div>
</form>
</div>
</div>
<!-- Create/Edit Drive Modal -->
<div id="drive-modal" class="hidden fixed inset-0 bg-black/50 flex items-center justify-center z-50">
<div class="bg-slate-800 rounded-lg border border-slate-700 p-6 max-w-md w-full mx-4">
<h3 id="drive-modal-title" class="text-xl font-semibold text-white mb-4">Add Drive</h3>
<form id="drive-form" onsubmit="saveDrive(event)" class="space-y-4">
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Drive ID *</label>
<input type="number" id="drive-id-input" name="drive_id" min="1" required class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Unique drive identifier (e.g., 11, 12, 21)</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Library ID *</label>
<input type="number" id="drive-library-id-input" name="library_id" min="1" required class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Library where this drive belongs</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Slot ID *</label>
<input type="number" id="drive-slot-id-input" name="slot_id" min="1" required class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Slot number in the library</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Vendor</label>
<select id="drive-vendor-input" name="vendor" onchange="updateDriveProductOptions()" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<option value="IBM">IBM</option>
<option value="HP">HP</option>
<option value="Quantum">Quantum</option>
<option value="Tandberg">Tandberg</option>
<option value="Overland">Overland</option>
</select>
<p class="text-xs text-slate-400 mt-1">Select drive vendor</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Product (LTO Drive Type)</label>
<select id="drive-product-input" name="product" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<option value="ULT3580-TD5">IBM ULT3580-TD5 (LTO-5)</option>
<option value="ULT3580-TD6">IBM ULT3580-TD6 (LTO-6)</option>
<option value="ULT3580-TD7">IBM ULT3580-TD7 (LTO-7)</option>
<option value="ULT3580-TD8">IBM ULT3580-TD8 (LTO-8)</option>
</select>
<p class="text-xs text-slate-400 mt-1">Select LTO drive generation</p>
</div>
<div>
<label class="block text-sm font-medium text-slate-300 mb-1">Serial Number</label>
<input type="text" id="drive-serial-input" name="serial" placeholder="Auto-generated" class="w-full px-3 py-2 bg-slate-900 border border-slate-700 rounded text-white text-sm focus:outline-none focus:ring-2 focus:ring-blue-600">
<p class="text-xs text-slate-400 mt-1">Leave empty for auto-generation</p>
</div>
<div class="flex gap-2 justify-end">
<button type="button" onclick="closeModal('drive-modal')" class="px-4 py-2 bg-slate-700 hover:bg-slate-600 text-white rounded text-sm">
Cancel
</button>
<button type="submit" class="px-4 py-2 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm">
Save
</button>
</div>
</form>
</div>
</div>
<!-- Service Control Modal -->
<div id="service-control-modal" class="hidden fixed inset-0 bg-black/50 flex items-center justify-center z-50">
<div class="bg-slate-800 rounded-lg border border-slate-700 p-6 max-w-md w-full mx-4">
@@ -263,8 +372,49 @@
window.location.href = '/login?return=' + encodeURIComponent(window.location.pathname);
return;
}
// Hide create/update/delete buttons for viewer role
hideButtonsForViewer();
})();
// Get current user role
function getCurrentUserRole() {
try {
const userStr = localStorage.getItem('atlas_user');
if (userStr) {
const user = JSON.parse(userStr);
return (user.role || '').toLowerCase();
}
} catch (e) {
console.error('Error parsing user data:', e);
}
return '';
}
// Check if current user is viewer
function isViewer() {
return getCurrentUserRole() === 'viewer';
}
// Hide create/update/delete buttons for viewer role
function hideButtonsForViewer() {
if (isViewer()) {
// Hide create/control buttons
document.querySelectorAll('button').forEach(btn => {
const text = btn.textContent || '';
const onclick = btn.getAttribute('onclick') || '';
if (text.includes('Create') || text.includes('Service Control') || text.includes('Add') ||
(text.includes('Edit') && !text.includes('Refresh')) || text.includes('Delete') ||
onclick.includes('showCreate') || onclick.includes('showServiceControl') ||
onclick.includes('showCreateChanger') || onclick.includes('showEditChanger') ||
onclick.includes('deleteChanger') || onclick.includes('showCreateDriveModal') ||
onclick.includes('showEditDriveModal') || onclick.includes('deleteDrive')) {
btn.style.display = 'none';
}
});
}
}
function getAuthHeaders() {
const token = localStorage.getItem('atlas_token');
return {
@@ -342,7 +492,11 @@
// Load Drives
async function loadVTLDrives() {
try {
// Add cache busting to ensure fresh data
const res = await fetch('/api/v1/vtl/drives?_=' + Date.now(), {
headers: getAuthHeaders(),
cache: 'no-cache'
});
if (!res.ok) throw new Error('Failed to load drives');
const drives = await res.json();
@@ -380,6 +534,8 @@
${drive.media_loaded
? `<button onclick="ejectTape(${drive.id})" class="px-3 py-1.5 bg-yellow-600 hover:bg-yellow-700 text-white rounded text-sm">Eject</button>`
: `<button onclick="loadTape(${drive.id})" class="px-3 py-1.5 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm">Load Tape</button>`}
<button onclick="showEditDriveModal(${drive.id})" class="px-3 py-1.5 bg-slate-600 hover:bg-slate-700 text-white rounded text-sm">Edit</button>
<button onclick="deleteDrive(${drive.id})" class="px-3 py-1.5 bg-red-600 hover:bg-red-700 text-white rounded text-sm">Delete</button>
</div>
</div>
</div>
@@ -449,7 +605,12 @@
// Load Media Changer
async function loadMediaChanger() {
try {
// Use /api/v1/vtl/changers instead of /api/v1/vtl/changer/status for better consistency
// Add cache busting to force refresh
const res = await fetch('/api/v1/vtl/changers?t=' + Date.now(), {
headers: getAuthHeaders(),
cache: 'no-cache'
});
const changerEl = document.getElementById('vtl-changer-content');
if (!res.ok) {
@@ -478,17 +639,29 @@
changerEl.innerHTML = changerList.map(changer => `
<div class="bg-slate-900 rounded-lg p-6 border border-slate-700 mb-4">
<div class="flex items-start justify-between">
<div class="flex-1">
<div class="flex items-center gap-2 mb-4">
<h3 class="text-lg font-semibold text-white">Media Changer ${changer.id || changer.library_id || 'N/A'}</h3>
${changer.status === 'online'
? '<span class="px-2 py-1 rounded text-xs font-medium bg-green-900 text-green-300">Online</span>'
: '<span class="px-2 py-1 rounded text-xs font-medium bg-red-900 text-red-300">Offline</span>'}
</div>
<div class="text-sm text-slate-400 space-y-2">
<p><span class="text-slate-300">Device:</span> <span class="text-slate-400 font-mono">${changer.device || 'N/A'}</span></p>
<p><span class="text-slate-300">Library ID:</span> <span class="text-slate-400">${changer.library_id || changer.id || 'N/A'}</span></p>
<p><span class="text-slate-300">Slots:</span> <span class="text-slate-400">${changer.slots || 0}</span></p>
<p><span class="text-slate-300">Drives:</span> <span class="text-slate-400">${changer.drives || 0}</span></p>
</div>
</div>
<div class="flex gap-2 ml-4">
<button onclick="showEditChangerModal(${changer.library_id || changer.id})" class="px-3 py-1.5 bg-blue-600 hover:bg-blue-700 text-white rounded text-sm">
Edit
</button>
<button onclick="deleteChanger(${changer.library_id || changer.id})" class="px-3 py-1.5 bg-red-600 hover:bg-red-700 text-white rounded text-sm">
Delete
</button>
</div>
</div>
</div>
`).join('');
@@ -833,6 +1006,324 @@
document.getElementById(modalId).classList.add('hidden');
}
// Media Changer Management Functions
function showCreateChangerModal() {
document.getElementById('changer-modal-title').textContent = 'Add Media Changer';
document.getElementById('changer-form').reset();
document.getElementById('changer-library-id').value = '';
document.getElementById('changer-id-input').removeAttribute('readonly');
document.getElementById('changer-id-input').disabled = false;
document.getElementById('changer-slots-div').style.display = 'block';
document.getElementById('changer-modal').classList.remove('hidden');
}
function showEditChangerModal(libraryID) {
document.getElementById('changer-modal-title').textContent = 'Edit Media Changer';
document.getElementById('changer-form').reset();
document.getElementById('changer-library-id').value = libraryID;
document.getElementById('changer-id-input').value = libraryID;
document.getElementById('changer-id-input').setAttribute('readonly', 'readonly');
document.getElementById('changer-id-input').disabled = true;
document.getElementById('changer-slots-div').style.display = 'none';
// Load current changer data
fetch(`/api/v1/vtl/changers/${libraryID}`, { headers: getAuthHeaders() })
.then(res => res.json())
.then(changer => {
if (changer.library_id || changer.id) {
document.getElementById('changer-vendor-input').value = changer.vendor || 'STK';
document.getElementById('changer-product-input').value = changer.product || '';
document.getElementById('changer-serial-input').value = changer.serial || '';
}
})
.catch(err => {
console.error('Error loading changer data:', err);
});
document.getElementById('changer-modal').classList.remove('hidden');
}
async function saveChanger(e) {
e.preventDefault();
const formData = new FormData(e.target);
// Get library_id directly from the visible input field (not from FormData to avoid conflict with hidden input)
const libraryIDInput = document.getElementById('changer-id-input');
const libraryID = libraryIDInput ? parseInt(libraryIDInput.value) : parseInt(formData.get('library_id'));
const isEdit = document.getElementById('changer-library-id').value !== '';
// Validate library ID
if (!libraryID || libraryID <= 0 || isNaN(libraryID)) {
alert('Error: Library ID must be a valid number greater than 0');
return;
}
const data = {
library_id: libraryID,
vendor: formData.get('vendor') || '',
product: formData.get('product') || '',
serial: formData.get('serial') || '',
};
if (!isEdit) {
const numSlotsInput = document.getElementById('changer-slots-input');
data.num_slots = numSlotsInput ? parseInt(numSlotsInput.value) : parseInt(formData.get('num_slots')) || 10;
}
try {
let res;
if (isEdit) {
res = await fetch(`/api/v1/vtl/changers/${libraryID}`, {
method: 'PUT',
headers: getAuthHeaders(),
body: JSON.stringify(data)
});
} else {
res = await fetch('/api/v1/vtl/changers', {
method: 'POST',
headers: getAuthHeaders(),
body: JSON.stringify(data)
});
}
const result = await res.json().catch(() => null);
if (res.ok) {
closeModal('changer-modal');
// Wait a moment for backend to finish writing file
await new Promise(resolve => setTimeout(resolve, 500));
// Force refresh with cache busting
await loadMediaChanger();
await refreshVTLStatus();
alert(isEdit ? 'Media changer updated successfully' : 'Media changer created successfully');
} else {
const errorMsg = (result && result.message) ? result.message : 'Failed to save media changer';
alert(`Error: ${errorMsg}`);
}
} catch (err) {
alert(`Error: ${err.message}`);
}
}
async function deleteChanger(libraryID) {
if (!confirm(`Are you sure you want to delete media changer Library ${libraryID}? This will also remove all associated drives.`)) {
return;
}
try {
const res = await fetch(`/api/v1/vtl/changers/${libraryID}`, {
method: 'DELETE',
headers: getAuthHeaders()
});
const result = await res.json().catch(() => null);
if (res.ok) {
alert('Media changer deleted successfully');
loadMediaChanger();
refreshVTLStatus();
} else {
const errorMsg = (result && result.message) ? result.message : 'Failed to delete media changer';
alert(`Error: ${errorMsg}`);
}
} catch (err) {
alert(`Error: ${err.message}`);
}
}
// Drive product options by vendor
const driveProducts = {
'IBM': [
{ value: 'ULT3580-TD5', label: 'ULT3580-TD5 (LTO-5)' },
{ value: 'ULT3580-TD6', label: 'ULT3580-TD6 (LTO-6)' },
{ value: 'ULT3580-TD7', label: 'ULT3580-TD7 (LTO-7)' },
{ value: 'ULT3580-TD8', label: 'ULT3580-TD8 (LTO-8)' }
],
'HP': [
{ value: 'HP LTO-5', label: 'HP LTO-5' },
{ value: 'HP LTO-6', label: 'HP LTO-6' },
{ value: 'HP LTO-7', label: 'HP LTO-7' },
{ value: 'HP LTO-8', label: 'HP LTO-8' }
],
'Quantum': [
{ value: 'Quantum LTO-5', label: 'Quantum LTO-5' },
{ value: 'Quantum LTO-6', label: 'Quantum LTO-6' },
{ value: 'Quantum LTO-7', label: 'Quantum LTO-7' },
{ value: 'Quantum LTO-8', label: 'Quantum LTO-8' }
],
'Tandberg': [
{ value: 'Tandberg LTO-5', label: 'Tandberg LTO-5' },
{ value: 'Tandberg LTO-6', label: 'Tandberg LTO-6' },
{ value: 'Tandberg LTO-7', label: 'Tandberg LTO-7' },
{ value: 'Tandberg LTO-8', label: 'Tandberg LTO-8' }
],
'Overland': [
{ value: 'Overland LTO-5', label: 'Overland LTO-5' },
{ value: 'Overland LTO-6', label: 'Overland LTO-6' },
{ value: 'Overland LTO-7', label: 'Overland LTO-7' },
{ value: 'Overland LTO-8', label: 'Overland LTO-8' }
]
};
function updateDriveProductOptions() {
const vendorSelect = document.getElementById('drive-vendor-input');
const productSelect = document.getElementById('drive-product-input');
const vendor = vendorSelect.value;
// Clear existing options
productSelect.innerHTML = '';
// Add options for selected vendor
if (driveProducts[vendor]) {
driveProducts[vendor].forEach(product => {
const option = document.createElement('option');
option.value = product.value;
option.textContent = product.label;
productSelect.appendChild(option);
});
}
}
// Drive Management Functions
function showCreateDriveModal() {
document.getElementById('drive-modal-title').textContent = 'Add Drive';
document.getElementById('drive-form').reset();
document.getElementById('drive-id-input').removeAttribute('readonly');
document.getElementById('drive-id-input').disabled = false;
// Set default vendor to IBM
document.getElementById('drive-vendor-input').value = 'IBM';
updateDriveProductOptions();
document.getElementById('drive-modal').classList.remove('hidden');
}
async function showEditDriveModal(driveID) {
try {
const res = await fetch(`/api/v1/vtl/drives/${driveID}`, { headers: getAuthHeaders() });
if (!res.ok) throw new Error('Failed to load drive');
const drive = await res.json();
document.getElementById('drive-modal-title').textContent = 'Edit Drive';
document.getElementById('drive-id-input').value = drive.id;
document.getElementById('drive-id-input').setAttribute('readonly', 'readonly');
document.getElementById('drive-id-input').disabled = true;
document.getElementById('drive-library-id-input').value = drive.library_id || '';
document.getElementById('drive-slot-id-input').value = drive.slot_id || '';
// Set vendor and update product options
const vendor = drive.vendor || 'IBM';
document.getElementById('drive-vendor-input').value = vendor;
updateDriveProductOptions();
// Set product (try to match existing value or default to first option)
const productSelect = document.getElementById('drive-product-input');
const product = drive.product || '';
if (product) {
// Try to find matching option
let found = false;
for (let i = 0; i < productSelect.options.length; i++) {
if (productSelect.options[i].value === product) {
productSelect.value = product;
found = true;
break;
}
}
// If not found, add it as a custom option
if (!found) {
const option = document.createElement('option');
option.value = product;
option.textContent = product + ' (Custom)';
option.selected = true;
productSelect.appendChild(option);
}
}
document.getElementById('drive-serial-input').value = drive.serial || '';
document.getElementById('drive-modal').classList.remove('hidden');
} catch (err) {
alert(`Error: ${err.message}`);
}
}
async function saveDrive(event) {
event.preventDefault();
const driveID = parseInt(document.getElementById('drive-id-input').value, 10);
const isEdit = document.getElementById('drive-id-input').hasAttribute('readonly');
const data = {
drive_id: driveID,
library_id: parseInt(document.getElementById('drive-library-id-input').value, 10),
slot_id: parseInt(document.getElementById('drive-slot-id-input').value, 10),
vendor: document.getElementById('drive-vendor-input').value || '',
product: document.getElementById('drive-product-input').value || '',
serial: document.getElementById('drive-serial-input').value || ''
};
try {
let res;
if (isEdit) {
res = await fetch(`/api/v1/vtl/drives/${driveID}`, {
method: 'PUT',
headers: getAuthHeaders(),
body: JSON.stringify(data)
});
} else {
res = await fetch('/api/v1/vtl/drives', {
method: 'POST',
headers: getAuthHeaders(),
body: JSON.stringify(data)
});
}
const result = await res.json().catch(() => null);
if (res.ok) {
closeModal('drive-modal');
// Give the backend a moment to finish writing the drive config
await new Promise(resolve => setTimeout(resolve, 1000));
// Reload the drive list and overall VTL status
await loadVTLDrives();
await refreshVTLStatus();
alert(isEdit ? 'Drive updated successfully' : 'Drive created successfully');
} else {
const errorMsg = (result && result.message) ? result.message : 'Failed to save drive';
alert(`Error: ${errorMsg}`);
}
} catch (err) {
alert(`Error: ${err.message}`);
}
}
async function deleteDrive(driveID) {
if (!confirm(`Are you sure you want to delete Drive ${driveID}?`)) {
return;
}
try {
const res = await fetch(`/api/v1/vtl/drives/${driveID}`, {
method: 'DELETE',
headers: getAuthHeaders()
});
const result = await res.json().catch(() => null);
if (res.ok) {
// Give the backend a moment to finish writing the drive config
await new Promise(resolve => setTimeout(resolve, 500));
await loadVTLDrives();
await refreshVTLStatus();
alert('Drive deleted successfully');
} else {
const errorMsg = (result && result.message) ? result.message : 'Failed to delete drive';
alert(`Error: ${errorMsg}`);
}
} catch (err) {
alert(`Error: ${err.message}`);
}
}
// Update Media Changer Status in dashboard
async function updateMediaChangerStatus() {
try {