Compare commits: main...atlas-alph (5 commits)

| Author | SHA1 | Date |
|---|---|---|
| | 094ea1b1fe | |
| | 7826c6ed24 | |
| | 4c3ea0059d | |
| | 6a5ead9dbf | |
| | 268af8d691 | |
@@ -1,19 +1,20 @@
SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
PlutoOS – Storage Controller Operating System (v1)
AtlasOS – Storage Controller Operating System (v1)

==================================================

1. INTRODUCTION
--------------------------------------------------
1.1 Purpose
This document defines the functional and non-functional requirements for PlutoOS v1,
This document defines the functional and non-functional requirements for AtlasOS v1,
a storage controller operating system built on Linux with ZFS as the core storage engine.
It serves as the authoritative reference for development scope, validation, and acceptance.

1.2 Scope
PlutoOS v1 provides:
AtlasOS v1 provides:
- ZFS pool, dataset, and ZVOL management
- Storage services: SMB, NFS, iSCSI (ZVOL-backed)
- Virtual Tape Library (VTL) with mhvtl for tape emulation
- Automated snapshot management
- Role-Based Access Control (RBAC) and audit logging
- Web-based GUI and local TUI
@@ -36,7 +37,7 @@ Desired State : Configuration stored in DB and applied atomically to system

2. SYSTEM OVERVIEW
--------------------------------------------------
PlutoOS consists of:
AtlasOS consists of:
- Base OS : Minimal Linux (Ubuntu/Debian)
- Data Plane : ZFS and storage services
- Control Plane: Go backend with HTMX-based UI
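The control plane described above pairs a Go HTTP backend with an HTMX-driven UI. As a rough illustration only (this is not AtlasOS code; handler, template, and field names are invented), a fragment-returning handler in that style might look like:

```go
// Illustrative sketch of a Go + HTMX control-plane handler; not part of AtlasOS.
package main

import (
	"html/template"
	"net/http"
)

// poolRows is a hypothetical HTMX fragment: a table body the browser swaps in
// when an hx-get request targets /fragments/pools.
var poolRows = template.Must(template.New("pool-rows").Parse(
	`{{range .}}<tr><td>{{.Name}}</td><td>{{.Health}}</td></tr>{{end}}`))

type pool struct{ Name, Health string }

func handlePoolsFragment(w http.ResponseWriter, r *http.Request) {
	// In a real control plane this data would come from the ZFS data plane.
	pools := []pool{{Name: "tank", Health: "ONLINE"}}
	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	poolRows.Execute(w, pools)
}

func main() {
	http.HandleFunc("/fragments/pools", handlePoolsFragment)
	http.ListenAndServe(":8080", nil)
}
```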
@@ -93,6 +94,18 @@ Viewer : Read-only access
- System SHALL configure initiator ACLs
- System SHALL expose connection instructions

4.6.1 Virtual Tape Library (VTL)
- System SHALL manage mhvtl service (start, stop, restart)
- System SHALL create and manage virtual tape libraries (media changers)
- System SHALL create and manage virtual tape drives (LTO-5 through LTO-8)
- System SHALL create and manage virtual tape cartridges
- System SHALL support tape operations (load, eject, read, write)
- System SHALL manage library_contents files for tape inventory
- System SHALL validate drive ID conflicts to prevent device path collisions
- System SHALL automatically restart mhvtl service after configuration changes
- System SHALL support multiple vendors (IBM, HP, Quantum, Tandberg, Overland)
- System SHALL enforce RBAC for VTL operations (Administrator and Operator only)
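The drive-ID validation requirement above is the kind of check that keeps two virtual drives from claiming the same device path under mhvtl. A minimal sketch of such a check (illustrative only; the types and names are not taken from the AtlasOS codebase):

```go
// Illustrative drive-ID conflict check; not the AtlasOS implementation.
package vtl

import "fmt"

// Drive is a hypothetical view of a configured virtual tape drive.
type Drive struct {
	ID        int
	LibraryID int
}

// ValidateDriveID rejects a new drive whose ID is already in use, since a
// duplicate ID would lead to colliding mhvtl device paths.
func ValidateDriveID(existing []Drive, candidate Drive) error {
	for _, d := range existing {
		if d.ID == candidate.ID {
			return fmt.Errorf("drive ID %d is already used by library %d", candidate.ID, d.LibraryID)
		}
	}
	return nil
}
```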

4.7 Job Management
- System SHALL execute long-running operations as jobs
- System SHALL track job status and progress
@@ -161,7 +174,7 @@ Viewer : Read-only access

7. ACCEPTANCE CRITERIA (v1)
--------------------------------------------------
PlutoOS v1 is accepted when:
AtlasOS v1 is accepted when:
- ZFS pool, dataset, share, and LUN lifecycle works end-to-end
- Snapshot policies are active and observable
- RBAC and audit logging are enforced
README.md
@@ -5,6 +5,7 @@ AtlasOS is an appliance-style storage controller build by Adastra
**v1 Focus**
- ZFS storage engine
- SMB / NFS / iSCSI (ZVOL)
- Virtual Tape Library (VTL) with mhvtl
- Auto snapshots (sanoid)
- RBAC + audit
- TUI (Bubble Tea) + Web GUI (HTMX)
@@ -30,3 +31,50 @@ sudo ./installer/install.sh --offline-bundle /path/to/atlas-bundle
```

See `installer/README.md` and `docs/INSTALLATION.md` for detailed instructions.

## Features

### Storage Management
- **ZFS**: Pool, dataset, and ZVOL management with health monitoring
- **SMB/CIFS**: Windows file sharing with permission management
- **NFS**: Network file sharing with client access control
- **iSCSI**: Block storage with target and LUN management

### Virtual Tape Library (VTL)
- **Media Changers**: Create and manage virtual tape libraries
- **Tape Drives**: Configure virtual drives (LTO-5 through LTO-8)
- **Tape Cartridges**: Create and manage virtual tapes
- **Tape Operations**: Load, eject, and manage tape media
- **Multi-Vendor Support**: IBM, HP, Quantum, Tandberg, Overland
- **Automatic Service Management**: Auto-restart mhvtl after configuration changes

### Security & Access Control
- **RBAC**: Role-based access control (Administrator, Operator, Viewer)
- **Audit Logging**: Immutable audit trail for all operations
- **Authentication**: JWT-based authentication
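Later in this diff, route registrations wrap mutating handlers with `a.requireRole(storageRoles...)`. The middleware itself is not part of this change set, so the following is only a hedged sketch of what a role-checking wrapper of that shape typically looks like; the role source (a header set by an earlier JWT auth step) is an assumption.

```go
// Hypothetical sketch of an RBAC middleware in the style of requireRole;
// the real Atlas implementation is not shown in this diff.
package httpapp

import "net/http"

type Role string

const (
	RoleAdministrator Role = "administrator"
	RoleOperator      Role = "operator"
	RoleViewer        Role = "viewer"
)

// requireRole allows the request through only when the caller's role
// (assumed here to be placed in a header by earlier authentication)
// matches one of the allowed roles.
func requireRole(allowed ...Role) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			userRole := Role(r.Header.Get("X-Atlas-Role")) // assumption, not the Atlas mechanism
			for _, role := range allowed {
				if userRole == role {
					next.ServeHTTP(w, r)
					return
				}
			}
			http.Error(w, "forbidden", http.StatusForbidden)
		})
	}
}
```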

### Monitoring
- **Prometheus Metrics**: System and storage metrics
- **Health Monitoring**: Pool health and capacity tracking
- **Job Management**: Track long-running operations
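The Prometheus metrics feature implies an HTTP metrics endpoint. A minimal, generic sketch using the standard prometheus/client_golang library follows; the metric name, label, sample value, and port are invented for illustration and are not taken from Atlas's metrics package.

```go
// Minimal sketch of exposing a Prometheus metric from a Go service;
// not the Atlas metrics.Collector implementation.
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var poolCapacityBytes = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "atlas_pool_capacity_bytes", // hypothetical metric name
		Help: "Total capacity of a ZFS pool in bytes.",
	},
	[]string{"pool"},
)

func main() {
	prometheus.MustRegister(poolCapacityBytes)
	poolCapacityBytes.WithLabelValues("tank").Set(4 << 40) // example value: 4 TiB

	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":9090", nil)
}
```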
## Installation Directory

Atlas is installed to `/opt/atlas` by default. The installer script will:
1. Install all required dependencies (ZFS, SMB, NFS, iSCSI, mhvtl)
2. Build Atlas binaries
3. Set up systemd services
4. Configure directories and permissions

## Pushing Changes to Repository

Use the provided script to commit and push changes:

```bash
./scripts/push-to-repo.sh "Your commit message"
```

Or skip version update:

```bash
./scripts/push-to-repo.sh "Your commit message" --skip-version
```
atlas.code-workspace (new file, 7 lines)
@@ -0,0 +1,7 @@
{
  "folders": [
    {
      "path": "."
    }
  ]
}
@@ -440,6 +440,26 @@ install_dependencies() {
        }
    fi

    # Install mhvtl (Virtual Tape Library) for VTL functionality
    echo " Installing mhvtl (Virtual Tape Library)..."
    MHVTL_INSTALLER="$SCRIPT_DIR/mhvtl-installer"
    if [[ -f "$MHVTL_INSTALLER" ]]; then
        chmod +x "$MHVTL_INSTALLER" 2>/dev/null || true
        if ! "$MHVTL_INSTALLER"; then
            echo -e "${YELLOW}Warning: mhvtl installer failed, VTL features may not be available${NC}"
            echo " You may need to install mhvtl manually or from source"
        fi
    else
        echo -e "${YELLOW}Warning: mhvtl-installer not found at $MHVTL_INSTALLER${NC}"
        echo " You may need to install mhvtl manually or from source"
    fi

    # Create mhvtl directories if they don't exist
    mkdir -p /etc/mhvtl
    mkdir -p /opt/mhvtl
    chown root:root /etc/mhvtl
    chown root:root /opt/mhvtl

    # Install databases (SQLite for compatibility, PostgreSQL as default)
    echo " Installing database packages..."
    DEBIAN_FRONTEND=noninteractive apt-get install -y -qq \
@@ -474,6 +494,28 @@ install_dependencies() {
        exit 1
    fi

    # Install Rust compiler (for TUI)
    echo " Installing Rust compiler..."
    if ! command -v rustc &>/dev/null; then
        # Install rustup
        curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain stable
        source "$HOME/.cargo/env" || {
            # If source fails, add to PATH manually
            export PATH="$HOME/.cargo/bin:$PATH"
        }
    fi

    # Verify Rust installation
    if command -v rustc &>/dev/null || [ -f "$HOME/.cargo/bin/rustc" ]; then
        if [ -f "$HOME/.cargo/bin/rustc" ]; then
            export PATH="$HOME/.cargo/bin:$PATH"
        fi
        RUST_VERSION=$(rustc --version 2>/dev/null | awk '{print $2}' || echo "installed")
        echo -e "${GREEN} ✓ Rust $RUST_VERSION installed${NC}"
    else
        echo -e "${YELLOW}Warning: Rust installation may need manual setup${NC}"
    fi

    # Install additional utilities
    echo " Installing additional utilities..."
    DEBIAN_FRONTEND=noninteractive apt-get install -y -qq \

installer/mhvtl-installer (new binary file; binary content not shown)
@@ -8,6 +8,7 @@ import (
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"strings"
	"time"
@@ -2046,71 +2047,72 @@ func (a *App) syncSMBSharesFromOS() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// syncISCSITargetsFromOS syncs iSCSI targets from targetcli to the store
|
||||
// syncISCSITargetsFromOS syncs iSCSI targets directly from sysfs (no targetcli needed)
|
||||
func (a *App) syncISCSITargetsFromOS() error {
|
||||
log.Printf("debug: starting syncISCSITargetsFromOS")
|
||||
// Get list of targets from targetcli
|
||||
// Set TARGETCLI_HOME and TARGETCLI_LOCK_DIR to writable directories
|
||||
// Create the directories first if they don't exist
|
||||
os.MkdirAll("/tmp/.targetcli", 0755)
|
||||
os.MkdirAll("/tmp/targetcli-run", 0755)
|
||||
// Service runs as root, no need for sudo
|
||||
cmd := exec.Command("sh", "-c", "TARGETCLI_HOME=/tmp/.targetcli TARGETCLI_LOCK_DIR=/tmp/targetcli-run targetcli /iscsi ls")
|
||||
output, err := cmd.CombinedOutput()
|
||||
log.Printf("debug: starting syncISCSITargetsFromOS - reading from sysfs")
|
||||
|
||||
// Read iSCSI targets directly from /sys/kernel/config/target/iscsi/
|
||||
// This avoids targetcli lock file issues
|
||||
iscsiPath := "/sys/kernel/config/target/iscsi"
|
||||
|
||||
entries, err := os.ReadDir(iscsiPath)
|
||||
if err != nil {
|
||||
// Log the error but don't fail - targetcli might not be configured
|
||||
log.Printf("warning: failed to list iSCSI targets from targetcli: %v (output: %s)", err, string(output))
|
||||
log.Printf("warning: failed to read iSCSI config directory: %v", err)
|
||||
return nil
|
||||
}
|
||||
|
||||
log.Printf("debug: targetcli output: %s", string(output))
|
||||
lines := strings.Split(string(output), "\n")
|
||||
var currentIQN string
|
||||
|
||||
for _, line := range lines {
|
||||
line = strings.TrimSpace(line)
|
||||
if line == "" {
|
||||
for _, entry := range entries {
|
||||
if !entry.IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if this is a target line (starts with "o- iqn.")
|
||||
if strings.HasPrefix(line, "o- iqn.") {
|
||||
log.Printf("debug: found target line: %s", line)
|
||||
// Extract IQN from line like "o- iqn.2025-12.com.atlas:target-1"
|
||||
parts := strings.Fields(line)
|
||||
if len(parts) >= 2 {
|
||||
currentIQN = parts[1]
|
||||
// Check if this is an IQN (starts with "iqn.")
|
||||
iqn := entry.Name()
|
||||
if !strings.HasPrefix(iqn, "iqn.") {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if target already exists in store
|
||||
existingTargets := a.iscsiStore.List()
|
||||
exists := false
|
||||
for _, t := range existingTargets {
|
||||
if t.IQN == currentIQN {
|
||||
exists = true
|
||||
break
|
||||
}
|
||||
log.Printf("debug: found iSCSI target: %s", iqn)
|
||||
|
||||
// Check if target already exists in store
|
||||
existingTargets := a.iscsiStore.List()
|
||||
exists := false
|
||||
for _, t := range existingTargets {
|
||||
if t.IQN == iqn {
|
||||
exists = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if exists {
|
||||
log.Printf("debug: target %s already in store, skipping", iqn)
|
||||
// Still sync LUNs in case they changed
|
||||
target, err := a.iscsiStore.GetByIQN(iqn)
|
||||
if err == nil {
|
||||
if err := a.syncLUNsFromOS(iqn, target.ID, target.Type); err != nil {
|
||||
log.Printf("warning: failed to sync LUNs for target %s: %v", iqn, err)
|
||||
}
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
if !exists {
|
||||
// Try to determine target type from IQN
|
||||
targetType := models.ISCSITargetTypeDisk // Default to disk mode
|
||||
if strings.Contains(strings.ToLower(currentIQN), "tape") {
|
||||
targetType = models.ISCSITargetTypeTape
|
||||
}
|
||||
// Try to determine target type from IQN
|
||||
targetType := models.ISCSITargetTypeDisk // Default to disk mode
|
||||
if strings.Contains(strings.ToLower(iqn), "tape") {
|
||||
targetType = models.ISCSITargetTypeTape
|
||||
}
|
||||
|
||||
// Create target in store
|
||||
target, err := a.iscsiStore.CreateWithType(currentIQN, targetType, []string{})
|
||||
if err != nil && err != storage.ErrISCSITargetExists {
|
||||
log.Printf("warning: failed to sync iSCSI target %s: %v", currentIQN, err)
|
||||
} else if err == nil {
|
||||
log.Printf("synced iSCSI target from OS: %s (type: %s)", currentIQN, targetType)
|
||||
// Create target in store
|
||||
target, err := a.iscsiStore.CreateWithType(iqn, targetType, []string{})
|
||||
if err != nil && err != storage.ErrISCSITargetExists {
|
||||
log.Printf("warning: failed to sync iSCSI target %s: %v", iqn, err)
|
||||
continue
|
||||
} else if err == nil {
|
||||
log.Printf("synced iSCSI target from OS: %s (type: %s)", iqn, targetType)
|
||||
|
||||
// Now try to sync LUNs for this target
|
||||
if err := a.syncLUNsFromOS(currentIQN, target.ID, targetType); err != nil {
|
||||
log.Printf("warning: failed to sync LUNs for target %s: %v", currentIQN, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
// Now try to sync LUNs for this target
|
||||
if err := a.syncLUNsFromOS(iqn, target.ID, targetType); err != nil {
|
||||
log.Printf("warning: failed to sync LUNs for target %s: %v", iqn, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -2118,119 +2120,163 @@ func (a *App) syncISCSITargetsFromOS() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// syncLUNsFromOS syncs LUNs for a specific target from targetcli
|
||||
// syncLUNsFromOS syncs LUNs for a specific target directly from sysfs (no targetcli needed)
|
||||
func (a *App) syncLUNsFromOS(iqn, targetID string, targetType models.ISCSITargetType) error {
|
||||
// Get LUNs for this target
|
||||
// Service runs as root, no need for sudo
|
||||
cmd := exec.Command("sh", "-c", "TARGETCLI_HOME=/tmp/.targetcli TARGETCLI_LOCK_DIR=/tmp/targetcli-run targetcli /iscsi/"+iqn+"/tpg1/luns ls")
|
||||
output, err := cmd.CombinedOutput()
|
||||
log.Printf("debug: syncing LUNs for target %s from sysfs", iqn)
|
||||
|
||||
// Read LUNs directly from /sys/kernel/config/target/iscsi/{iqn}/tpgt_1/lun/
|
||||
tpgtPath := fmt.Sprintf("/sys/kernel/config/target/iscsi/%s/tpgt_1/lun", iqn)
|
||||
|
||||
entries, err := os.ReadDir(tpgtPath)
|
||||
if err != nil {
|
||||
// No LUNs or can't read - that's okay, log for debugging
|
||||
log.Printf("debug: failed to list LUNs for target %s: %v (output: %s)", iqn, err, string(output))
|
||||
log.Printf("debug: no LUNs directory found for target %s: %v", iqn, err)
|
||||
return nil // No LUNs is okay
|
||||
}
|
||||
|
||||
// Get target to check existing LUNs
|
||||
target, err := a.iscsiStore.Get(targetID)
|
||||
if err != nil {
|
||||
log.Printf("warning: target %s not found in store", targetID)
|
||||
return nil
|
||||
}
|
||||
|
||||
lines := strings.Split(string(output), "\n")
|
||||
for _, line := range lines {
|
||||
line = strings.TrimSpace(line)
|
||||
if strings.HasPrefix(line, "o- lun") {
|
||||
// Parse LUN line like "o- lun0 ....................................... [block/pool-test-02-vol01 (/dev/zvol/pool-test-02/vol01) (default_tg_pt_gp)]"
|
||||
parts := strings.Fields(line)
|
||||
if len(parts) >= 2 {
|
||||
// Extract LUN ID from "lun0"
|
||||
lunIDStr := strings.TrimPrefix(parts[1], "lun")
|
||||
lunID, err := strconv.Atoi(lunIDStr)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
for _, entry := range entries {
|
||||
if !entry.IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
// Extract backstore path and device from the line
|
||||
var backstorePath string
|
||||
var devicePath string
|
||||
var zvolName string
|
||||
// Check if this is a LUN directory (starts with "lun_")
|
||||
lunDirName := entry.Name()
|
||||
if !strings.HasPrefix(lunDirName, "lun_") {
|
||||
continue
|
||||
}
|
||||
|
||||
// Find the part with brackets - might span multiple parts
|
||||
fullLine := strings.Join(parts, " ")
|
||||
start := strings.Index(fullLine, "[")
|
||||
end := strings.LastIndex(fullLine, "]")
|
||||
if start >= 0 && end > start {
|
||||
content := fullLine[start+1 : end]
|
||||
// Parse content like "block/pool-test-02-vol01 (/dev/zvol/pool-test-02/vol01)"
|
||||
if strings.Contains(content, "(") {
|
||||
// Has device path
|
||||
parts2 := strings.Split(content, "(")
|
||||
if len(parts2) >= 2 {
|
||||
backstorePath = strings.TrimSpace(parts2[0])
|
||||
devicePath = strings.Trim(strings.TrimSpace(parts2[1]), "()")
|
||||
// Extract LUN ID from "lun_0", "lun_1", etc.
|
||||
lunIDStr := strings.TrimPrefix(lunDirName, "lun_")
|
||||
lunID, err := strconv.Atoi(lunIDStr)
|
||||
if err != nil {
|
||||
log.Printf("debug: invalid LUN directory name: %s", lunDirName)
|
||||
continue
|
||||
}
|
||||
|
||||
// If device is a zvol, extract ZVOL name
|
||||
if strings.HasPrefix(devicePath, "/dev/zvol/") {
|
||||
zvolName = strings.TrimPrefix(devicePath, "/dev/zvol/")
|
||||
}
|
||||
}
|
||||
// Check if LUN already exists
|
||||
lunExists := false
|
||||
for _, lun := range target.LUNs {
|
||||
if lun.ID == lunID {
|
||||
lunExists = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if lunExists {
|
||||
log.Printf("debug: LUN %d already exists for target %s", lunID, iqn)
|
||||
continue
|
||||
}
|
||||
|
||||
// Find storage_object symlink in LUN directory
|
||||
// Structure: /sys/kernel/config/target/iscsi/{iqn}/tpgt_1/lun/lun_0/{hash} -> ../../../../../../target/core/iblock_0/{name}
|
||||
lunPath := fmt.Sprintf("%s/%s", tpgtPath, lunDirName)
|
||||
storageObjectPath := ""
|
||||
|
||||
// Look for symlink in subdirectories (the hash directory that links to backstore)
|
||||
subEntries, err := os.ReadDir(lunPath)
|
||||
if err != nil {
|
||||
log.Printf("debug: failed to read LUN directory %s: %v", lunPath, err)
|
||||
continue
|
||||
}
|
||||
|
||||
for _, subEntry := range subEntries {
|
||||
// Check if this is a symlink (the hash directory that links to backstore)
|
||||
subEntryPath := fmt.Sprintf("%s/%s", lunPath, subEntry.Name())
|
||||
fileInfo, err := os.Lstat(subEntryPath)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if it's a symlink
|
||||
if fileInfo.Mode()&os.ModeSymlink != 0 {
|
||||
if linkTarget, err := os.Readlink(subEntryPath); err == nil {
|
||||
// Resolve to absolute path
|
||||
if strings.HasPrefix(linkTarget, "/") {
|
||||
storageObjectPath = linkTarget
|
||||
} else {
|
||||
backstorePath = content
|
||||
// Relative path, resolve it
|
||||
absPath, err := filepath.Abs(fmt.Sprintf("%s/%s", lunPath, linkTarget))
|
||||
if err == nil {
|
||||
storageObjectPath = absPath
|
||||
} else {
|
||||
// Try resolving relative to parent directory
|
||||
storageObjectPath = filepath.Clean(fmt.Sprintf("%s/%s", filepath.Dir(lunPath), linkTarget))
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check if LUN already exists
|
||||
target, err := a.iscsiStore.Get(targetID)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
if storageObjectPath == "" {
|
||||
log.Printf("debug: no storage_object symlink found for LUN %d in target %s", lunID, iqn)
|
||||
continue
|
||||
}
|
||||
|
||||
lunExists := false
|
||||
for _, lun := range target.LUNs {
|
||||
if lun.ID == lunID {
|
||||
lunExists = true
|
||||
// Read device path from storage_object/udev_path
|
||||
udevPathFile := fmt.Sprintf("%s/udev_path", storageObjectPath)
|
||||
devicePath := ""
|
||||
if udevPathBytes, err := os.ReadFile(udevPathFile); err == nil {
|
||||
devicePath = strings.TrimSpace(string(udevPathBytes))
|
||||
} else {
|
||||
log.Printf("debug: failed to read udev_path from %s: %v", udevPathFile, err)
|
||||
}
|
||||
|
||||
// Determine backstore type from path
|
||||
backstoreType := "block"
|
||||
if strings.Contains(storageObjectPath, "/pscsi/") {
|
||||
backstoreType = "pscsi"
|
||||
} else if strings.Contains(storageObjectPath, "/fileio/") {
|
||||
backstoreType = "fileio"
|
||||
}
|
||||
|
||||
// Extract ZVOL name if device is a zvol
|
||||
var zvolName string
|
||||
var size uint64
|
||||
if strings.HasPrefix(devicePath, "/dev/zvol/") {
|
||||
zvolName = strings.TrimPrefix(devicePath, "/dev/zvol/")
|
||||
// Get size from ZFS
|
||||
zvols, err := a.zfs.ListZVOLs("")
|
||||
if err == nil {
|
||||
for _, zvol := range zvols {
|
||||
if zvol.Name == zvolName {
|
||||
size = zvol.Size
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if !lunExists {
|
||||
// Determine backstore type
|
||||
backstoreType := "block"
|
||||
if strings.HasPrefix(backstorePath, "pscsi/") {
|
||||
backstoreType = "pscsi"
|
||||
} else if strings.HasPrefix(backstorePath, "fileio/") {
|
||||
backstoreType = "fileio"
|
||||
}
|
||||
|
||||
// Get size if it's a ZVOL
|
||||
var size uint64
|
||||
if zvolName != "" {
|
||||
zvols, err := a.zfs.ListZVOLs("")
|
||||
if err == nil {
|
||||
for _, zvol := range zvols {
|
||||
if zvol.Name == zvolName {
|
||||
size = zvol.Size
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Add LUN to store
|
||||
if targetType == models.ISCSITargetTypeTape && devicePath != "" {
|
||||
// Tape mode: use device
|
||||
_, err := a.iscsiStore.AddLUNWithDevice(targetID, "", devicePath, size, backstoreType, "")
|
||||
if err != nil && err != storage.ErrLUNExists {
|
||||
log.Printf("warning: failed to sync LUN %d for target %s: %v", lunID, iqn, err)
|
||||
}
|
||||
} else if zvolName != "" {
|
||||
// Disk mode: use ZVOL
|
||||
_, err := a.iscsiStore.AddLUNWithDevice(targetID, zvolName, "", size, backstoreType, "")
|
||||
if err != nil && err != storage.ErrLUNExists {
|
||||
log.Printf("warning: failed to sync LUN %d for target %s: %v", lunID, iqn, err)
|
||||
}
|
||||
} else if devicePath != "" {
|
||||
// Generic device
|
||||
_, err := a.iscsiStore.AddLUNWithDevice(targetID, "", devicePath, size, backstoreType, "")
|
||||
if err != nil && err != storage.ErrLUNExists {
|
||||
log.Printf("warning: failed to sync LUN %d for target %s: %v", lunID, iqn, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
// Add LUN to store
|
||||
if targetType == models.ISCSITargetTypeTape && devicePath != "" {
|
||||
// Tape mode: use device
|
||||
_, err := a.iscsiStore.AddLUNWithDevice(targetID, "", devicePath, size, backstoreType, "")
|
||||
if err != nil && err != storage.ErrLUNExists {
|
||||
log.Printf("warning: failed to sync LUN %d for target %s: %v", lunID, iqn, err)
|
||||
} else if err == nil {
|
||||
log.Printf("synced LUN %d from OS for target %s (device: %s)", lunID, iqn, devicePath)
|
||||
}
|
||||
} else if zvolName != "" {
|
||||
// Disk mode: use ZVOL
|
||||
_, err := a.iscsiStore.AddLUNWithDevice(targetID, zvolName, "", size, backstoreType, "")
|
||||
if err != nil && err != storage.ErrLUNExists {
|
||||
log.Printf("warning: failed to sync LUN %d for target %s: %v", lunID, iqn, err)
|
||||
} else if err == nil {
|
||||
log.Printf("synced LUN %d from OS for target %s (zvol: %s)", lunID, iqn, zvolName)
|
||||
}
|
||||
} else if devicePath != "" {
|
||||
// Generic device
|
||||
_, err := a.iscsiStore.AddLUNWithDevice(targetID, "", devicePath, size, backstoreType, "")
|
||||
if err != nil && err != storage.ErrLUNExists {
|
||||
log.Printf("warning: failed to sync LUN %d for target %s: %v", lunID, iqn, err)
|
||||
} else if err == nil {
|
||||
log.Printf("synced LUN %d from OS for target %s (device: %s)", lunID, iqn, devicePath)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -47,6 +47,7 @@ type App struct {
	smbService       *services.SMBService
	nfsService       *services.NFSService
	iscsiService     *services.ISCSIService
	vtlService       *services.VTLService
	metricsCollector *metrics.Collector
	startTime        time.Time
	backupService    *backup.Service
@@ -128,6 +129,7 @@ func New(cfg Config) (*App, error) {
	smbService := services.NewSMBService()
	nfsService := services.NewNFSService()
	iscsiService := services.NewISCSIService()
	vtlService := services.NewVTLService()

	// Initialize metrics collector
	metricsCollector := metrics.NewCollector()
@@ -170,6 +172,7 @@ func New(cfg Config) (*App, error) {
	smbService:       smbService,
	nfsService:       nfsService,
	iscsiService:     iscsiService,
	vtlService:       vtlService,
	metricsCollector: metricsCollector,
	startTime:        startTime,
	backupService:    backupService,

@@ -113,6 +113,7 @@ func (a *App) isPublicEndpoint(path, method string) bool {
|
||||
"/storage", // Storage management page
|
||||
"/shares", // Shares page
|
||||
"/iscsi", // iSCSI page
|
||||
"/vtl", // VTL (Virtual Tape Library) page
|
||||
"/protection", // Data Protection page
|
||||
"/management", // System Management page
|
||||
"/api/docs", // API documentation
|
||||
@@ -138,17 +139,22 @@ func (a *App) isPublicEndpoint(path, method string) bool {
|
||||
// SECURITY: Only GET requests are allowed without authentication
|
||||
// POST, PUT, DELETE, PATCH require authentication
|
||||
publicReadOnlyPaths := []string{
|
||||
"/api/v1/dashboard", // Dashboard data
|
||||
"/api/v1/disks", // List disks
|
||||
"/api/v1/pools", // List pools (GET only)
|
||||
"/api/v1/pools/available", // List available pools
|
||||
"/api/v1/datasets", // List datasets (GET only)
|
||||
"/api/v1/zvols", // List ZVOLs (GET only)
|
||||
"/api/v1/shares/smb", // List SMB shares (GET only)
|
||||
"/api/v1/exports/nfs", // List NFS exports (GET only)
|
||||
"/api/v1/iscsi/targets", // List iSCSI targets (GET only)
|
||||
"/api/v1/snapshots", // List snapshots (GET only)
|
||||
"/api/v1/snapshot-policies", // List snapshot policies (GET only)
|
||||
"/api/v1/dashboard", // Dashboard data
|
||||
"/api/v1/disks", // List disks
|
||||
"/api/v1/pools", // List pools (GET only)
|
||||
"/api/v1/pools/available", // List available pools
|
||||
"/api/v1/datasets", // List datasets (GET only)
|
||||
"/api/v1/zvols", // List ZVOLs (GET only)
|
||||
"/api/v1/shares/smb", // List SMB shares (GET only)
|
||||
"/api/v1/exports/nfs", // List NFS exports (GET only)
|
||||
"/api/v1/iscsi/targets", // List iSCSI targets (GET only)
|
||||
"/api/v1/vtl/status", // VTL status (GET only)
|
||||
"/api/v1/vtl/drives", // List VTL drives (GET only)
|
||||
"/api/v1/vtl/tapes", // List VTL tapes (GET only)
|
||||
"/api/v1/vtl/changers", // List VTL media changers (GET only)
|
||||
"/api/v1/vtl/changer/status", // VTL media changer status (GET only)
|
||||
"/api/v1/snapshots", // List snapshots (GET only)
|
||||
"/api/v1/snapshot-policies", // List snapshot policies (GET only)
|
||||
}
|
||||
|
||||
for _, publicPath := range publicReadOnlyPaths {
|
||||
|
||||
@@ -24,6 +24,9 @@ type DashboardData struct {
		SMBStatus   bool `json:"smb_status"`
		NFSStatus   bool `json:"nfs_status"`
		ISCSIStatus bool `json:"iscsi_status"`
		VTLStatus   bool `json:"vtl_status"`
		VTLDrives   int  `json:"vtl_drives"`
		VTLTapes    int  `json:"vtl_tapes"`
	} `json:"services"`
	Jobs struct {
		Total int `json:"total"`
@@ -93,6 +96,16 @@ func (a *App) handleDashboardAPI(w http.ResponseWriter, r *http.Request) {
		data.Services.ISCSIStatus, _ = a.iscsiService.GetStatus()
	}

	// VTL status
	if a.vtlService != nil {
		vtlStatus, err := a.vtlService.GetStatus()
		if err == nil {
			data.Services.VTLStatus = vtlStatus.ServiceRunning
			data.Services.VTLDrives = vtlStatus.DrivesOnline
			data.Services.VTLTapes = vtlStatus.TapesAvailable
		}
	}

	// Job statistics
	allJobs := a.jobManager.List("")
	data.Jobs.Total = len(allJobs)

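The dashboard code above relies on three fields from `vtlService.GetStatus()`. That status type lives in the services package and is not part of this diff; based purely on the fields used here, it has at least roughly this shape (a sketch, not the actual definition):

```go
// Sketch inferred from the fields used in handleDashboardAPI above; the real
// services.VTLStatus type is not shown in this diff and may carry more fields.
package services

type VTLStatus struct {
	ServiceRunning bool // whether the mhvtl service is active
	DrivesOnline   int  // virtual tape drives currently visible
	TapesAvailable int  // virtual tape cartridges in the inventory
}
```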
@@ -80,6 +80,17 @@ func (a *App) handleManagement(w http.ResponseWriter, r *http.Request) {
|
||||
a.render(w, "management.html", data)
|
||||
}
|
||||
|
||||
func (a *App) handleVTL(w http.ResponseWriter, r *http.Request) {
|
||||
data := map[string]any{
|
||||
"Title": "Virtual Tape Library",
|
||||
"Build": map[string]string{
|
||||
"version": "v0.1.0-dev",
|
||||
},
|
||||
"ContentTemplate": "vtl-content",
|
||||
}
|
||||
a.render(w, "vtl.html", data)
|
||||
}
|
||||
|
||||
func (a *App) handleLoginPage(w http.ResponseWriter, r *http.Request) {
|
||||
data := map[string]any{
|
||||
"Title": "Login",
|
||||
|
||||
@@ -1,11 +1,14 @@
|
||||
package httpapp
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/errors"
|
||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/models"
|
||||
)
|
||||
|
||||
// methodHandler routes requests based on HTTP method
|
||||
@@ -85,8 +88,9 @@ func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
|
||||
if strings.HasSuffix(r.URL.Path, "/scrub") {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
if r.Method == http.MethodPost {
|
||||
a.handleScrubPool(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleScrubPool)).ServeHTTP(w, r)
|
||||
} else if r.Method == http.MethodGet {
|
||||
a.handleGetScrubStatus(w, r)
|
||||
} else {
|
||||
@@ -96,8 +100,9 @@ func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
|
||||
}
|
||||
|
||||
if strings.HasSuffix(r.URL.Path, "/export") {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
if r.Method == http.MethodPost {
|
||||
a.handleExportPool(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleExportPool)).ServeHTTP(w, r)
|
||||
} else {
|
||||
writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
|
||||
}
|
||||
@@ -106,50 +111,67 @@ func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
if strings.HasSuffix(r.URL.Path, "/spare") {
|
||||
if r.Method == http.MethodPost {
|
||||
a.handleAddSpareDisk(w, r)
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleAddSpareDisk)).ServeHTTP(w, r)
|
||||
} else {
|
||||
writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetPool(w, r) },
|
||||
nil,
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeletePool(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeletePool)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleDatasetOps routes dataset operations by method
|
||||
func (a *App) handleDatasetOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetDataset(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateDataset(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleUpdateDataset(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteDataset(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateDataset)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateDataset)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteDataset)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleZVOLOps routes ZVOL operations by method
|
||||
func (a *App) handleZVOLOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetZVOL(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateZVOL(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateZVOL)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteZVOL(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteZVOL)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleSnapshotOps routes snapshot operations by method
|
||||
func (a *App) handleSnapshotOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
// Check if it's a restore operation
|
||||
if strings.HasSuffix(r.URL.Path, "/restore") {
|
||||
if r.Method == http.MethodPost {
|
||||
a.handleRestoreSnapshot(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleRestoreSnapshot)).ServeHTTP(w, r)
|
||||
} else {
|
||||
writeError(w, errors.ErrBadRequest("method not allowed"))
|
||||
}
|
||||
@@ -158,42 +180,67 @@ func (a *App) handleSnapshotOps(w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetSnapshot(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshot(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshot)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteSnapshot(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteSnapshot)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleSnapshotPolicyOps routes snapshot policy operations by method
|
||||
func (a *App) handleSnapshotPolicyOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetSnapshotPolicy(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshotPolicy(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleUpdateSnapshotPolicy(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteSnapshotPolicy(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshotPolicy)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateSnapshotPolicy)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteSnapshotPolicy)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleSMBShareOps routes SMB share operations by method
|
||||
func (a *App) handleSMBShareOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetSMBShare(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateSMBShare(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleUpdateSMBShare(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteSMBShare(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSMBShare)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateSMBShare)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteSMBShare)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleNFSExportOps routes NFS export operations by method
|
||||
func (a *App) handleNFSExportOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetNFSExport(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateNFSExport(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleUpdateNFSExport(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteNFSExport(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateNFSExport)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateNFSExport)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteNFSExport)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
@@ -206,6 +253,7 @@ func (a *App) handleBackupOps(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
switch r.Method {
|
||||
case http.MethodGet:
|
||||
// Check if it's a verify request
|
||||
@@ -217,12 +265,12 @@ func (a *App) handleBackupOps(w http.ResponseWriter, r *http.Request) {
|
||||
case http.MethodPost:
|
||||
// Restore backup (POST /api/v1/backups/{id}/restore)
|
||||
if strings.HasSuffix(r.URL.Path, "/restore") {
|
||||
a.handleRestoreBackup(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleRestoreBackup)).ServeHTTP(w, r)
|
||||
} else {
|
||||
writeError(w, errors.ErrBadRequest("invalid backup operation"))
|
||||
}
|
||||
case http.MethodDelete:
|
||||
a.handleDeleteBackup(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteBackup)).ServeHTTP(w, r)
|
||||
default:
|
||||
writeError(w, errors.ErrBadRequest("method not allowed"))
|
||||
}
|
||||
@@ -244,9 +292,10 @@ func (a *App) handleISCSITargetOps(w http.ResponseWriter, r *http.Request) {
|
||||
return
|
||||
}
|
||||
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
if strings.HasSuffix(r.URL.Path, "/luns") {
|
||||
if r.Method == http.MethodPost {
|
||||
a.handleAddLUN(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleAddLUN)).ServeHTTP(w, r)
|
||||
return
|
||||
}
|
||||
writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
|
||||
@@ -255,7 +304,7 @@ func (a *App) handleISCSITargetOps(w http.ResponseWriter, r *http.Request) {
|
||||
|
||||
if strings.HasSuffix(r.URL.Path, "/luns/remove") {
|
||||
if r.Method == http.MethodPost {
|
||||
a.handleRemoveLUN(w, r)
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleRemoveLUN)).ServeHTTP(w, r)
|
||||
return
|
||||
}
|
||||
writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
|
||||
@@ -265,8 +314,12 @@ func (a *App) handleISCSITargetOps(w http.ResponseWriter, r *http.Request) {
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetISCSITarget(w, r) },
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleUpdateISCSITarget(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleDeleteISCSITarget(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateISCSITarget)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteISCSITarget)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
@@ -301,3 +354,71 @@ func (a *App) handleUserOps(w http.ResponseWriter, r *http.Request) {
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleVTLDriveOps routes VTL drive operations by method
|
||||
func (a *App) handleVTLDriveOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetVTLDrive(w, r) },
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateVTLDrive)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteVTLDrive)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleVTLTapeOps routes VTL tape operations by method
|
||||
func (a *App) handleVTLTapeOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetVTLTape(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateVTLTape)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteVTLTape)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
// handleMediaChangerOps routes media changer operations by method
|
||||
func (a *App) handleMediaChangerOps(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
// Get single changer by ID
|
||||
libraryIDStr := pathParam(r, "/api/v1/vtl/changers/")
|
||||
libraryID, err := strconv.Atoi(libraryIDStr)
|
||||
if err != nil || libraryID <= 0 {
|
||||
writeError(w, errors.ErrValidation("invalid library_id"))
|
||||
return
|
||||
}
|
||||
changers, err := a.vtlService.ListMediaChangers()
|
||||
if err != nil {
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list changers: %v", err)))
|
||||
return
|
||||
}
|
||||
for _, changer := range changers {
|
||||
if changer.LibraryID == libraryID {
|
||||
writeJSON(w, http.StatusOK, changer)
|
||||
return
|
||||
}
|
||||
}
|
||||
writeError(w, errors.ErrNotFound(fmt.Sprintf("media changer %d not found", libraryID)))
|
||||
},
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleUpdateMediaChanger)).ServeHTTP(w, r)
|
||||
},
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleDeleteMediaChanger)).ServeHTTP(w, r)
|
||||
},
|
||||
nil,
|
||||
)(w, r)
|
||||
}
|
||||
|
||||
@@ -20,6 +20,7 @@ func (a *App) routes() {
|
||||
a.mux.HandleFunc("/iscsi", a.handleISCSI)
|
||||
a.mux.HandleFunc("/protection", a.handleProtection)
|
||||
a.mux.HandleFunc("/management", a.handleManagement)
|
||||
a.mux.HandleFunc("/vtl", a.handleVTL)
|
||||
|
||||
// Health & metrics
|
||||
a.mux.HandleFunc("/healthz", a.handleHealthz)
|
||||
@@ -64,9 +65,14 @@ func (a *App) routes() {
|
||||
a.mux.HandleFunc("/api/openapi.yaml", a.handleOpenAPISpec)
|
||||
|
||||
// Backup & Restore
|
||||
// Define allowed roles for storage operations (Administrator and Operator, not Viewer)
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
|
||||
a.mux.HandleFunc("/api/v1/backups", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListBackups(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateBackup(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateBackup)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/backups/", a.handleBackupOps)
|
||||
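Throughout these route registrations, handlers are passed to a `methodHandler` helper in positional slots, with `nil` for unsupported methods. Its definition is not included in this diff; judging only from how it is called (five handler arguments, one per HTTP method), a plausible sketch looks like the following. Treat it as an assumption about the internal API, not the actual Atlas code.

```go
// Plausible sketch of methodHandler based on its call sites in routes.go;
// the actual Atlas implementation is not shown in this diff.
package httpapp

import "net/http"

func methodHandler(get, post, put, del, patch http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var h http.HandlerFunc
		switch r.Method {
		case http.MethodGet:
			h = get
		case http.MethodPost:
			h = post
		case http.MethodPut:
			h = put
		case http.MethodDelete:
			h = del
		case http.MethodPatch:
			h = patch
		}
		if h == nil {
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
			return
		}
		h(w, r)
	}
}
```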
@@ -84,7 +90,9 @@ func (a *App) routes() {
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/pools", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListPools(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreatePool(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreatePool)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/pools/available", methodHandler(
|
||||
@@ -93,21 +101,27 @@ func (a *App) routes() {
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/pools/import", methodHandler(
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleImportPool(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleImportPool)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/pools/", a.handlePoolOps)
|
||||
|
||||
a.mux.HandleFunc("/api/v1/datasets", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListDatasets(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateDataset(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateDataset)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/datasets/", a.handleDatasetOps)
|
||||
|
||||
a.mux.HandleFunc("/api/v1/zvols", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListZVOLs(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateZVOL(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateZVOL)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/zvols/", a.handleZVOLOps)
|
||||
@@ -115,13 +129,17 @@ func (a *App) routes() {
|
||||
// Snapshot Management
|
||||
a.mux.HandleFunc("/api/v1/snapshots", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListSnapshots(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshot(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshot)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/snapshots/", a.handleSnapshotOps)
|
||||
a.mux.HandleFunc("/api/v1/snapshot-policies", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListSnapshotPolicies(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateSnapshotPolicy(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSnapshotPolicy)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/snapshot-policies/", a.handleSnapshotPolicyOps)
|
||||
@@ -129,7 +147,9 @@ func (a *App) routes() {
|
||||
// Storage Services - SMB
|
||||
a.mux.HandleFunc("/api/v1/shares/smb", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListSMBShares(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateSMBShare(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateSMBShare)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/shares/smb/", a.handleSMBShareOps)
|
||||
@@ -137,7 +157,9 @@ func (a *App) routes() {
|
||||
// Storage Services - NFS
|
||||
a.mux.HandleFunc("/api/v1/exports/nfs", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListNFSExports(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateNFSExport(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateNFSExport)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/exports/nfs/", a.handleNFSExportOps)
|
||||
@@ -145,11 +167,73 @@ func (a *App) routes() {
|
||||
// Storage Services - iSCSI
|
||||
a.mux.HandleFunc("/api/v1/iscsi/targets", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListISCSITargets(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleCreateISCSITarget(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateISCSITarget)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/iscsi/targets/", a.handleISCSITargetOps)
|
||||
|
||||
// Storage Services - VTL (Virtual Tape Library)
|
||||
a.mux.HandleFunc("/api/v1/vtl/status", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetVTLStatus(w, r) },
|
||||
nil, nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/drives", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListVTLDrives(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
storageRoles := []models.Role{models.RoleAdministrator, models.RoleOperator}
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateVTLDrive)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/drives/", a.handleVTLDriveOps)
|
||||
a.mux.HandleFunc("/api/v1/vtl/tapes", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListVTLTapes(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateVTLTape)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/tapes/", a.handleVTLTapeOps)
|
||||
a.mux.HandleFunc("/api/v1/vtl/service", methodHandler(
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleVTLServiceControl)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/changers", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListVTLMediaChangers(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleCreateMediaChanger)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/changers/", a.handleMediaChangerOps)
|
||||
a.mux.HandleFunc("/api/v1/vtl/devices/iscsi", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListVTLDevicesForISCSI(w, r) },
|
||||
nil, nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/changer/status", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleGetVTLMediaChangerStatus(w, r) },
|
||||
nil, nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/tape/load", methodHandler(
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleLoadTape)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
a.mux.HandleFunc("/api/v1/vtl/tape/eject", methodHandler(
|
||||
nil,
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
a.requireRole(storageRoles...)(http.HandlerFunc(a.handleEjectTape)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
|
||||
// Job Management
|
||||
a.mux.HandleFunc("/api/v1/jobs", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListJobs(w, r) },
|
||||
|
||||
513
internal/httpapp/vtl_handlers.go
Normal file
513
internal/httpapp/vtl_handlers.go
Normal file
@@ -0,0 +1,513 @@
|
||||
package httpapp
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"strconv"
|
||||
|
||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/errors"
|
||||
)
|
||||
|
||||
// VTL API Handlers
|
||||
|
||||
// handleGetVTLStatus returns the overall VTL system status
|
||||
func (a *App) handleGetVTLStatus(w http.ResponseWriter, r *http.Request) {
|
||||
status, err := a.vtlService.GetStatus()
|
||||
if err != nil {
|
||||
log.Printf("get VTL status error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to get VTL status: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, status)
|
||||
}
|
||||
|
||||
// handleListVTLDrives returns all virtual tape drives
|
||||
func (a *App) handleListVTLDrives(w http.ResponseWriter, r *http.Request) {
|
||||
drives, err := a.vtlService.ListDrives()
|
||||
if err != nil {
|
||||
log.Printf("list VTL drives error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list VTL drives: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, drives)
|
||||
}
|
||||
|
||||
// handleListVTLTapes returns all virtual tapes
|
||||
func (a *App) handleListVTLTapes(w http.ResponseWriter, r *http.Request) {
|
||||
tapes, err := a.vtlService.ListTapes()
|
||||
if err != nil {
|
||||
log.Printf("list VTL tapes error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list VTL tapes: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, tapes)
|
||||
}
|
||||
|
||||
// handleCreateVTLTape creates a new virtual tape
|
||||
func (a *App) handleCreateVTLTape(w http.ResponseWriter, r *http.Request) {
|
||||
var req struct {
|
||||
Barcode string `json:"barcode"`
|
||||
Type string `json:"type"` // e.g., "LTO-5", "LTO-6"
|
||||
Size uint64 `json:"size"` // Size in bytes (0 = default, will use generation-based size)
|
||||
LibraryID int `json:"library_id"` // Library ID where tape will be placed
|
||||
SlotID int `json:"slot_id"` // Slot ID in library where tape will be placed
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if req.Barcode == "" {
|
||||
writeError(w, errors.ErrValidation("barcode is required"))
|
||||
return
|
||||
}
|
||||
|
||||
if req.LibraryID <= 0 {
|
||||
writeError(w, errors.ErrValidation("library_id is required and must be greater than 0"))
|
||||
return
|
||||
}
|
||||
|
||||
if req.SlotID <= 0 {
|
||||
writeError(w, errors.ErrValidation("slot_id is required and must be greater than 0"))
|
||||
return
|
||||
}
|
||||
|
||||
if req.Type == "" {
|
||||
// Will be determined from barcode suffix if not provided
|
||||
req.Type = ""
|
||||
}
|
||||
|
||||
if err := a.vtlService.CreateTape(req.Barcode, req.Type, req.Size, req.LibraryID, req.SlotID); err != nil {
|
||||
log.Printf("create VTL tape error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to create VTL tape: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusCreated, map[string]string{
|
||||
"message": "Virtual tape created successfully",
|
||||
"barcode": req.Barcode,
|
||||
})
|
||||
}
|
||||
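Given the request fields validated above and the `/api/v1/vtl/tapes` route registered earlier in this diff, creating a tape from a client could look like the following sketch. The host, port, barcode, and other values are placeholders, and the Administrator/Operator authentication this endpoint requires is omitted for brevity.

```go
// Illustrative client call against the VTL tape-creation endpoint shown above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"barcode":    "ATL001L6", // example barcode; type may also be derived from the suffix
		"type":       "LTO-6",
		"size":       0, // 0 = use the generation-based default size
		"library_id": 1,
		"slot_id":    1,
	})
	resp, err := http.Post("http://localhost:8080/api/v1/vtl/tapes", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```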
|
||||
// handleDeleteVTLTape deletes a virtual tape
|
||||
func (a *App) handleDeleteVTLTape(w http.ResponseWriter, r *http.Request) {
|
||||
barcode := pathParam(r, "barcode")
|
||||
if barcode == "" {
|
||||
writeError(w, errors.ErrValidation("barcode is required"))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.DeleteTape(barcode); err != nil {
|
||||
log.Printf("delete VTL tape error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to delete VTL tape: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]string{
|
||||
"message": "Virtual tape deleted successfully",
|
||||
"barcode": barcode,
|
||||
})
|
||||
}
|
||||
|
||||
// handleVTLServiceControl controls the mhvtl service (start/stop/restart)
|
||||
func (a *App) handleVTLServiceControl(w http.ResponseWriter, r *http.Request) {
|
||||
var req struct {
|
||||
Action string `json:"action"` // "start", "stop", "restart"
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
var err error
|
||||
switch req.Action {
|
||||
case "start":
|
||||
err = a.vtlService.StartService()
|
||||
case "stop":
|
||||
err = a.vtlService.StopService()
|
||||
case "restart":
|
||||
err = a.vtlService.RestartService()
|
||||
default:
|
||||
writeError(w, errors.ErrValidation("invalid action: must be 'start', 'stop', or 'restart'"))
|
||||
return
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
log.Printf("VTL service control error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to %s VTL service: %v", req.Action, err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]string{
|
||||
"message": "VTL service " + req.Action + "ed successfully",
|
||||
"action": req.Action,
|
||||
})
|
||||
}
|
||||
|
||||
// handleGetVTLDrive returns a specific drive by ID
|
||||
func (a *App) handleGetVTLDrive(w http.ResponseWriter, r *http.Request) {
|
||||
driveIDStr := pathParam(r, "id")
|
||||
driveID, err := strconv.Atoi(driveIDStr)
|
||||
if err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid drive ID: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
drives, err := a.vtlService.ListDrives()
|
||||
if err != nil {
|
||||
log.Printf("list VTL drives error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list VTL drives: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
for _, drive := range drives {
|
||||
if drive.ID == driveID {
|
||||
writeJSON(w, http.StatusOK, drive)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
writeError(w, errors.ErrNotFound("drive not found"))
|
||||
}
|
||||
|
||||
// handleGetVTLTape returns a specific tape by barcode
|
||||
func (a *App) handleGetVTLTape(w http.ResponseWriter, r *http.Request) {
|
||||
barcode := pathParam(r, "barcode")
|
||||
if barcode == "" {
|
||||
writeError(w, errors.ErrValidation("barcode is required"))
|
||||
return
|
||||
}
|
||||
|
||||
tapes, err := a.vtlService.ListTapes()
|
||||
if err != nil {
|
||||
log.Printf("list VTL tapes error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list VTL tapes: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
for _, tape := range tapes {
|
||||
if tape.Barcode == barcode {
|
||||
writeJSON(w, http.StatusOK, tape)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
writeError(w, errors.ErrNotFound("tape not found"))
|
||||
}
|
||||
|
||||
// handleListVTLMediaChangers returns all media changers
|
||||
func (a *App) handleListVTLMediaChangers(w http.ResponseWriter, r *http.Request) {
|
||||
changers, err := a.vtlService.ListMediaChangers()
|
||||
if err != nil {
|
||||
log.Printf("list VTL media changers error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to list VTL media changers: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, changers)
|
||||
}
|
||||
|
||||
// handleGetVTLMediaChangerStatus returns media changer status (all changers)
|
||||
func (a *App) handleGetVTLMediaChangerStatus(w http.ResponseWriter, r *http.Request) {
|
||||
changers, err := a.vtlService.ListMediaChangers()
|
||||
if err != nil {
|
||||
log.Printf("get VTL media changer status error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to get VTL media changer status: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
	// Return 404 if no media changers exist
|
||||
if len(changers) == 0 {
|
||||
writeError(w, errors.ErrNotFound("no media changer found"))
|
||||
return
|
||||
}
|
||||
|
||||
// Return all changers as array
|
||||
writeJSON(w, http.StatusOK, changers)
|
||||
}
|
||||
|
||||
// handleLoadTape loads a tape into a drive
|
||||
func (a *App) handleLoadTape(w http.ResponseWriter, r *http.Request) {
|
||||
var req struct {
|
||||
DriveID int `json:"drive_id"`
|
||||
Barcode string `json:"barcode"`
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if req.Barcode == "" {
|
||||
writeError(w, errors.ErrValidation("barcode is required"))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.LoadTape(req.DriveID, req.Barcode); err != nil {
|
||||
log.Printf("load tape error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to load tape: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]string{
|
||||
"message": "Tape loaded successfully",
|
||||
"barcode": req.Barcode,
|
||||
"drive_id": fmt.Sprintf("%d", req.DriveID),
|
||||
})
|
||||
}
|
||||
|
||||
// handleEjectTape ejects a tape from a drive
|
||||
func (a *App) handleEjectTape(w http.ResponseWriter, r *http.Request) {
|
||||
var req struct {
|
||||
DriveID int `json:"drive_id"`
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.EjectTape(req.DriveID); err != nil {
|
||||
log.Printf("eject tape error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to eject tape: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]string{
|
||||
"message": "Tape ejected successfully",
|
||||
"drive_id": fmt.Sprintf("%d", req.DriveID),
|
||||
})
|
||||
}
|
||||
|
||||
// handleCreateMediaChanger creates a new media changer/library
|
||||
func (a *App) handleCreateMediaChanger(w http.ResponseWriter, r *http.Request) {
|
||||
var req struct {
|
||||
LibraryID int `json:"library_id"`
|
||||
Vendor string `json:"vendor"`
|
||||
Product string `json:"product"`
|
||||
Serial string `json:"serial"`
|
||||
NumSlots int `json:"num_slots"`
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if req.LibraryID <= 0 {
|
||||
writeError(w, errors.ErrValidation("library_id must be greater than 0"))
|
||||
return
|
||||
}
|
||||
|
||||
if req.NumSlots <= 0 {
|
||||
req.NumSlots = 10 // Default number of slots
|
||||
}
|
||||
|
||||
if err := a.vtlService.AddMediaChanger(req.LibraryID, req.Vendor, req.Product, req.Serial, req.NumSlots); err != nil {
|
||||
log.Printf("create media changer error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to create media changer: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusCreated, map[string]interface{}{
|
||||
"message": "Media changer created successfully",
|
||||
"library_id": req.LibraryID,
|
||||
})
|
||||
}
|
||||
|
||||
// handleUpdateMediaChanger updates a media changer/library configuration
|
||||
func (a *App) handleUpdateMediaChanger(w http.ResponseWriter, r *http.Request) {
|
||||
	libraryIDStr := pathParam(r, "id")
|
||||
libraryID, err := strconv.Atoi(libraryIDStr)
|
||||
if err != nil || libraryID <= 0 {
|
||||
writeError(w, errors.ErrValidation("invalid library_id"))
|
||||
return
|
||||
}
|
||||
|
||||
var req struct {
|
||||
Vendor string `json:"vendor"`
|
||||
Product string `json:"product"`
|
||||
Serial string `json:"serial"`
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.UpdateMediaChanger(libraryID, req.Vendor, req.Product, req.Serial); err != nil {
|
||||
log.Printf("update media changer error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to update media changer: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]interface{}{
|
||||
"message": "Media changer updated successfully",
|
||||
"library_id": libraryID,
|
||||
})
|
||||
}
|
||||
|
||||
// handleDeleteMediaChanger removes a media changer/library
|
||||
func (a *App) handleDeleteMediaChanger(w http.ResponseWriter, r *http.Request) {
|
||||
	libraryIDStr := pathParam(r, "id")
|
||||
libraryID, err := strconv.Atoi(libraryIDStr)
|
||||
if err != nil || libraryID <= 0 {
|
||||
writeError(w, errors.ErrValidation("invalid library_id"))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.RemoveMediaChanger(libraryID); err != nil {
|
||||
log.Printf("delete media changer error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to delete media changer: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]interface{}{
|
||||
"message": "Media changer deleted successfully",
|
||||
"library_id": libraryID,
|
||||
})
|
||||
}
|
||||
|
||||
// handleCreateVTLDrive creates a new drive
|
||||
func (a *App) handleCreateVTLDrive(w http.ResponseWriter, r *http.Request) {
|
||||
var req struct {
|
||||
DriveID int `json:"drive_id"`
|
||||
LibraryID int `json:"library_id"`
|
||||
SlotID int `json:"slot_id"`
|
||||
Vendor string `json:"vendor"`
|
||||
Product string `json:"product"`
|
||||
Serial string `json:"serial"`
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if req.DriveID <= 0 {
|
||||
writeError(w, errors.ErrValidation("drive_id must be greater than 0"))
|
||||
return
|
||||
}
|
||||
|
||||
if req.LibraryID <= 0 {
|
||||
writeError(w, errors.ErrValidation("library_id must be greater than 0"))
|
||||
return
|
||||
}
|
||||
|
||||
if req.SlotID <= 0 {
|
||||
writeError(w, errors.ErrValidation("slot_id must be greater than 0"))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.AddDrive(req.DriveID, req.LibraryID, req.SlotID, req.Vendor, req.Product, req.Serial); err != nil {
|
||||
log.Printf("create VTL drive error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to create VTL drive: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusCreated, map[string]interface{}{
|
||||
"message": "Drive created successfully",
|
||||
"drive_id": req.DriveID,
|
||||
})
|
||||
}
|
||||
|
||||
// handleUpdateVTLDrive updates a drive configuration
|
||||
func (a *App) handleUpdateVTLDrive(w http.ResponseWriter, r *http.Request) {
|
||||
driveIDStr := pathParam(r, "id")
|
||||
driveID, err := strconv.Atoi(driveIDStr)
|
||||
if err != nil || driveID <= 0 {
|
||||
writeError(w, errors.ErrValidation("invalid drive_id"))
|
||||
return
|
||||
}
|
||||
|
||||
var req struct {
|
||||
LibraryID int `json:"library_id"`
|
||||
SlotID int `json:"slot_id"`
|
||||
Vendor string `json:"vendor"`
|
||||
Product string `json:"product"`
|
||||
Serial string `json:"serial"`
|
||||
}
|
||||
|
||||
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||
writeError(w, errors.ErrValidation(fmt.Sprintf("invalid request body: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.UpdateDrive(driveID, req.LibraryID, req.SlotID, req.Vendor, req.Product, req.Serial); err != nil {
|
||||
log.Printf("update VTL drive error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to update VTL drive: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]string{
|
||||
"message": "Drive updated successfully",
|
||||
"drive_id": fmt.Sprintf("%d", driveID),
|
||||
})
|
||||
}
|
||||
|
||||
// handleDeleteVTLDrive removes a drive
|
||||
func (a *App) handleDeleteVTLDrive(w http.ResponseWriter, r *http.Request) {
|
||||
driveIDStr := pathParam(r, "id")
|
||||
driveID, err := strconv.Atoi(driveIDStr)
|
||||
if err != nil || driveID <= 0 {
|
||||
writeError(w, errors.ErrValidation("invalid drive_id"))
|
||||
return
|
||||
}
|
||||
|
||||
if err := a.vtlService.RemoveDrive(driveID); err != nil {
|
||||
log.Printf("delete VTL drive error: %v", err)
|
||||
writeError(w, errors.ErrInternal(fmt.Sprintf("failed to delete VTL drive: %v", err)))
|
||||
return
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, map[string]string{
|
||||
"message": "Drive deleted successfully",
|
||||
"drive_id": fmt.Sprintf("%d", driveID),
|
||||
})
|
||||
}
|
||||
|
||||
// handleListVTLDevicesForISCSI returns all tape devices (drives and medium changers) for iSCSI passthrough
|
||||
func (a *App) handleListVTLDevicesForISCSI(w http.ResponseWriter, r *http.Request) {
|
||||
devices := []map[string]interface{}{}
|
||||
|
||||
// Get drives
|
||||
drives, err := a.vtlService.ListDrives()
|
||||
if err == nil {
|
||||
for _, drive := range drives {
|
||||
devices = append(devices, map[string]interface{}{
|
||||
"type": "drive",
|
||||
"device": drive.Device,
|
||||
"id": drive.ID,
|
||||
"library_id": drive.LibraryID,
|
||||
"vendor": drive.Vendor,
|
||||
"product": drive.Product,
|
||||
"description": fmt.Sprintf("Tape Drive %d (Library %d) - %s %s", drive.ID, drive.LibraryID, drive.Vendor, drive.Product),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Get medium changers
|
||||
changers, err := a.vtlService.ListMediaChangers()
|
||||
if err == nil {
|
||||
for _, changer := range changers {
|
||||
devices = append(devices, map[string]interface{}{
|
||||
"type": "changer",
|
||||
"device": changer.Device,
|
||||
"id": changer.ID,
|
||||
"library_id": changer.LibraryID,
|
||||
"slots": changer.Slots,
|
||||
"drives": changer.Drives,
|
||||
"description": fmt.Sprintf("Media Changer (Library %d) - %d slots, %d drives", changer.LibraryID, changer.Slots, changer.Drives),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
writeJSON(w, http.StatusOK, devices)
|
||||
}
|
||||
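As a sketch of the response shape this endpoint produces, the snippet below builds entries with the same keys the handler uses; the device paths, IDs, and vendor strings are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical entries mirroring the maps built by handleListVTLDevicesForISCSI.
	devices := []map[string]interface{}{
		{
			"type":        "drive",
			"device":      "/dev/st0",
			"id":          11,
			"library_id":  10,
			"vendor":      "IBM",
			"product":     "ULT3580-TD6",
			"description": "Tape Drive 11 (Library 10) - IBM ULT3580-TD6",
		},
		{
			"type":        "changer",
			"device":      "/dev/sg0",
			"id":          10,
			"library_id":  10,
			"slots":       10,
			"drives":      4,
			"description": "Media Changer (Library 10) - 10 slots, 4 drives",
		},
	}

	out, _ := json.MarshalIndent(devices, "", "  ")
	fmt.Println(string(out))
}
```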
58
internal/models/vtl.go
Normal file
@@ -0,0 +1,58 @@
|
||||
package models
|
||||
|
||||
// VTLDrive represents a virtual tape drive
|
||||
type VTLDrive struct {
|
||||
ID int `json:"id"` // Drive ID (e.g., 11, 12, 13, 14 for library 10)
|
||||
LibraryID int `json:"library_id"` // Library ID (tens digit)
|
||||
SlotID int `json:"slot_id"` // Slot ID (ones digit)
|
||||
Vendor string `json:"vendor"` // Drive vendor (e.g., "IBM")
|
||||
Product string `json:"product"` // Drive product (e.g., "ULT3580-TD5")
|
||||
Type string `json:"type"` // Tape type (e.g., "LTO-5", "LTO-6")
|
||||
Device string `json:"device"` // Device path (e.g., "/dev/st0")
|
||||
Status string `json:"status"` // "online", "offline", "error"
|
||||
MediaLoaded bool `json:"media_loaded"` // Whether tape is loaded
|
||||
Barcode string `json:"barcode"` // Barcode of loaded tape (if any)
|
||||
}
|
||||
|
||||
// VTLMediaChanger represents a virtual media changer
|
||||
type VTLMediaChanger struct {
|
||||
ID int `json:"id"` // Changer ID
|
||||
LibraryID int `json:"library_id"` // Library ID
|
||||
Device string `json:"device"` // Device path (e.g., "/dev/sg0")
|
||||
Status string `json:"status"` // "online", "offline", "error"
|
||||
Slots int `json:"slots"` // Number of slots
|
||||
Drives int `json:"drives"` // Number of drives
|
||||
}
|
||||
|
||||
// VTLTape represents a virtual tape cartridge
|
||||
type VTLTape struct {
|
||||
Barcode string `json:"barcode"` // Tape barcode
|
||||
LibraryID int `json:"library_id"` // Library ID
|
||||
SlotID int `json:"slot_id"` // Slot ID (0 = not in library)
|
||||
DriveID int `json:"drive_id"` // Drive ID if loaded (-1 if not loaded)
|
||||
Type string `json:"type"` // Tape type (e.g., "LTO-5")
|
||||
Size uint64 `json:"size"` // Tape capacity in bytes
|
||||
Used uint64 `json:"used"` // Used space in bytes
|
||||
Status string `json:"status"` // "available", "in_use", "error"
|
||||
}
|
||||
|
||||
// VTLConfig represents mhvtl configuration
|
||||
type VTLConfig struct {
|
||||
Enabled bool `json:"enabled"` // Whether VTL is enabled
|
||||
LibraryID int `json:"library_id"` // Default library ID
|
||||
Drives []VTLDrive `json:"drives"` // List of drives
|
||||
Changer *VTLMediaChanger `json:"changer"` // Media changer
|
||||
Tapes []VTLTape `json:"tapes"` // List of tapes
|
||||
ConfigPath string `json:"config_path"` // Path to mhvtl config
|
||||
StoragePath string `json:"storage_path"` // Path to tape storage
|
||||
}
|
||||
|
||||
// VTLStatus represents overall VTL system status
|
||||
type VTLStatus struct {
|
||||
ServiceRunning bool `json:"service_running"` // Whether mhvtl service is running
|
||||
DrivesOnline int `json:"drives_online"` // Number of online drives
|
||||
DrivesTotal int `json:"drives_total"` // Total number of drives
|
||||
TapesTotal int `json:"tapes_total"` // Total number of tapes
|
||||
TapesAvailable int `json:"tapes_available"` // Number of available tapes
|
||||
LastError string `json:"last_error"` // Last error message (if any)
|
||||
}
|
||||
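A small illustration of how these models serialize under the struct tags above (a minimal sketch; the field values are hypothetical and the struct is a trimmed copy of VTLTape for self-containment):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copy of the VTLTape model defined above, for a self-contained example.
type VTLTape struct {
	Barcode   string `json:"barcode"`
	LibraryID int    `json:"library_id"`
	SlotID    int    `json:"slot_id"`
	DriveID   int    `json:"drive_id"`
	Type      string `json:"type"`
	Size      uint64 `json:"size"`
	Used      uint64 `json:"used"`
	Status    string `json:"status"`
}

func main() {
	t := VTLTape{
		Barcode:   "ATL001L6",
		LibraryID: 10,
		SlotID:    3,
		DriveID:   -1, // -1 = not loaded in any drive
		Type:      "LTO-6",
		Size:      2500 * 1000 * 1000 * 1000, // illustrative ~2.5 TB capacity
		Status:    "available",
	}
	out, _ := json.MarshalIndent(t, "", "  ")
	fmt.Println(string(out))
}
```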
2294
internal/services/vtl.go
Normal file
File diff suppressed because it is too large
113
scripts/push-to-repo.sh
Executable file
@@ -0,0 +1,113 @@
|
||||
#!/bin/bash
|
||||
#
|
||||
# Script to push Atlas changes to repository
|
||||
# This script commits all changes, updates version, and pushes to remote
|
||||
#
|
||||
# Usage: ./scripts/push-to-repo.sh [commit message] [--skip-version]
|
||||
#
|
||||
|
||||
set -e
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Get script directory
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
REPO_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
|
||||
|
||||
cd "$REPO_ROOT"
|
||||
|
||||
# Check if git is available
|
||||
if ! command -v git &>/dev/null; then
|
||||
echo -e "${RED}Error: git is not installed${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check if we're in a git repository
|
||||
if ! git rev-parse --git-dir &>/dev/null; then
|
||||
echo -e "${RED}Error: Not in a git repository${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Get commit message from argument or use default
|
||||
COMMIT_MSG="${1:-Update Atlas with VTL features and improvements}"
|
||||
|
||||
# Check if --skip-version flag is set
|
||||
SKIP_VERSION=false
|
||||
if [[ "$*" == *"--skip-version"* ]]; then
|
||||
SKIP_VERSION=true
|
||||
fi
|
||||
|
||||
echo -e "${GREEN}Preparing to push changes to repository...${NC}"
|
||||
|
||||
# Check for uncommitted changes
|
||||
if git diff --quiet && git diff --cached --quiet; then
|
||||
echo -e "${YELLOW}No changes to commit${NC}"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Show status
|
||||
echo -e "${GREEN}Current git status:${NC}"
|
||||
git status --short
|
||||
|
||||
# Ask for confirmation
|
||||
read -p "Continue with commit and push? (y/n) " -n 1 -r
|
||||
echo ""
|
||||
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
|
||||
echo -e "${YELLOW}Aborted${NC}"
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Update version if not skipped
|
||||
if [[ "$SKIP_VERSION" == false ]]; then
|
||||
echo -e "${GREEN}Updating version...${NC}"
|
||||
# You can add version update logic here if needed
|
||||
# For example, update a VERSION file or tag
|
||||
fi
|
||||
|
||||
# Stage all changes
|
||||
echo -e "${GREEN}Staging all changes...${NC}"
|
||||
git add -A
|
||||
|
||||
# Commit changes
|
||||
echo -e "${GREEN}Committing changes...${NC}"
|
||||
git commit -m "$COMMIT_MSG" || {
|
||||
echo -e "${YELLOW}No changes to commit${NC}"
|
||||
exit 0
|
||||
}
|
||||
|
||||
# Get current branch
|
||||
CURRENT_BRANCH=$(git branch --show-current)
|
||||
echo -e "${GREEN}Current branch: $CURRENT_BRANCH${NC}"
|
||||
|
||||
# Check if remote exists
|
||||
if ! git remote | grep -q origin; then
|
||||
echo -e "${YELLOW}Warning: No 'origin' remote found${NC}"
|
||||
read -p "Do you want to set up a remote? (y/n) " -n 1 -r
|
||||
echo ""
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]; then
|
||||
read -p "Enter remote URL: " REMOTE_URL
|
||||
git remote add origin "$REMOTE_URL"
|
||||
else
|
||||
echo -e "${YELLOW}Skipping push (no remote configured)${NC}"
|
||||
exit 0
|
||||
fi
|
||||
fi
|
||||
|
||||
# Push to remote
|
||||
echo -e "${GREEN}Pushing to remote repository...${NC}"
|
||||
if git push origin "$CURRENT_BRANCH"; then
|
||||
echo -e "${GREEN}✓ Successfully pushed to repository${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ Push failed${NC}"
|
||||
echo "You may need to:"
|
||||
echo " 1. Set upstream: git push -u origin $CURRENT_BRANCH"
|
||||
echo " 2. Pull first: git pull origin $CURRENT_BRANCH"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${GREEN}Done!${NC}"
|
||||
|
||||
2028
tui-rust/Cargo.lock
generated
Normal file
File diff suppressed because it is too large
16
tui-rust/Cargo.toml
Normal file
@@ -0,0 +1,16 @@
|
||||
[package]
|
||||
name = "atlas-tui"
|
||||
version = "0.1.0"
|
||||
edition = "2021"
|
||||
|
||||
[dependencies]
|
||||
ratatui = "0.27"
|
||||
crossterm = "0.28"
|
||||
serde = { version = "1.0", features = ["derive"] }
|
||||
serde_json = "1.0"
|
||||
reqwest = { version = "0.12", features = ["json"] }
|
||||
tokio = { version = "1.0", features = ["full"] }
|
||||
anyhow = "1.0"
|
||||
dirs = "5.0"
|
||||
|
||||
|
||||
50
tui-rust/README.md
Normal file
@@ -0,0 +1,50 @@
# AtlasOS TUI (Rust + ratatui)

A terminal user interface for AtlasOS, built with Rust and ratatui.

## Features

- Modern TUI built with ratatui
- Keyboard-driven navigation
- Support for all AtlasOS API features
- Login authentication
- Real-time data display

## Build

```bash
cd tui-rust
cargo build --release
```

The binary will be at `target/release/atlas-tui`

## Run

```bash
./target/release/atlas-tui
```

Or set an environment variable for the API URL:
```bash
ATLAS_API_URL=http://localhost:8080 ./target/release/atlas-tui
```

## Dependencies

- Rust 1.70+
- ratatui 0.27
- crossterm 0.28
- reqwest (HTTP client)
- tokio (async runtime)

## Status

🚧 **Work in Progress** - The basic implementation is in place; still needed:
- Complete all menu handlers
- Input forms for create/edit operations
- Better error handling
- Loading states
- Data tables for list views
173
tui-rust/src/api.rs
Normal file
@@ -0,0 +1,173 @@
|
||||
use anyhow::{Context, Result};
|
||||
use reqwest::Client;
|
||||
use serde::{Deserialize, Serialize};
|
||||
use serde_json::Value;
|
||||
use std::collections::HashMap;
|
||||
|
||||
pub struct APIClient {
|
||||
base_url: String,
|
||||
client: Client,
|
||||
token: Option<String>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize)]
|
||||
pub struct LoginRequest {
|
||||
pub username: String,
|
||||
pub password: String,
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize, Deserialize)]
|
||||
pub struct LoginResponse {
|
||||
pub token: String,
|
||||
pub user: Option<Value>,
|
||||
}
|
||||
|
||||
impl APIClient {
|
||||
pub fn new(base_url: String) -> Self {
|
||||
Self {
|
||||
base_url,
|
||||
client: Client::new(),
|
||||
token: None,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn set_token(&mut self, token: String) {
|
||||
self.token = Some(token);
|
||||
}
|
||||
|
||||
pub fn has_token(&self) -> bool {
|
||||
self.token.is_some()
|
||||
}
|
||||
|
||||
pub async fn login(&mut self, username: String, password: String) -> Result<LoginResponse> {
|
||||
let url = format!("{}/api/v1/auth/login", self.base_url);
|
||||
let req = LoginRequest { username, password };
|
||||
|
||||
let response = self
|
||||
.client
|
||||
.post(&url)
|
||||
.json(&req)
|
||||
.send()
|
||||
.await
|
||||
.context("Failed to send login request")?;
|
||||
|
||||
let status = response.status();
|
||||
if !status.is_success() {
|
||||
let text = response.text().await.unwrap_or_default();
|
||||
anyhow::bail!("Login failed: {}", text);
|
||||
}
|
||||
|
||||
let login_resp: LoginResponse = response
|
||||
.json()
|
||||
.await
|
||||
.context("Failed to parse login response")?;
|
||||
|
||||
self.set_token(login_resp.token.clone());
|
||||
Ok(login_resp)
|
||||
}
|
||||
|
||||
pub async fn get(&self, path: &str) -> Result<Value> {
|
||||
let url = format!("{}{}", self.base_url, path);
|
||||
let mut request = self.client.get(&url);
|
||||
|
||||
if let Some(ref token) = self.token {
|
||||
request = request.bearer_auth(token);
|
||||
}
|
||||
|
||||
let response = request
|
||||
.send()
|
||||
.await
|
||||
.context("Failed to send GET request")?;
|
||||
|
||||
let status = response.status();
|
||||
if !status.is_success() {
|
||||
let text = response.text().await.unwrap_or_default();
|
||||
anyhow::bail!("API error (status {}): {}", status, text);
|
||||
}
|
||||
|
||||
let json: Value = response
|
||||
.json()
|
||||
.await
|
||||
.context("Failed to parse JSON response")?;
|
||||
|
||||
Ok(json)
|
||||
}
|
||||
|
||||
pub async fn post(&self, path: &str, body: &Value) -> Result<Value> {
|
||||
let url = format!("{}{}", self.base_url, path);
|
||||
let mut request = self.client.post(&url).json(body);
|
||||
|
||||
if let Some(ref token) = self.token {
|
||||
request = request.bearer_auth(token);
|
||||
}
|
||||
|
||||
let response = request
|
||||
.send()
|
||||
.await
|
||||
.context("Failed to send POST request")?;
|
||||
|
||||
let status = response.status();
|
||||
if !status.is_success() {
|
||||
let text = response.text().await.unwrap_or_default();
|
||||
anyhow::bail!("API error (status {}): {}", status, text);
|
||||
}
|
||||
|
||||
let json: Value = response
|
||||
.json()
|
||||
.await
|
||||
.context("Failed to parse JSON response")?;
|
||||
|
||||
Ok(json)
|
||||
}
|
||||
|
||||
pub async fn delete(&self, path: &str) -> Result<()> {
|
||||
let url = format!("{}{}", self.base_url, path);
|
||||
let mut request = self.client.delete(&url);
|
||||
|
||||
if let Some(ref token) = self.token {
|
||||
request = request.bearer_auth(token);
|
||||
}
|
||||
|
||||
let response = request
|
||||
.send()
|
||||
.await
|
||||
.context("Failed to send DELETE request")?;
|
||||
|
||||
let status = response.status();
|
||||
if !status.is_success() {
|
||||
let text = response.text().await.unwrap_or_default();
|
||||
anyhow::bail!("API error (status {}): {}", status, text);
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
pub async fn put(&self, path: &str, body: &Value) -> Result<Value> {
|
||||
let url = format!("{}{}", self.base_url, path);
|
||||
let mut request = self.client.put(&url).json(body);
|
||||
|
||||
if let Some(ref token) = self.token {
|
||||
request = request.bearer_auth(token);
|
||||
}
|
||||
|
||||
let response = request
|
||||
.send()
|
||||
.await
|
||||
.context("Failed to send PUT request")?;
|
||||
|
||||
let status = response.status();
|
||||
if !status.is_success() {
|
||||
let text = response.text().await.unwrap_or_default();
|
||||
anyhow::bail!("API error (status {}): {}", status, text);
|
||||
}
|
||||
|
||||
let json: Value = response
|
||||
.json()
|
||||
.await
|
||||
.context("Failed to parse JSON response")?;
|
||||
|
||||
Ok(json)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
189
tui-rust/src/app.rs
Normal file
@@ -0,0 +1,189 @@
|
||||
use crate::api::APIClient;
|
||||
use crate::ui;
|
||||
use anyhow::Result;
|
||||
use crossterm::event::{self, Event, KeyCode, KeyEventKind};
|
||||
use ratatui::backend::Backend;
|
||||
use ratatui::Terminal;
|
||||
use ratatui::widgets::ListState;
|
||||
use std::io;
|
||||
|
||||
#[derive(Clone, Copy, PartialEq)]
|
||||
pub enum LoginStep {
|
||||
Username,
|
||||
Password,
|
||||
Done,
|
||||
}
|
||||
|
||||
pub enum AppState {
|
||||
Login,
|
||||
MainMenu,
|
||||
ZFSMenu,
|
||||
StorageMenu,
|
||||
SnapshotMenu,
|
||||
SystemMenu,
|
||||
BackupMenu,
|
||||
UserMenu,
|
||||
ServiceMenu,
|
||||
ViewingData,
|
||||
InputPrompt(String), // Prompt message
|
||||
Exit,
|
||||
}
|
||||
|
||||
pub struct App {
|
||||
pub api_client: APIClient,
|
||||
pub state: AppState,
|
||||
pub input_buffer: String,
|
||||
pub input_mode: bool,
|
||||
pub error_message: Option<String>,
|
||||
pub success_message: Option<String>,
|
||||
pub data: Option<serde_json::Value>,
|
||||
pub selected_index: usize,
|
||||
pub list_state: ListState,
|
||||
pub username: String,
|
||||
pub password: String,
|
||||
pub login_step: LoginStep,
|
||||
}
|
||||
|
||||
impl App {
|
||||
pub fn new(api_url: String) -> Self {
|
||||
Self {
|
||||
api_client: APIClient::new(api_url),
|
||||
state: AppState::Login,
|
||||
input_buffer: String::new(),
|
||||
input_mode: false,
|
||||
error_message: None,
|
||||
success_message: None,
|
||||
data: None,
|
||||
selected_index: 0,
|
||||
list_state: ListState::default(),
|
||||
username: String::new(),
|
||||
password: String::new(),
|
||||
login_step: LoginStep::Username,
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn run<B: Backend>(&mut self, terminal: &mut Terminal<B>) -> Result<()> {
|
||||
loop {
|
||||
terminal.draw(|f| ui::draw(f, self))?;
|
||||
|
||||
if let Event::Key(key) = event::read()? {
|
||||
if key.kind == KeyEventKind::Press {
|
||||
match key.code {
|
||||
KeyCode::Char('q') | KeyCode::Esc => {
|
||||
if matches!(self.state, AppState::Login) {
|
||||
self.state = AppState::Exit;
|
||||
break;
|
||||
} else {
|
||||
self.state = AppState::MainMenu;
|
||||
}
|
||||
}
|
||||
KeyCode::Enter => {
|
||||
self.handle_enter().await?;
|
||||
}
|
||||
KeyCode::Backspace => {
|
||||
if self.input_mode {
|
||||
self.input_buffer.pop();
|
||||
}
|
||||
}
|
||||
KeyCode::Char(c) => {
|
||||
if self.input_mode {
|
||||
self.input_buffer.push(c);
|
||||
} else {
|
||||
self.handle_key(c).await?;
|
||||
}
|
||||
}
|
||||
KeyCode::Up => {
|
||||
if self.selected_index > 0 {
|
||||
self.selected_index -= 1;
|
||||
}
|
||||
self.list_state.select(Some(self.selected_index));
|
||||
}
|
||||
KeyCode::Down => {
|
||||
self.selected_index += 1;
|
||||
self.list_state.select(Some(self.selected_index));
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if matches!(self.state, AppState::Exit) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn handle_enter(&mut self) -> Result<()> {
|
||||
match &self.state {
|
||||
AppState::Login => {
|
||||
// Login will be handled in input mode
|
||||
if !self.input_mode {
|
||||
self.input_mode = true;
|
||||
self.input_buffer.clear();
|
||||
}
|
||||
}
|
||||
AppState::MainMenu => {
|
||||
match self.selected_index {
|
||||
0 => self.state = AppState::ZFSMenu,
|
||||
1 => self.state = AppState::StorageMenu,
|
||||
2 => self.state = AppState::SnapshotMenu,
|
||||
3 => self.state = AppState::SystemMenu,
|
||||
4 => self.state = AppState::BackupMenu,
|
||||
5 => self.state = AppState::UserMenu,
|
||||
6 => self.state = AppState::ServiceMenu,
|
||||
_ => {}
|
||||
}
|
||||
self.selected_index = 0;
|
||||
self.list_state.select(Some(0));
|
||||
}
|
||||
AppState::ZFSMenu => {
|
||||
// Handle ZFS menu selection
|
||||
self.handle_zfs_action().await?;
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn handle_key(&mut self, c: char) -> Result<()> {
|
||||
match c {
|
||||
'1'..='9' => {
|
||||
if matches!(self.state, AppState::MainMenu) {
|
||||
let idx = (c as usize) - ('1' as usize);
|
||||
if idx < 7 {
|
||||
self.selected_index = idx;
|
||||
}
|
||||
}
|
||||
}
|
||||
'0' => {
|
||||
if matches!(self.state, AppState::MainMenu) {
|
||||
self.state = AppState::Exit;
|
||||
}
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
async fn handle_zfs_action(&mut self) -> Result<()> {
|
||||
match self.selected_index {
|
||||
0 => {
|
||||
// List pools
|
||||
match self.api_client.get("/api/v1/pools").await {
|
||||
Ok(data) => {
|
||||
self.data = Some(data);
|
||||
self.state = AppState::ViewingData;
|
||||
}
|
||||
Err(e) => {
|
||||
self.error_message = Some(format!("Error: {}", e));
|
||||
}
|
||||
}
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
55
tui-rust/src/main.rs
Normal file
@@ -0,0 +1,55 @@
|
||||
use anyhow::Result;
|
||||
use crossterm::{
    event::{DisableMouseCapture, EnableMouseCapture},
    execute,
    terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
};
use ratatui::{backend::CrosstermBackend, Terminal};
|
||||
use std::io;
|
||||
|
||||
mod api;
|
||||
mod app;
|
||||
mod ui;
|
||||
|
||||
use app::App;
|
||||
|
||||
#[tokio::main]
|
||||
async fn main() -> Result<()> {
|
||||
// Setup terminal
|
||||
enable_raw_mode()?;
|
||||
let mut stdout = io::stdout();
|
||||
execute!(stdout, EnterAlternateScreen, EnableMouseCapture)?;
|
||||
let backend = CrosstermBackend::new(stdout);
|
||||
let mut terminal = Terminal::new(backend)?;
|
||||
|
||||
// Get API URL from environment or use default
|
||||
let api_url = std::env::var("ATLAS_API_URL")
|
||||
.unwrap_or_else(|_| "http://localhost:8080".to_string());
|
||||
|
||||
// Create app
|
||||
let mut app = App::new(api_url);
|
||||
let res = app.run(&mut terminal).await;
|
||||
|
||||
// Restore terminal
|
||||
disable_raw_mode()?;
|
||||
execute!(
|
||||
terminal.backend_mut(),
|
||||
LeaveAlternateScreen,
|
||||
DisableMouseCapture
|
||||
)?;
|
||||
terminal.show_cursor()?;
|
||||
|
||||
if let Err(err) = res {
|
||||
println!("Error: {:?}", err);
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
157
tui-rust/src/ui.rs
Normal file
@@ -0,0 +1,157 @@
|
||||
use crate::app::{App, AppState, LoginStep};
|
||||
use ratatui::{
|
||||
layout::{Alignment, Constraint, Direction, Layout, Rect},
|
||||
style::{Color, Modifier, Style},
|
||||
text::{Line, Span},
|
||||
widgets::{Block, Borders, List, ListItem, Paragraph, Wrap},
|
||||
Frame,
|
||||
};
|
||||
|
||||
pub fn draw(frame: &mut Frame, app: &App) {
|
||||
let chunks = Layout::default()
|
||||
.direction(Direction::Vertical)
|
||||
.constraints([
|
||||
Constraint::Length(3), // Header
|
||||
Constraint::Min(0), // Main content
|
||||
Constraint::Length(3), // Footer/Status
|
||||
])
|
||||
.split(frame.size());
|
||||
|
||||
// Header
|
||||
let header = Paragraph::new("AtlasOS Terminal Interface")
|
||||
.style(Style::default().fg(Color::Cyan).add_modifier(Modifier::BOLD))
|
||||
.alignment(Alignment::Center)
|
||||
.block(Block::default().borders(Borders::ALL));
|
||||
frame.render_widget(header, chunks[0]);
|
||||
|
||||
// Main content based on state
|
||||
match &app.state {
|
||||
AppState::Login => draw_login(frame, app, chunks[1]),
|
||||
AppState::MainMenu => draw_main_menu(frame, app, chunks[1]),
|
||||
AppState::ZFSMenu => draw_zfs_menu(frame, app, chunks[1]),
|
||||
AppState::ViewingData => draw_data_view(frame, app, chunks[1]),
|
||||
_ => draw_main_menu(frame, app, chunks[1]),
|
||||
}
|
||||
|
||||
// Footer/Status
|
||||
let footer_text = if app.input_mode {
|
||||
format!("Input: {}", app.input_buffer)
|
||||
} else {
|
||||
"Press 'q' to quit, Enter to select".to_string()
|
||||
};
|
||||
|
||||
let footer = Paragraph::new(footer_text)
|
||||
.style(Style::default().fg(Color::Yellow))
|
||||
.block(Block::default().borders(Borders::ALL));
|
||||
frame.render_widget(footer, chunks[2]);
|
||||
|
||||
// Show error/success messages
|
||||
if let Some(ref error) = app.error_message {
|
||||
let error_block = Paragraph::new(error.as_str())
|
||||
.style(Style::default().fg(Color::Red))
|
||||
.block(Block::default().borders(Borders::ALL).title("Error"));
|
||||
frame.render_widget(error_block, frame.size());
|
||||
}
|
||||
|
||||
if let Some(ref success) = app.success_message {
|
||||
let success_block = Paragraph::new(success.as_str())
|
||||
.style(Style::default().fg(Color::Green))
|
||||
.block(Block::default().borders(Borders::ALL).title("Success"));
|
||||
frame.render_widget(success_block, frame.size());
|
||||
}
|
||||
}
|
||||
|
||||
fn draw_login(frame: &mut Frame, app: &App, area: Rect) {
|
||||
let chunks = Layout::default()
|
||||
.direction(Direction::Vertical)
|
||||
.constraints([Constraint::Length(3), Constraint::Length(3), Constraint::Min(0)])
|
||||
.split(area);
|
||||
|
||||
let username_prompt = Paragraph::new(if app.login_step == LoginStep::Username {
|
||||
format!("Username: {}", app.input_buffer)
|
||||
} else {
|
||||
format!("Username: {}", app.username)
|
||||
})
|
||||
.block(Block::default().borders(Borders::ALL).title("Login - Username"));
|
||||
frame.render_widget(username_prompt, chunks[0]);
|
||||
|
||||
let password_prompt = Paragraph::new(if app.login_step == LoginStep::Password {
|
||||
format!("Password: {}", "*".repeat(app.input_buffer.len()))
|
||||
} else if app.login_step == LoginStep::Done {
|
||||
"Password: ********".to_string()
|
||||
} else {
|
||||
"Password: ".to_string()
|
||||
})
|
||||
.block(Block::default().borders(Borders::ALL).title("Login - Password"));
|
||||
frame.render_widget(password_prompt, chunks[1]);
|
||||
}
|
||||
|
||||
fn draw_main_menu(frame: &mut Frame, app: &App, area: Rect) {
|
||||
let items = vec![
|
||||
ListItem::new("1. ZFS Management"),
|
||||
ListItem::new("2. Storage Services"),
|
||||
ListItem::new("3. Snapshots"),
|
||||
ListItem::new("4. System Information"),
|
||||
ListItem::new("5. Backup & Restore"),
|
||||
ListItem::new("6. User Management"),
|
||||
ListItem::new("7. Service Management"),
|
||||
ListItem::new("0. Exit"),
|
||||
];
|
||||
|
||||
let list = List::new(items)
|
||||
.block(Block::default().borders(Borders::ALL).title("Main Menu"))
|
||||
.highlight_style(
|
||||
Style::default()
|
||||
.bg(Color::Blue)
|
||||
.add_modifier(Modifier::BOLD),
|
||||
)
|
||||
.highlight_symbol(">> ");
|
||||
|
||||
frame.render_stateful_widget(list, area, &mut app.list_state.clone());
|
||||
}
|
||||
|
||||
fn draw_zfs_menu(frame: &mut Frame, app: &App, area: Rect) {
|
||||
let items = vec![
|
||||
ListItem::new("1. List Pools"),
|
||||
ListItem::new("2. Create Pool"),
|
||||
ListItem::new("3. Delete Pool"),
|
||||
ListItem::new("4. Import Pool"),
|
||||
ListItem::new("5. Export Pool"),
|
||||
ListItem::new("6. List Available Pools"),
|
||||
ListItem::new("7. Start Scrub"),
|
||||
ListItem::new("8. Get Scrub Status"),
|
||||
ListItem::new("9. List Datasets"),
|
||||
ListItem::new("10. Create Dataset"),
|
||||
ListItem::new("11. Delete Dataset"),
|
||||
ListItem::new("12. List ZVOLs"),
|
||||
ListItem::new("13. Create ZVOL"),
|
||||
ListItem::new("14. Delete ZVOL"),
|
||||
ListItem::new("15. List Disks"),
|
||||
ListItem::new("0. Back"),
|
||||
];
|
||||
|
||||
let list = List::new(items)
|
||||
.block(Block::default().borders(Borders::ALL).title("ZFS Management"))
|
||||
.highlight_style(
|
||||
Style::default()
|
||||
.bg(Color::Blue)
|
||||
.add_modifier(Modifier::BOLD),
|
||||
)
|
||||
.highlight_symbol(">> ");
|
||||
|
||||
frame.render_stateful_widget(list, area, &mut app.list_state.clone());
|
||||
}
|
||||
|
||||
fn draw_data_view(frame: &mut Frame, app: &App, area: Rect) {
|
||||
let text = if let Some(ref data) = app.data {
|
||||
serde_json::to_string_pretty(data).unwrap_or_else(|_| "Invalid JSON".to_string())
|
||||
} else {
|
||||
"No data".to_string()
|
||||
};
|
||||
|
||||
let paragraph = Paragraph::new(text)
|
||||
.block(Block::default().borders(Borders::ALL).title("Data"))
|
||||
.wrap(Wrap { trim: true });
|
||||
frame.render_widget(paragraph, area);
|
||||
}
|
||||
|
||||
1
tui-rust/target/.rustc_info.json
Normal file
3
tui-rust/target/CACHEDIR.TAG
Normal file
0
tui-rust/target/debug/.cargo-lock
Normal file
[Generated cargo build output under tui-rust/target/ (rustc fingerprint JSON, cache-directory tags, .cargo-lock, and binary artifacts) is also part of this diff; the machine-generated contents are omitted here.]
Some files were not shown because too many files have changed in this diff