Compare commits

62 Commits: 4991a95008...atlas-alph

| SHA1 |
|---|
| 094ea1b1fe |
| 7826c6ed24 |
| 4c3ea0059d |
| 6a5ead9dbf |
| 268af8d691 |
| ad83ae84e4 |
| b1a47685f9 |
| 6202ef8e83 |
| 2bb892dfdc |
| a463a09329 |
| 45aaec9e47 |
| 3a25138d5b |
| b90c725cdb |
| 98bedf6487 |
| 8029bcfa15 |
| e36c855bf4 |
| 4e8fb66e25 |
| 11b8196d84 |
| 78f99033fa |
| 4b11d839ec |
| d9dcb00b0f |
| 95b2dbac04 |
| 8b5183d98a |
| d55206af82 |
| b335b0d9f3 |
| c98b5b0935 |
| 0e26ed99bc |
| 945217c536 |
| 4ad93e7fe5 |
| def02bb36d |
| 746cf809df |
| 315e44bb62 |
| a7ba6c83ea |
| 27b0400ef3 |
| f1a344bf6a |
| e1a66dc7df |
| 1c53988cbd |
| b4ef76f0d0 |
| 732e5aca11 |
| f45c878051 |
| c405ca27dd |
| 5abcbb7dda |
| 7ac7e77f1d |
| 921e7219ab |
| ad0c4dfc24 |
| abd8cef10a |
| 9779b30a65 |
| 507961716e |
| 96a6b5a4cf |
| df475bc85e |
| 3e64de18ed |
| 7c33e736f9 |
| 54e76d9304 |
| ed96137bad |
| 461edbc970 |
| a6da313dfc |
| f4683eeb73 |
| adc97943cd |
| 2259191e29 |
| 52cbd13941 |
| cf7669191e |
| 9ae433aae9 |
58
.cursor/rules/atlas-project-rules.mdc
Normal file
@@ -0,0 +1,58 @@
---
alwaysApply: true
---
##########################################
# Atlas Project Standard Rules v1.0
# ISO Ref: DevOps-Config-2025
# Maintainer: Adastra - InfraOps Team
##########################################

## Metadata
- Template Name : Atlas Project Standard Rules
- Version : 1.0
- Maintainer : InfraOps Team
- Last Updated : 2025-12-14

---

## Rule Categories

### 🔧 Indentation & Spacing
[ ] CURSOR-001: Use 2 spaces for indentation
[ ] CURSOR-002: Avoid tabs, use spaces consistently

### 📂 Naming Convention
[ ] CURSOR-010: Files must use snake_case
[ ] CURSOR-011: Folders use kebab-case
[ ] CURSOR-012: Config files must have the `.conf` suffix
[ ] CURSOR-013: Script files must have the `.sh` suffix
[ ] CURSOR-014: Log files must have the `.log` suffix

### 🗂 File Structure
[ ] CURSOR-020: Every file must have a metadata header
[ ] CURSOR-021: Keep config, script, and log folders separate
[ ] CURSOR-022: No empty files in the repo

### ✅ Audit & Compliance
[ ] CURSOR-030: Checklist must be complete before committing
[ ] CURSOR-031: All configs validated by linting
[ ] CURSOR-032: Branding banner required in every template

### ⚠️ Error Handling
[ ] CURSOR-040: Error logs must be routed to the `/logs` folder
[ ] CURSOR-041: No hardcoded paths in scripts
[ ] CURSOR-042: Every service startup is verified

---

## Compliance Scoring
- [ ] 100% → Audit Passed
- [ ] 80–99% → Minor Findings
- [ ] <80% → Audit Failed

---

## Notes
- Every rule must be mapped to a unique ID (CURSOR-XXX).
- New versions must update the metadata & banner.
- This checklist can be reused across projects for consistency.

@@ -2,18 +2,38 @@ name: CI

on:
  push:
    branches: [ "main", "develop" ]
    branches: ["main", "develop"]
  pull_request:

jobs:
  build:
  test-build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.22'
          go-version: "1.22"
          cache: true

      - name: Go env
        run: |
          go version
          go env

      - name: Vet
        run: go vet ./...

      - name: Test
        run: go test ./...
        run: go test ./... -race -count=1

      - name: Build
        run: go build ./cmd/...

      - name: Quick static checks (optional)
        run: |
          # gofmt check (fails if formatting differs)
          test -z "$(gofmt -l . | head -n 1)"

1
.gitignore
vendored
@@ -2,6 +2,7 @@
atlas-api
atlas-tui
atlas-agent
pluto-api

# Go
/vendor/

185
AtlasOS_SRS_v1.md
Normal file
@@ -0,0 +1,185 @@
|
||||
SOFTWARE REQUIREMENTS SPECIFICATION (SRS)
|
||||
AtlasOS – Storage Controller Operating System (v1)
|
||||
|
||||
==================================================
|
||||
|
||||
1. INTRODUCTION
|
||||
--------------------------------------------------
|
||||
1.1 Purpose
|
||||
This document defines the functional and non-functional requirements for AtlasOS v1,
|
||||
a storage controller operating system built on Linux with ZFS as the core storage engine.
|
||||
It serves as the authoritative reference for development scope, validation, and acceptance.
|
||||
|
||||
1.2 Scope
|
||||
AtlasOS v1 provides:
|
||||
- ZFS pool, dataset, and ZVOL management
|
||||
- Storage services: SMB, NFS, iSCSI (ZVOL-backed)
|
||||
- Virtual Tape Library (VTL) with mhvtl for tape emulation
|
||||
- Automated snapshot management
|
||||
- Role-Based Access Control (RBAC) and audit logging
|
||||
- Web-based GUI and local TUI
|
||||
- Monitoring and Prometheus-compatible metrics
|
||||
|
||||
The following are explicitly out of scope for v1:
|
||||
- High Availability (HA) or clustering
|
||||
- Multi-node replication
|
||||
- Object storage (S3)
|
||||
- Active Directory / LDAP integration
|
||||
|
||||
1.3 Definitions
|
||||
Dataset : ZFS filesystem
|
||||
ZVOL : ZFS block device
|
||||
LUN : Logical Unit Number exposed via iSCSI
|
||||
Job : Asynchronous long-running operation
|
||||
Desired State : Configuration stored in DB and applied atomically to system
|
||||
|
||||
==================================================
|
||||
|
||||
2. SYSTEM OVERVIEW
|
||||
--------------------------------------------------
|
||||
AtlasOS consists of:
|
||||
- Base OS : Minimal Linux (Ubuntu/Debian)
|
||||
- Data Plane : ZFS and storage services
|
||||
- Control Plane: Go backend with HTMX-based UI
|
||||
- Interfaces : Web GUI, TUI, Metrics endpoint
|
||||
|
||||
==================================================
|
||||
|
||||
3. USER CLASSES
|
||||
--------------------------------------------------
|
||||
Administrator : Full system and storage control
|
||||
Operator : Storage and service operations
|
||||
Viewer : Read-only access
|
||||
|
||||
==================================================
|
||||
|
||||
4. FUNCTIONAL REQUIREMENTS
|
||||
--------------------------------------------------
|
||||
|
||||
4.1 Authentication & Authorization
|
||||
- System SHALL require authentication for all management access
|
||||
- System SHALL enforce RBAC with predefined roles
|
||||
- Access SHALL be denied by default
|
||||
|
||||
4.2 ZFS Management
|
||||
- System SHALL list available disks (read-only)
|
||||
- System SHALL create, import, and export ZFS pools
|
||||
- System SHALL report pool health status
|
||||
- System SHALL create and manage datasets
|
||||
- System SHALL create ZVOLs for block storage
|
||||
- System SHALL support scrub operations with progress monitoring
|
||||
|
||||
4.3 Snapshot Management
|
||||
- System SHALL support manual snapshot creation
|
||||
- System SHALL support automated snapshot policies
|
||||
- System SHALL allow per-dataset snapshot enable/disable
|
||||
- System SHALL prune snapshots based on retention policy
|
||||
|
||||
4.4 SMB Service
|
||||
- System SHALL create SMB shares mapped to datasets
|
||||
- System SHALL manage share permissions
|
||||
- System SHALL apply configuration atomically
|
||||
- System SHALL reload service safely
|
||||
|
||||
4.5 NFS Service
|
||||
- System SHALL create NFS exports per dataset
|
||||
- System SHALL support RW/RO and client restrictions
|
||||
- System SHALL regenerate exports from desired state
|
||||
- System SHALL reload NFS exports safely
|
||||
|
||||
4.6 iSCSI Block Storage
|
||||
- System SHALL provision ZVOL-backed LUNs
|
||||
- System SHALL create iSCSI targets with IQN
|
||||
- System SHALL map LUNs to targets
|
||||
- System SHALL configure initiator ACLs
|
||||
- System SHALL expose connection instructions
|
||||
|
||||
4.6.1 Virtual Tape Library (VTL)
|
||||
- System SHALL manage mhvtl service (start, stop, restart)
|
||||
- System SHALL create and manage virtual tape libraries (media changers)
|
||||
- System SHALL create and manage virtual tape drives (LTO-5 through LTO-8)
|
||||
- System SHALL create and manage virtual tape cartridges
|
||||
- System SHALL support tape operations (load, eject, read, write)
|
||||
- System SHALL manage library_contents files for tape inventory
|
||||
- System SHALL validate drive ID conflicts to prevent device path collisions
|
||||
- System SHALL automatically restart mhvtl service after configuration changes
|
||||
- System SHALL support multiple vendors (IBM, HP, Quantum, Tandberg, Overland)
|
||||
- System SHALL enforce RBAC for VTL operations (Administrator and Operator only)
|
||||
|
||||
4.7 Job Management
|
||||
- System SHALL execute long-running operations as jobs
|
||||
- System SHALL track job status and progress
|
||||
- System SHALL persist job history
|
||||
- Failed jobs SHALL not leave system inconsistent
|
||||
|
||||
4.8 Audit Logging
|
||||
- System SHALL log all mutating operations
|
||||
- Audit log SHALL record actor, action, resource, and timestamp
|
||||
- Audit log SHALL be immutable from the UI
|
||||
|
||||
4.9 Web GUI
|
||||
- System SHALL provide a web-based management interface
|
||||
- GUI SHALL support partial updates
|
||||
- GUI SHALL display system health and alerts
|
||||
- Destructive actions SHALL require confirmation
|
||||
|
||||
4.10 TUI
|
||||
- System SHALL provide a local console interface
|
||||
- TUI SHALL support initial system setup
|
||||
- TUI SHALL allow monitoring and maintenance operations
|
||||
- TUI SHALL function without web UI availability
|
||||
|
||||
4.11 Monitoring & Metrics
|
||||
- System SHALL expose /metrics in Prometheus format
|
||||
- System SHALL expose pool health and capacity metrics
|
||||
- System SHALL expose job failure metrics
|
||||
- GUI SHALL present a metrics summary
|
||||
|
||||
4.12 Update & Maintenance
|
||||
- System SHALL support safe update mechanisms
|
||||
- Configuration SHALL be backed up prior to updates
|
||||
- Maintenance mode SHALL disable user operations
|
||||
|
||||
==================================================
|
||||
|
||||
5. NON-FUNCTIONAL REQUIREMENTS
|
||||
--------------------------------------------------
|
||||
|
||||
5.1 Reliability
|
||||
- Storage operations SHALL be transactional where possible
|
||||
- System SHALL recover gracefully from partial failures
|
||||
|
||||
5.2 Performance
|
||||
- Management UI read operations SHOULD respond within 500ms
|
||||
- Background jobs SHALL not block UI responsiveness
|
||||
|
||||
5.3 Security
|
||||
- HTTPS SHALL be enforced for the web UI
|
||||
- Secrets SHALL NOT be logged in plaintext
|
||||
- Least-privilege access SHALL be enforced
|
||||
|
||||
5.4 Maintainability
|
||||
- Configuration SHALL be declarative
|
||||
- System SHALL provide diagnostic information for support
|
||||
|
||||
==================================================
|
||||
|
||||
6. CONSTRAINTS & ASSUMPTIONS
|
||||
--------------------------------------------------
|
||||
- Single-node controller
|
||||
- Linux kernel with ZFS support
|
||||
- Local storage only
|
||||
|
||||
==================================================
|
||||
|
||||
7. ACCEPTANCE CRITERIA (v1)
|
||||
--------------------------------------------------
|
||||
AtlasOS v1 is accepted when:
|
||||
- ZFS pool, dataset, share, and LUN lifecycle works end-to-end
|
||||
- Snapshot policies are active and observable
|
||||
- RBAC and audit logging are enforced
|
||||
- GUI, TUI, and metrics endpoints are functional
|
||||
- No manual configuration file edits are required
|
||||
|
||||
==================================================
|
||||
END OF DOCUMENT
|
||||
71
README.md
@@ -1,13 +1,80 @@
|
||||
# atlasOS
|
||||
# AtlasOS
|
||||
|
||||
atlasOS is an appliance-style storage controller build by Adastra
|
||||
AtlasOS is an appliance-style storage controller built by Adastra
|
||||
|
||||
**v1 Focus**
|
||||
- ZFS storage engine
|
||||
- SMB / NFS / iSCSI (ZVOL)
|
||||
- Virtual Tape Library (VTL) with mhvtl
|
||||
- Auto snapshots (sanoid)
|
||||
- RBAC + audit
|
||||
- TUI (Bubble Tea) + Web GUI (HTMX)
|
||||
- Prometheus metrics
|
||||
|
||||
> This repository contains the management plane and appliance tooling.
|
||||
|
||||
## Quick Installation
|
||||
|
||||
### Standard Installation (with internet)
|
||||
```bash
|
||||
sudo ./installer/install.sh
|
||||
```
|
||||
|
||||
### Airgap Installation (offline)
|
||||
```bash
|
||||
# Step 1: Download bundle (on internet-connected system)
|
||||
sudo ./installer/bundle-downloader.sh ./atlas-bundle
|
||||
|
||||
# Step 2: Transfer bundle to airgap system
|
||||
# Step 3: Install on airgap system
|
||||
sudo ./installer/install.sh --offline-bundle /path/to/atlas-bundle
|
||||
```
|
||||
|
||||
See `installer/README.md` and `docs/INSTALLATION.md` for detailed instructions.
|
||||
|
||||
## Features
|
||||
|
||||
### Storage Management
|
||||
- **ZFS**: Pool, dataset, and ZVOL management with health monitoring
|
||||
- **SMB/CIFS**: Windows file sharing with permission management
|
||||
- **NFS**: Network file sharing with client access control
|
||||
- **iSCSI**: Block storage with target and LUN management
|
||||
|
||||
### Virtual Tape Library (VTL)
|
||||
- **Media Changers**: Create and manage virtual tape libraries
|
||||
- **Tape Drives**: Configure virtual drives (LTO-5 through LTO-8)
|
||||
- **Tape Cartridges**: Create and manage virtual tapes
|
||||
- **Tape Operations**: Load, eject, and manage tape media
|
||||
- **Multi-Vendor Support**: IBM, HP, Quantum, Tandberg, Overland
|
||||
- **Automatic Service Management**: Auto-restart mhvtl after configuration changes
|
||||
|
||||
### Security & Access Control
|
||||
- **RBAC**: Role-based access control (Administrator, Operator, Viewer)
|
||||
- **Audit Logging**: Immutable audit trail for all operations
|
||||
- **Authentication**: JWT-based authentication
|
||||
|
||||
### Monitoring
|
||||
- **Prometheus Metrics**: System and storage metrics
|
||||
- **Health Monitoring**: Pool health and capacity tracking
|
||||
- **Job Management**: Track long-running operations
|
||||
|
||||
## Installation Directory
|
||||
|
||||
Atlas is installed to `/opt/atlas` by default. The installer script will:
|
||||
1. Install all required dependencies (ZFS, SMB, NFS, iSCSI, mhvtl)
|
||||
2. Build Atlas binaries
|
||||
3. Set up systemd services
|
||||
4. Configure directories and permissions
|
||||
|
||||
## Pushing Changes to Repository
|
||||
|
||||
Use the provided script to commit and push changes:
|
||||
|
||||
```bash
|
||||
./scripts/push-to-repo.sh "Your commit message"
|
||||
```
|
||||
|
||||
Or skip version update:
|
||||
```bash
|
||||
./scripts/push-to-repo.sh "Your commit message" --skip-version
|
||||
```
|
||||
|
||||
7
atlas.code-workspace
Normal file
@@ -0,0 +1,7 @@
{
  "folders": [
    {
      "path": "."
    }
  ]
}

BIN
data/atlas.db
Normal file
Binary file not shown.
174
docs/AIRGAP_INSTALLATION.md
Normal file
@@ -0,0 +1,174 @@
|
||||
# Airgap Installation Guide for AtlasOS
|
||||
|
||||
## Overview
|
||||
|
||||
The AtlasOS installer supports airgap (offline) installation for data centers without internet access. All required packages and dependencies are bundled into a single directory that can be transferred to the airgap system.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Step 1: Download Bundle (On System with Internet)
|
||||
|
||||
On a system with internet access and Ubuntu 24.04:
|
||||
|
||||
```bash
|
||||
# Clone the repository
|
||||
git clone <repository-url>
|
||||
cd atlas
|
||||
|
||||
# Run bundle downloader (requires root)
|
||||
sudo ./installer/bundle-downloader.sh ./atlas-bundle
|
||||
```
|
||||
|
||||
This will create a directory `./atlas-bundle` containing:
|
||||
- All required .deb packages (~100-200 packages)
|
||||
- All dependencies
|
||||
- Go binary (fallback)
|
||||
- Manifest and README files
|
||||
|
||||
**Estimated bundle size:** 500MB - 1GB
|
||||
|
||||
### Step 2: Transfer Bundle to Airgap System
|
||||
|
||||
Transfer the entire bundle directory to your airgap system using:
|
||||
- USB drive
|
||||
- Internal network (if available)
|
||||
- Physical media
|
||||
|
||||
```bash
|
||||
# Example: Copy to USB drive
|
||||
cp -r ./atlas-bundle /media/usb/
|
||||
|
||||
# On airgap system: Copy from USB
|
||||
cp -r /media/usb/atlas-bundle /tmp/
|
||||
```
|
||||
|
||||
### Step 3: Install on Airgap System
|
||||
|
||||
On the airgap system (Ubuntu 24.04):
|
||||
|
||||
```bash
|
||||
# Navigate to bundle directory
|
||||
cd /tmp/atlas-bundle
|
||||
|
||||
# Run installer with offline bundle
|
||||
cd /path/to/atlas
|
||||
sudo ./installer/install.sh --offline-bundle /tmp/atlas-bundle
|
||||
```
|
||||
|
||||
## Bundle Contents
|
||||
|
||||
The bundle includes:
|
||||
|
||||
### Main Packages
|
||||
- **Build Tools**: build-essential, git, curl, wget
|
||||
- **ZFS**: zfsutils-linux, zfs-zed, zfs-initramfs
|
||||
- **Storage Services**: samba, samba-common-bin, nfs-kernel-server, rpcbind
|
||||
- **iSCSI**: targetcli-fb
|
||||
- **Database**: sqlite3, libsqlite3-dev
|
||||
- **Go Compiler**: golang-go
|
||||
- **Utilities**: openssl, net-tools, iproute2
|
||||
|
||||
### Dependencies
|
||||
All transitive dependencies are automatically included.
|
||||
|
||||
## Verification
|
||||
|
||||
Before transferring, verify the bundle:
|
||||
|
||||
```bash
|
||||
# Count .deb files (should be 100-200)
|
||||
find ./atlas-bundle -name "*.deb" | wc -l
|
||||
|
||||
# Check manifest
|
||||
cat ./atlas-bundle/MANIFEST.txt
|
||||
|
||||
# Check total size
|
||||
du -sh ./atlas-bundle
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Missing Dependencies
|
||||
|
||||
If installation fails with dependency errors:
|
||||
|
||||
1. Ensure all .deb files are present in bundle
|
||||
2. Check that bundle was created on Ubuntu 24.04
|
||||
3. Verify system architecture matches (amd64/arm64)
|
||||
|
||||
### Go Installation Issues
|
||||
|
||||
If Go is not found after installation:
|
||||
|
||||
1. Check if `golang-go` package is installed: `dpkg -l | grep golang-go`
|
||||
2. If missing, the bundle includes `go.tar.gz` as fallback
|
||||
3. Installer will automatically extract it if needed
|
||||
|
||||
### Package Conflicts
|
||||
|
||||
If you encounter package conflicts:
|
||||
|
||||
```bash
|
||||
# Fix broken packages
|
||||
sudo apt-get install -f -y
|
||||
|
||||
# Or manually install specific packages
|
||||
sudo dpkg -i /path/to/bundle/*.deb
|
||||
sudo apt-get install -f -y
|
||||
```
|
||||
|
||||
## Bundle Maintenance
|
||||
|
||||
### Updating Bundle
|
||||
|
||||
To update the bundle with newer packages:
|
||||
|
||||
1. Run `./installer/bundle-downloader.sh` again on internet-connected system
|
||||
2. This will download latest versions
|
||||
3. Transfer new bundle to airgap system
|
||||
|
||||
### Bundle Size Optimization
|
||||
|
||||
To reduce bundle size (optional):
|
||||
|
||||
```bash
|
||||
# Remove unnecessary packages (be careful!)
|
||||
# Only remove if you're certain they're not needed
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
- Verify bundle integrity before transferring
|
||||
- Use secure transfer methods (encrypted USB, secure network)
|
||||
- Keep bundle in secure location on airgap system
|
||||
- Verify package signatures if possible
|
||||
|
||||
## Advanced Usage
|
||||
|
||||
### Custom Bundle Location
|
||||
|
||||
```bash
|
||||
# Download to custom location
|
||||
sudo ./installer/bundle-downloader.sh /opt/atlas-bundles/ubuntu24.04
|
||||
|
||||
# Install from custom location
|
||||
sudo ./installer/install.sh --offline-bundle /opt/atlas-bundles/ubuntu24.04
|
||||
```
|
||||
|
||||
### Partial Bundle (if some packages already installed)
|
||||
|
||||
If some packages are already installed on airgap system:
|
||||
|
||||
```bash
|
||||
# Installer will skip already-installed packages
|
||||
# Missing packages will be installed from bundle
|
||||
sudo ./installer/install.sh --offline-bundle /path/to/bundle
|
||||
```
|
||||
|
||||
## Support
|
||||
|
||||
For issues with airgap installation:
|
||||
1. Check installation logs
|
||||
2. Verify bundle completeness
|
||||
3. Ensure Ubuntu 24.04 compatibility
|
||||
4. Review MANIFEST.txt for package list
|
||||
278
docs/API_SECURITY.md
Normal file
@@ -0,0 +1,278 @@
|
||||
# API Security & Rate Limiting
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS implements comprehensive API security measures including rate limiting, security headers, CORS protection, and request validation to protect the API from abuse and attacks.
|
||||
|
||||
## Rate Limiting
|
||||
|
||||
### Token Bucket Algorithm
|
||||
|
||||
The rate limiter uses a token bucket algorithm:
|
||||
- **Default Rate**: 100 requests per minute per client
|
||||
- **Window**: 60 seconds
|
||||
- **Token Refill**: Tokens are refilled based on elapsed time
|
||||
- **Per-Client**: Rate limiting is applied per IP address or user ID
|
||||
|
||||
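The sketch below illustrates a token-bucket limiter of the kind described above. It is a minimal example, not the AtlasOS implementation: the `RateLimiter` type, its fields, and the refill arithmetic are assumptions for illustration only.

```go
package ratelimit

import (
	"sync"
	"time"
)

// bucket tracks the remaining tokens for one client key.
type bucket struct {
	tokens float64
	last   time.Time
}

// RateLimiter allows `limit` requests per `window` for each client key.
type RateLimiter struct {
	mu      sync.Mutex
	buckets map[string]*bucket
	limit   float64
	window  time.Duration
}

func NewRateLimiter(limit int, window time.Duration) *RateLimiter {
	return &RateLimiter{
		buckets: make(map[string]*bucket),
		limit:   float64(limit),
		window:  window,
	}
}

// Allow reports whether the client identified by key (e.g. "user:42" or
// "ip:10.0.0.1") may make another request right now.
func (r *RateLimiter) Allow(key string) bool {
	r.mu.Lock()
	defer r.mu.Unlock()

	now := time.Now()
	b, ok := r.buckets[key]
	if !ok {
		b = &bucket{tokens: r.limit, last: now}
		r.buckets[key] = b
	}

	// Refill tokens in proportion to the time elapsed since the last call.
	refill := now.Sub(b.last).Seconds() / r.window.Seconds() * r.limit
	b.tokens = min(b.tokens+refill, r.limit)
	b.last = now

	if b.tokens < 1 {
		return false // caller should respond with 429
	}
	b.tokens--
	return true
}
```

A handler wrapper would call `Allow(key)` per request and return `429 Too Many Requests` when it reports false.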
### Rate Limit Headers
|
||||
|
||||
All responses include rate limit headers:
|
||||
|
||||
```
|
||||
X-RateLimit-Limit: 100
|
||||
X-RateLimit-Window: 60
|
||||
```
|
||||
|
||||
### Rate Limit Exceeded
|
||||
|
||||
When rate limit is exceeded, the API returns:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "SERVICE_UNAVAILABLE",
|
||||
"message": "rate limit exceeded",
|
||||
"details": "too many requests, please try again later"
|
||||
}
|
||||
```
|
||||
|
||||
**HTTP Status**: `429 Too Many Requests`
|
||||
|
||||
### Client Identification
|
||||
|
||||
Rate limiting uses different keys based on authentication:
|
||||
|
||||
- **Authenticated Users**: `user:{user_id}` - More granular per-user limiting
|
||||
- **Unauthenticated**: `ip:{ip_address}` - IP-based limiting
|
||||
|
||||
### Public Endpoints
|
||||
|
||||
Public endpoints (login, health checks) are excluded from rate limiting to ensure availability.
|
||||
|
||||
## Security Headers
|
||||
|
||||
All responses include security headers:
|
||||
|
||||
### X-Content-Type-Options
|
||||
- **Value**: `nosniff`
|
||||
- **Purpose**: Prevents MIME type sniffing
|
||||
|
||||
### X-Frame-Options
|
||||
- **Value**: `DENY`
|
||||
- **Purpose**: Prevents clickjacking attacks
|
||||
|
||||
### X-XSS-Protection
|
||||
- **Value**: `1; mode=block`
|
||||
- **Purpose**: Enables XSS filtering in browsers
|
||||
|
||||
### Referrer-Policy
|
||||
- **Value**: `strict-origin-when-cross-origin`
|
||||
- **Purpose**: Controls referrer information
|
||||
|
||||
### Permissions-Policy
|
||||
- **Value**: `geolocation=(), microphone=(), camera=()`
|
||||
- **Purpose**: Disables unnecessary browser features
|
||||
|
||||
### Strict-Transport-Security (HSTS)
|
||||
- **Value**: `max-age=31536000; includeSubDomains`
|
||||
- **Purpose**: Forces HTTPS connections (only on HTTPS)
|
||||
- **Note**: Only added when request is over TLS
|
||||
|
||||
### Content-Security-Policy (CSP)
|
||||
- **Value**: `default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.jsdelivr.net; style-src 'self' 'unsafe-inline' https://cdn.jsdelivr.net; img-src 'self' data:; font-src 'self' https://cdn.jsdelivr.net; connect-src 'self';`
|
||||
- **Purpose**: Restricts resource loading to prevent XSS
|
||||
|
||||
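As a rough sketch (not the project's actual middleware), the headers listed above could be attached in a single `net/http` wrapper like this; per the note above, HSTS is only added when the request arrived over TLS.

```go
package middleware

import "net/http"

// SecurityHeaders attaches the response headers described above.
func SecurityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		h := w.Header()
		h.Set("X-Content-Type-Options", "nosniff")
		h.Set("X-Frame-Options", "DENY")
		h.Set("X-XSS-Protection", "1; mode=block")
		h.Set("Referrer-Policy", "strict-origin-when-cross-origin")
		h.Set("Permissions-Policy", "geolocation=(), microphone=(), camera=()")
		// HSTS only makes sense when the connection is already TLS.
		if r.TLS != nil {
			h.Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
		}
		next.ServeHTTP(w, r)
	})
}
```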
## CORS (Cross-Origin Resource Sharing)
|
||||
|
||||
### Allowed Origins
|
||||
|
||||
By default, the following origins are allowed:
|
||||
|
||||
- `http://localhost:8080`
|
||||
- `http://localhost:3000`
|
||||
- `http://127.0.0.1:8080`
|
||||
- Same-origin requests (no Origin header)
|
||||
|
||||
### CORS Headers
|
||||
|
||||
When a request comes from an allowed origin:
|
||||
|
||||
```
|
||||
Access-Control-Allow-Origin: http://localhost:8080
|
||||
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, PATCH, OPTIONS
|
||||
Access-Control-Allow-Headers: Content-Type, Authorization, X-Requested-With
|
||||
Access-Control-Allow-Credentials: true
|
||||
Access-Control-Max-Age: 3600
|
||||
```
|
||||
|
||||
### Preflight Requests
|
||||
|
||||
OPTIONS requests are handled automatically:
|
||||
|
||||
- **Status**: `204 No Content`
|
||||
- **Headers**: All CORS headers included
|
||||
- **Purpose**: Browser preflight checks
|
||||
|
||||
## Request Size Limits
|
||||
|
||||
### Maximum Request Body Size
|
||||
|
||||
- **Limit**: 10 MB (10,485,760 bytes)
|
||||
- **Enforcement**: Automatic via `http.MaxBytesReader`
|
||||
- **Error**: Returns `413 Request Entity Too Large` if exceeded
|
||||
|
||||
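A minimal sketch of how such a limit can be enforced with `http.MaxBytesReader` is shown below; the wrapper name and constant are illustrative, not the project's actual code.

```go
package middleware

import "net/http"

const maxBodyBytes = 10 << 20 // 10 MB

// LimitBody caps the readable request body; reads beyond the limit fail,
// and handlers can translate that failure into 413 Request Entity Too Large.
func LimitBody(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, maxBodyBytes)
		next.ServeHTTP(w, r)
	})
}
```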
### Content-Type Validation
|
||||
|
||||
POST, PUT, and PATCH requests must include a valid `Content-Type` header:
|
||||
|
||||
**Allowed Types:**
|
||||
- `application/json`
|
||||
- `application/x-www-form-urlencoded`
|
||||
- `multipart/form-data`
|
||||
|
||||
**Error Response:**
|
||||
```json
|
||||
{
|
||||
"code": "BAD_REQUEST",
|
||||
"message": "Content-Type must be application/json"
|
||||
}
|
||||
```
|
||||
|
||||
## Middleware Chain Order
|
||||
|
||||
Security middleware is applied in the following order (outer to inner):
|
||||
|
||||
1. **CORS** - Handles preflight requests
|
||||
2. **Security Headers** - Adds security headers
|
||||
3. **Request Size Limit** - Enforces 10MB limit
|
||||
4. **Content-Type Validation** - Validates request content type
|
||||
5. **Rate Limiting** - Enforces rate limits
|
||||
6. **Error Recovery** - Catches panics
|
||||
7. **Request ID** - Generates request IDs
|
||||
8. **Logging** - Logs requests
|
||||
9. **Audit** - Records audit logs
|
||||
10. **Authentication** - Validates JWT tokens
|
||||
11. **Routes** - Handles requests
|
||||
|
||||
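The ordering above amounts to nested wrappers around the router, outermost first. A small, generic helper along these lines can express that chain; the helper and the middleware names in the usage comment are illustrative, not the actual `app.go` wiring.

```go
package middleware

import "net/http"

// Middleware wraps one handler in another.
type Middleware func(http.Handler) http.Handler

// Chain applies middlewares so that the first argument becomes the
// outermost layer, matching the ordering listed above.
func Chain(h http.Handler, outerFirst ...Middleware) http.Handler {
	for i := len(outerFirst) - 1; i >= 0; i-- {
		h = outerFirst[i](h)
	}
	return h
}

// Example (hypothetical middleware names, outer to inner):
//   handler := Chain(routes,
//       cors, securityHeaders, sizeLimit, contentType, rateLimit,
//       recovery, requestID, logging, audit, auth)
```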
## Public Endpoints
|
||||
|
||||
The following endpoints are excluded from certain security checks:
|
||||
|
||||
- `/api/v1/auth/login` - Rate limiting, Content-Type validation
|
||||
- `/api/v1/auth/logout` - Rate limiting, Content-Type validation
|
||||
- `/healthz` - Rate limiting, Content-Type validation
|
||||
- `/metrics` - Rate limiting, Content-Type validation
|
||||
- `/api/docs` - Rate limiting, Content-Type validation
|
||||
- `/api/openapi.yaml` - Rate limiting, Content-Type validation
|
||||
|
||||
## Best Practices
|
||||
|
||||
### For API Consumers
|
||||
|
||||
1. **Respect Rate Limits**: Implement exponential backoff when rate limited
|
||||
2. **Use Authentication**: Authenticated users get better rate limits
|
||||
3. **Include Content-Type**: Always include `Content-Type: application/json`
|
||||
4. **Handle Errors**: Check for `429` status and retry after delay
|
||||
5. **Request Size**: Keep request bodies under 10MB
|
||||
|
||||
### For Administrators
|
||||
|
||||
1. **Monitor Rate Limits**: Check logs for rate limit violations
|
||||
2. **Adjust Limits**: Modify rate limit values in code if needed
|
||||
3. **CORS Configuration**: Update allowed origins for production
|
||||
4. **HTTPS**: Always use HTTPS in production for HSTS
|
||||
5. **Security Headers**: Review CSP policy for your use case
|
||||
|
||||
## Configuration
|
||||
|
||||
### Rate Limiting
|
||||
|
||||
Rate limits are currently hardcoded but can be configured:
|
||||
|
||||
```go
|
||||
// In rate_limit.go
|
||||
rateLimiter := NewRateLimiter(100, time.Minute) // 100 req/min
|
||||
```
|
||||
|
||||
### CORS Origins
|
||||
|
||||
Update allowed origins in `security_middleware.go`:
|
||||
|
||||
```go
|
||||
allowedOrigins := []string{
|
||||
"https://yourdomain.com",
|
||||
"https://app.yourdomain.com",
|
||||
}
|
||||
```
|
||||
|
||||
### Request Size Limit
|
||||
|
||||
Modify in `app.go`:
|
||||
|
||||
```go
|
||||
a.requestSizeMiddleware(10*1024*1024) // 10MB
|
||||
```
|
||||
|
||||
## Error Responses
|
||||
|
||||
### Rate Limit Exceeded
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "SERVICE_UNAVAILABLE",
|
||||
"message": "rate limit exceeded",
|
||||
"details": "too many requests, please try again later"
|
||||
}
|
||||
```
|
||||
|
||||
**Status**: `429 Too Many Requests`
|
||||
|
||||
### Request Too Large
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "BAD_REQUEST",
|
||||
"message": "request body too large"
|
||||
}
|
||||
```
|
||||
|
||||
**Status**: `413 Request Entity Too Large`
|
||||
|
||||
### Invalid Content-Type
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "BAD_REQUEST",
|
||||
"message": "Content-Type must be application/json"
|
||||
}
|
||||
```
|
||||
|
||||
**Status**: `400 Bad Request`
|
||||
|
||||
## Monitoring
|
||||
|
||||
### Rate Limit Metrics
|
||||
|
||||
Monitor rate limit violations:
|
||||
|
||||
- Check audit logs for rate limit events
|
||||
- Monitor `429` status codes in access logs
|
||||
- Track rate limit headers in responses
|
||||
|
||||
### Security Events
|
||||
|
||||
Monitor for security-related events:
|
||||
|
||||
- Invalid Content-Type headers
|
||||
- Request size violations
|
||||
- CORS violations (check server logs)
|
||||
- Authentication failures
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Configurable Rate Limits**: Environment variable configuration
|
||||
2. **Per-Endpoint Limits**: Different limits for different endpoints
|
||||
3. **IP Whitelisting**: Bypass rate limits for trusted IPs
|
||||
4. **Rate Limit Metrics**: Prometheus metrics for rate limiting
|
||||
5. **Distributed Rate Limiting**: Redis-based for multi-instance deployments
|
||||
6. **Advanced CORS**: Configurable CORS via environment variables
|
||||
7. **Request Timeout**: Configurable request timeout limits
|
||||
125
docs/BACKGROUND_JOBS.md
Normal file
@@ -0,0 +1,125 @@
|
||||
# Background Job System
|
||||
|
||||
The AtlasOS API includes a background job system that automatically executes snapshot policies and manages long-running operations.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Components
|
||||
|
||||
1. **Job Manager** (`internal/job/manager.go`)
|
||||
- Tracks job lifecycle (pending, running, completed, failed, cancelled)
|
||||
- Stores job metadata and progress
|
||||
- Thread-safe job operations
|
||||
|
||||
2. **Snapshot Scheduler** (`internal/snapshot/scheduler.go`)
|
||||
- Automatically creates snapshots based on policies
|
||||
- Prunes old snapshots based on retention rules
|
||||
- Runs every 15 minutes by default
|
||||
|
||||
3. **Integration**
|
||||
- Scheduler starts automatically when API server starts
|
||||
- Gracefully stops on server shutdown
|
||||
- Jobs are accessible via API endpoints
|
||||
|
||||
## How It Works
|
||||
|
||||
### Snapshot Creation
|
||||
|
||||
The scheduler checks all enabled snapshot policies every 15 minutes and:
|
||||
|
||||
1. **Frequent snapshots**: Creates every 15 minutes if `frequent > 0`
|
||||
2. **Hourly snapshots**: Creates every hour if `hourly > 0`
|
||||
3. **Daily snapshots**: Creates daily if `daily > 0`
|
||||
4. **Weekly snapshots**: Creates weekly if `weekly > 0`
|
||||
5. **Monthly snapshots**: Creates monthly if `monthly > 0`
|
||||
6. **Yearly snapshots**: Creates yearly if `yearly > 0`
|
||||
|
||||
Snapshot names follow the pattern: `{type}-{timestamp}` (e.g., `hourly-20241214-143000`)
|
||||
|
||||
### Snapshot Pruning
|
||||
|
||||
When `autoprune` is enabled, the scheduler:
|
||||
|
||||
1. Groups snapshots by type (frequent, hourly, daily, etc.)
|
||||
2. Sorts by creation time (newest first)
|
||||
3. Keeps only the number specified in the policy
|
||||
4. Deletes older snapshots that exceed the retention count
|
||||
|
||||
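A simplified sketch of that pruning rule (group by type, sort newest first, keep the configured count) is shown below. The `Snapshot` type and field names are illustrative, not the scheduler's real types.

```go
package snapshot

import (
	"sort"
	"time"
)

// Snapshot is an illustrative stand-in for a ZFS snapshot record.
type Snapshot struct {
	Name    string
	Type    string // "frequent", "hourly", "daily", ...
	Created time.Time
}

// toPrune returns the snapshots that exceed the per-type retention counts,
// keeping the newest retention[type] snapshots of each group.
func toPrune(snaps []Snapshot, retention map[string]int) []Snapshot {
	byType := make(map[string][]Snapshot)
	for _, s := range snaps {
		byType[s.Type] = append(byType[s.Type], s)
	}

	var prune []Snapshot
	for typ, group := range byType {
		// Newest first, so everything past the retention count is surplus.
		sort.Slice(group, func(i, j int) bool {
			return group[i].Created.After(group[j].Created)
		})
		if keep := retention[typ]; len(group) > keep {
			prune = append(prune, group[keep:]...)
		}
	}
	return prune
}
```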
### Job Tracking
|
||||
|
||||
Every snapshot operation creates a job that tracks:
|
||||
- Status (pending → running → completed/failed)
|
||||
- Progress (0-100%)
|
||||
- Error messages (if failed)
|
||||
- Timestamps (created, started, completed)
|
||||
|
||||
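A job record of this kind might look roughly like the following; the field and package names are assumptions for illustration, not the actual `internal/job` types.

```go
package job

import "time"

// Status values mirror the lifecycle listed under "Job Statuses" below.
type Status string

const (
	StatusPending   Status = "pending"
	StatusRunning   Status = "running"
	StatusCompleted Status = "completed"
	StatusFailed    Status = "failed"
	StatusCancelled Status = "cancelled"
)

// Job is an illustrative shape for a tracked operation.
type Job struct {
	ID          string     `json:"id"`
	Status      Status     `json:"status"`
	Progress    int        `json:"progress"` // 0-100
	Error       string     `json:"error,omitempty"`
	CreatedAt   time.Time  `json:"created_at"`
	StartedAt   *time.Time `json:"started_at,omitempty"`
	CompletedAt *time.Time `json:"completed_at,omitempty"`
}
```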
## API Endpoints
|
||||
|
||||
### List Jobs
|
||||
```bash
|
||||
GET /api/v1/jobs
|
||||
GET /api/v1/jobs?status=running
|
||||
```
|
||||
|
||||
### Get Job
|
||||
```bash
|
||||
GET /api/v1/jobs/{id}
|
||||
```
|
||||
|
||||
### Cancel Job
|
||||
```bash
|
||||
POST /api/v1/jobs/{id}/cancel
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
The scheduler interval is hardcoded to 15 minutes. To change it, modify:
|
||||
|
||||
```go
|
||||
// In internal/httpapp/app.go
|
||||
scheduler.Start(15 * time.Minute) // Change interval here
|
||||
```
|
||||
|
||||
## Example Workflow
|
||||
|
||||
1. **Create a snapshot policy:**
|
||||
```bash
|
||||
curl -X POST http://localhost:8080/api/v1/snapshot-policies \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"dataset": "pool/dataset",
|
||||
"hourly": 24,
|
||||
"daily": 7,
|
||||
"autosnap": true,
|
||||
"autoprune": true
|
||||
}'
|
||||
```
|
||||
|
||||
2. **Scheduler automatically:**
|
||||
- Creates hourly snapshots (keeps 24)
|
||||
- Creates daily snapshots (keeps 7)
|
||||
- Prunes old snapshots beyond retention
|
||||
|
||||
3. **Monitor jobs:**
|
||||
```bash
|
||||
curl http://localhost:8080/api/v1/jobs
|
||||
```
|
||||
|
||||
## Job Statuses
|
||||
|
||||
- `pending`: Job created but not started
|
||||
- `running`: Job is currently executing
|
||||
- `completed`: Job finished successfully
|
||||
- `failed`: Job encountered an error
|
||||
- `cancelled`: Job was cancelled by user
|
||||
|
||||
## Notes
|
||||
|
||||
- Jobs are stored in-memory (will be lost on restart)
|
||||
- Scheduler runs in a background goroutine
|
||||
- Snapshot operations are synchronous (blocking)
|
||||
- For production, consider:
|
||||
- Database persistence for jobs
|
||||
- Async job execution with worker pool
|
||||
- Job history retention policies
|
||||
- Metrics/alerting for failed jobs
|
||||
307
docs/BACKUP_RESTORE.md
Normal file
@@ -0,0 +1,307 @@
|
||||
# Configuration Backup & Restore
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS provides comprehensive configuration backup and restore functionality, allowing you to save and restore all system configurations including users, storage services (SMB/NFS/iSCSI), and snapshot policies.
|
||||
|
||||
## Features
|
||||
|
||||
- **Full Configuration Backup**: Backs up all system configurations
|
||||
- **Compressed Archives**: Backups are stored as gzipped tar archives
|
||||
- **Metadata Tracking**: Each backup includes metadata (ID, timestamp, description, size)
|
||||
- **Verification**: Verify backup integrity before restore
|
||||
- **Dry Run**: Test restore operations without making changes
|
||||
- **Selective Restore**: Restore specific components or full system
|
||||
|
||||
## Configuration
|
||||
|
||||
Set the backup directory using the `ATLAS_BACKUP_DIR` environment variable:
|
||||
|
||||
```bash
|
||||
export ATLAS_BACKUP_DIR=/var/lib/atlas/backups
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
If not set, defaults to `data/backups` in the current directory.
|
||||
|
||||
## Backup Contents
|
||||
|
||||
A backup includes:
|
||||
|
||||
- **Users**: All user accounts (passwords cannot be restored - users must reset)
|
||||
- **SMB Shares**: All SMB/CIFS share configurations
|
||||
- **NFS Exports**: All NFS export configurations
|
||||
- **iSCSI Targets**: All iSCSI targets and LUN mappings
|
||||
- **Snapshot Policies**: All automated snapshot policies
|
||||
- **System Config**: Database path and other system settings
|
||||
|
||||
## API Endpoints
|
||||
|
||||
### Create Backup
|
||||
|
||||
**POST** `/api/v1/backups`
|
||||
|
||||
Creates a new backup of all system configurations.
|
||||
|
||||
**Request Body:**
|
||||
```json
|
||||
{
|
||||
"description": "Backup before major changes"
|
||||
}
|
||||
```
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"id": "backup-1703123456",
|
||||
"created_at": "2024-12-20T10:30:56Z",
|
||||
"version": "1.0",
|
||||
"description": "Backup before major changes",
|
||||
"size": 24576
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
curl -X POST http://localhost:8080/api/v1/backups \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"description": "Weekly backup"}'
|
||||
```
|
||||
|
||||
### List Backups
|
||||
|
||||
**GET** `/api/v1/backups`
|
||||
|
||||
Lists all available backups.
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "backup-1703123456",
|
||||
"created_at": "2024-12-20T10:30:56Z",
|
||||
"version": "1.0",
|
||||
"description": "Weekly backup",
|
||||
"size": 24576
|
||||
},
|
||||
{
|
||||
"id": "backup-1703037056",
|
||||
"created_at": "2024-12-19T10:30:56Z",
|
||||
"version": "1.0",
|
||||
"description": "",
|
||||
"size": 18432
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
curl -X GET http://localhost:8080/api/v1/backups \
|
||||
-H "Authorization: Bearer <token>"
|
||||
```
|
||||
|
||||
### Get Backup Details
|
||||
|
||||
**GET** `/api/v1/backups/{id}`
|
||||
|
||||
Retrieves metadata for a specific backup.
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"id": "backup-1703123456",
|
||||
"created_at": "2024-12-20T10:30:56Z",
|
||||
"version": "1.0",
|
||||
"description": "Weekly backup",
|
||||
"size": 24576
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
curl -X GET http://localhost:8080/api/v1/backups/backup-1703123456 \
|
||||
-H "Authorization: Bearer <token>"
|
||||
```
|
||||
|
||||
### Verify Backup
|
||||
|
||||
**GET** `/api/v1/backups/{id}?verify=true`
|
||||
|
||||
Verifies that a backup file is valid and can be restored.
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"message": "backup is valid",
|
||||
"backup_id": "backup-1703123456",
|
||||
"metadata": {
|
||||
"id": "backup-1703123456",
|
||||
"created_at": "2024-12-20T10:30:56Z",
|
||||
"version": "1.0",
|
||||
"description": "Weekly backup",
|
||||
"size": 24576
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
curl -X GET "http://localhost:8080/api/v1/backups/backup-1703123456?verify=true" \
|
||||
-H "Authorization: Bearer <token>"
|
||||
```
|
||||
|
||||
### Restore Backup
|
||||
|
||||
**POST** `/api/v1/backups/{id}/restore`
|
||||
|
||||
Restores configuration from a backup.
|
||||
|
||||
**Request Body:**
|
||||
```json
|
||||
{
|
||||
"dry_run": false
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters:**
|
||||
- `dry_run` (optional): If `true`, shows what would be restored without making changes
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"message": "backup restored successfully",
|
||||
"backup_id": "backup-1703123456"
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# Dry run (test restore)
|
||||
curl -X POST http://localhost:8080/api/v1/backups/backup-1703123456/restore \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"dry_run": true}'
|
||||
|
||||
# Actual restore
|
||||
curl -X POST http://localhost:8080/api/v1/backups/backup-1703123456/restore \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"dry_run": false}'
|
||||
```
|
||||
|
||||
### Delete Backup
|
||||
|
||||
**DELETE** `/api/v1/backups/{id}`
|
||||
|
||||
Deletes a backup file and its metadata.
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"message": "backup deleted",
|
||||
"backup_id": "backup-1703123456"
|
||||
}
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
curl -X DELETE http://localhost:8080/api/v1/backups/backup-1703123456 \
|
||||
-H "Authorization: Bearer <token>"
|
||||
```
|
||||
|
||||
## Restore Process
|
||||
|
||||
When restoring a backup:
|
||||
|
||||
1. **Verification**: Backup is verified before restore
|
||||
2. **User Restoration**:
|
||||
- Users are restored with temporary passwords
|
||||
- Default admin user (user-1) is skipped
|
||||
- Users must reset their passwords after restore
|
||||
3. **Storage Services**:
|
||||
- SMB shares, NFS exports, and iSCSI targets are restored
|
||||
- Existing configurations are skipped (not overwritten)
|
||||
- Service configurations are automatically applied
|
||||
4. **Snapshot Policies**:
|
||||
- Policies are restored by dataset
|
||||
- Existing policies are skipped
|
||||
5. **Service Application**:
|
||||
- Samba, NFS, and iSCSI services are reconfigured
|
||||
- Errors are logged but don't fail the restore
|
||||
|
||||
## Backup File Format
|
||||
|
||||
Backups are stored as gzipped tar archives containing:
|
||||
|
||||
- `metadata.json`: Backup metadata (ID, timestamp, description, etc.)
|
||||
- `config.json`: All configuration data (users, shares, exports, targets, policies)
|
||||
|
||||
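The sketch below shows one way such an archive could be produced with the standard library's `archive/tar` and `compress/gzip`; it is a minimal illustration, not the backup service's actual code, and assumes the two JSON documents are already marshalled.

```go
package backup

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"time"
)

// writeArchive streams a gzipped tar containing the two entries described
// above. metadataJSON and configJSON are pre-marshalled JSON documents.
func writeArchive(w io.Writer, metadataJSON, configJSON []byte) error {
	gz := gzip.NewWriter(w)
	tw := tar.NewWriter(gz)

	entries := []struct {
		name string
		data []byte
	}{
		{"metadata.json", metadataJSON},
		{"config.json", configJSON},
	}
	for _, e := range entries {
		hdr := &tar.Header{
			Name:    e.name,
			Mode:    0o600,
			Size:    int64(len(e.data)),
			ModTime: time.Now(),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(e.data); err != nil {
			return err
		}
	}
	// Close the tar writer before the gzip writer so both footers are flushed.
	if err := tw.Close(); err != nil {
		return err
	}
	return gz.Close()
}
```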
## Best Practices
|
||||
|
||||
1. **Regular Backups**: Create backups before major configuration changes
|
||||
2. **Verify Before Restore**: Always verify backups before restoring
|
||||
3. **Test Restores**: Use dry run to test restore operations
|
||||
4. **Backup Retention**: Keep multiple backups for different time periods
|
||||
5. **Offsite Storage**: Copy backups to external storage for disaster recovery
|
||||
6. **Password Management**: Users must reset passwords after restore
|
||||
|
||||
## Limitations
|
||||
|
||||
- **Passwords**: User passwords cannot be restored (security feature)
|
||||
- **ZFS Data**: Backups only include configuration, not ZFS pool/dataset data
|
||||
- **Audit Logs**: Audit logs are not included in backups
|
||||
- **Jobs**: Background jobs are not included in backups
|
||||
|
||||
## Error Handling
|
||||
|
||||
- **Invalid Backup**: Verification fails if backup is corrupted
|
||||
- **Missing Resources**: Restore skips resources that already exist
|
||||
- **Service Errors**: Service configuration errors are logged but don't fail restore
|
||||
- **Partial Restore**: Restore continues even if some components fail
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Backup Storage**: Store backups in secure locations
|
||||
2. **Access Control**: Backup endpoints require authentication
|
||||
3. **Password Security**: Passwords are never included in backups
|
||||
4. **Encryption**: Consider encrypting backups for sensitive environments
|
||||
|
||||
## Example Workflow
|
||||
|
||||
```bash
|
||||
# 1. Create backup before changes
|
||||
BACKUP_ID=$(curl -X POST http://localhost:8080/api/v1/backups \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"description": "Before major changes"}' \
|
||||
| jq -r '.id')
|
||||
|
||||
# 2. Verify backup
|
||||
curl -X GET "http://localhost:8080/api/v1/backups/$BACKUP_ID?verify=true" \
|
||||
-H "Authorization: Bearer <token>"
|
||||
|
||||
# 3. Make configuration changes
|
||||
# ... make changes ...
|
||||
|
||||
# 4. Test restore (dry run)
|
||||
curl -X POST "http://localhost:8080/api/v1/backups/$BACKUP_ID/restore" \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"dry_run": true}'
|
||||
|
||||
# 5. Restore if needed
|
||||
curl -X POST "http://localhost:8080/api/v1/backups/$BACKUP_ID/restore" \
|
||||
-H "Authorization: Bearer <token>" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"dry_run": false}'
|
||||
```
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
- **Scheduled Backups**: Automatic backup scheduling
|
||||
- **Incremental Backups**: Only backup changes since last backup
|
||||
- **Backup Encryption**: Encrypt backup files
|
||||
- **Remote Storage**: Support for S3, FTP, etc.
|
||||
- **Backup Compression**: Additional compression options
|
||||
- **Selective Restore**: Restore specific components only
|
||||
84
docs/DATABASE.md
Normal file
@@ -0,0 +1,84 @@
|
||||
# Database Persistence
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS now supports SQLite-based database persistence for configuration and state management. The database layer is optional - if no database path is provided, the system operates in in-memory mode (data is lost on restart).
|
||||
|
||||
## Configuration
|
||||
|
||||
Set the `ATLAS_DB_PATH` environment variable to enable database persistence:
|
||||
|
||||
```bash
|
||||
export ATLAS_DB_PATH=/var/lib/atlas/atlas.db
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
If not set, the system defaults to `data/atlas.db` in the current directory.
|
||||
|
||||
## Database Schema
|
||||
|
||||
The database includes tables for:
|
||||
|
||||
- **users** - User accounts and authentication
|
||||
- **audit_logs** - Audit trail with indexes for efficient querying
|
||||
- **smb_shares** - SMB/CIFS share configurations
|
||||
- **nfs_exports** - NFS export configurations
|
||||
- **iscsi_targets** - iSCSI target configurations
|
||||
- **iscsi_luns** - iSCSI LUN mappings
|
||||
- **snapshot_policies** - Automated snapshot policies
|
||||
|
||||
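For orientation, opening the SQLite file and ensuring a table exists could look roughly like this. The driver import and the column layout are assumptions for illustration; the real schema covers all of the tables listed above and may use a different driver.

```go
package db

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3" // assumed SQLite driver; the project may use another
)

// Open connects to the SQLite file at path and ensures one example table
// exists. Columns here are illustrative only.
func Open(path string) (*sql.DB, error) {
	conn, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	const usersTable = `
	CREATE TABLE IF NOT EXISTS users (
		id         TEXT PRIMARY KEY,
		username   TEXT NOT NULL UNIQUE,
		role       TEXT NOT NULL,
		created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
	);`
	if _, err := conn.Exec(usersTable); err != nil {
		conn.Close()
		return nil, err
	}
	return conn, nil
}
```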
## Current Status
|
||||
|
||||
✅ **Database Infrastructure**: Complete
|
||||
- SQLite database connection and migration system
|
||||
- Schema definitions for all entities
|
||||
- Optional database mode (falls back to in-memory if not configured)
|
||||
|
||||
⏳ **Store Migration**: In Progress
|
||||
- Stores currently use in-memory implementations
|
||||
- Database-backed implementations can be added incrementally
|
||||
- Pattern established for migration
|
||||
|
||||
## Migration Pattern
|
||||
|
||||
To migrate a store to use the database:
|
||||
|
||||
1. Add database field to store struct
|
||||
2. Update `New*Store()` to accept `*db.DB` parameter
|
||||
3. Implement database queries in CRUD methods
|
||||
4. Update `app.go` to pass database to store constructor
|
||||
|
||||
Example pattern:
|
||||
|
||||
```go
|
||||
type UserStore struct {
|
||||
db *db.DB
|
||||
mu sync.RWMutex
|
||||
// ... other fields
|
||||
}
|
||||
|
||||
func NewUserStore(db *db.DB, auth *Service) *UserStore {
|
||||
// Initialize with database
|
||||
}
|
||||
|
||||
func (s *UserStore) Create(...) (*User, error) {
|
||||
// Use database instead of in-memory map
|
||||
_, err := s.db.Exec("INSERT INTO users ...")
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
- **Persistence**: Configuration survives restarts
|
||||
- **Audit Trail**: Historical audit logs preserved
|
||||
- **Scalability**: Can migrate to PostgreSQL/MySQL later
|
||||
- **Backup**: Simple file-based backup (SQLite database file)
|
||||
|
||||
## Next Steps
|
||||
|
||||
1. Migrate user store to database (highest priority for security)
|
||||
2. Migrate audit log store (for historical tracking)
|
||||
3. Migrate storage service stores (SMB/NFS/iSCSI)
|
||||
4. Migrate snapshot policy store
|
||||
5. Add database backup/restore utilities
|
||||
242
docs/ERROR_HANDLING.md
Normal file
@@ -0,0 +1,242 @@
|
||||
# Error Handling & Recovery
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS implements comprehensive error handling with structured error responses, graceful degradation, and automatic recovery mechanisms to ensure system reliability and good user experience.
|
||||
|
||||
## Error Types
|
||||
|
||||
### Structured API Errors
|
||||
|
||||
All API errors follow a consistent structure:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "NOT_FOUND",
|
||||
"message": "dataset not found",
|
||||
"details": "tank/missing"
|
||||
}
|
||||
```
|
||||
|
||||
### Error Codes
|
||||
|
||||
- `INTERNAL_ERROR` - Unexpected server errors (500)
|
||||
- `NOT_FOUND` - Resource not found (404)
|
||||
- `BAD_REQUEST` - Invalid request parameters (400)
|
||||
- `CONFLICT` - Resource conflict (409)
|
||||
- `UNAUTHORIZED` - Authentication required (401)
|
||||
- `FORBIDDEN` - Insufficient permissions (403)
|
||||
- `SERVICE_UNAVAILABLE` - Service temporarily unavailable (503)
|
||||
- `VALIDATION_ERROR` - Input validation failed (400)
|
||||
|
||||
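The response shape and codes above suggest an error type roughly like the sketch below. It is illustrative only; the actual `errors` package in the codebase may name and wire things differently.

```go
package errors

import (
	"encoding/json"
	"net/http"
)

// APIError is an illustrative shape matching the JSON structure above.
type APIError struct {
	Code    string `json:"code"`
	Message string `json:"message"`
	Details string `json:"details,omitempty"`
	status  int
}

func (e *APIError) Error() string { return e.Message }

// WithDetails returns a copy carrying extra context.
func (e *APIError) WithDetails(details string) *APIError {
	c := *e
	c.Details = details
	return &c
}

// ErrNotFound builds a 404-style error for a missing resource.
func ErrNotFound(resource string) *APIError {
	return &APIError{Code: "NOT_FOUND", Message: resource + " not found", status: http.StatusNotFound}
}

// Write sends the error body with its matching HTTP status code.
func (e *APIError) Write(w http.ResponseWriter) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(e.status)
	json.NewEncoder(w).Encode(e)
}
```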
## Error Handling Patterns
|
||||
|
||||
### 1. Structured Error Responses
|
||||
|
||||
All errors use the `errors.APIError` type for consistent formatting:
|
||||
|
||||
```go
|
||||
if resource == nil {
|
||||
writeError(w, errors.ErrNotFound("dataset").WithDetails(datasetName))
|
||||
return
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Graceful Degradation
|
||||
|
||||
Service operations (SMB/NFS/iSCSI) use graceful degradation:
|
||||
|
||||
- **Desired State Stored**: Configuration is always stored in the store
|
||||
- **Service Application**: Service configuration is applied asynchronously
|
||||
- **Non-Blocking**: Service failures don't fail API requests
|
||||
- **Retry Ready**: Failed operations can be retried later
|
||||
|
||||
Example:
|
||||
```go
|
||||
// Store the configuration (always succeeds)
|
||||
share, err := a.smbStore.Create(...)
|
||||
|
||||
// Apply to service (may fail, but doesn't block)
|
||||
if err := a.smbService.ApplyConfiguration(shares); err != nil {
|
||||
// Log but don't fail - desired state is stored
|
||||
log.Printf("SMB service configuration failed (non-fatal): %v", err)
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Panic Recovery
|
||||
|
||||
All HTTP handlers are wrapped with panic recovery middleware:
|
||||
|
||||
```go
|
||||
func (a *App) errorMiddleware(next http.Handler) http.Handler {
|
||||
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
defer recoverPanic(w, r)
|
||||
next.ServeHTTP(w, r)
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
Panics are caught and converted to proper error responses instead of crashing the server.
|
||||
|
||||
### 4. Atomic Operations with Rollback
|
||||
|
||||
Service configuration operations are atomic with automatic rollback:
|
||||
|
||||
1. **Write to temporary file** (`*.atlas.tmp`)
|
||||
2. **Backup existing config** (`.backup`)
|
||||
3. **Atomically replace** config file
|
||||
4. **Reload service**
|
||||
5. **On failure**: Automatically restore backup
|
||||
|
||||
Example (SMB):
|
||||
```go
|
||||
// Write to temp file
|
||||
os.WriteFile(tmpPath, config, 0644)
|
||||
|
||||
// Backup existing
|
||||
copyFile(configPath, backupPath) // copy config.conf aside as config.conf.backup (helper not shown)
|
||||
|
||||
// Atomic replace
|
||||
os.Rename(tmpPath, configPath)
|
||||
|
||||
// Reload service
|
||||
if err := reloadService(); err != nil {
|
||||
// Restore backup automatically
|
||||
os.Rename(backupPath, configPath)
|
||||
return err
|
||||
}
|
||||
```
|
||||
|
||||
## Retry Mechanisms
|
||||
|
||||
### Retry Configuration
|
||||
|
||||
The `errors.Retry` function provides configurable retry logic:
|
||||
|
||||
```go
|
||||
config := errors.DefaultRetryConfig() // 3 attempts with exponential backoff
|
||||
err := errors.Retry(func() error {
|
||||
return serviceOperation()
|
||||
}, config)
|
||||
```
|
||||
|
||||
### Default Retry Behavior
|
||||
|
||||
- **Max Attempts**: 3
|
||||
- **Backoff**: Exponential (100ms, 200ms, 400ms)
|
||||
- **Use Case**: Transient failures (network, temporary service unavailability)
|
||||
|
||||
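One possible implementation of that behavior (three attempts, doubling the delay starting at 100 ms) is sketched below; the real `errors.Retry` may be structured differently.

```go
package errors

import "time"

// RetryConfig mirrors the defaults described above.
type RetryConfig struct {
	MaxAttempts int
	BaseDelay   time.Duration
}

func DefaultRetryConfig() RetryConfig {
	return RetryConfig{MaxAttempts: 3, BaseDelay: 100 * time.Millisecond}
}

// Retry runs fn until it succeeds or the attempt budget is spent, sleeping
// with exponentially growing delays (100ms, 200ms, 400ms, ...) in between.
func Retry(fn func() error, cfg RetryConfig) error {
	var err error
	delay := cfg.BaseDelay
	for attempt := 0; attempt < cfg.MaxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt < cfg.MaxAttempts-1 {
			time.Sleep(delay)
			delay *= 2
		}
	}
	return err
}
```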
## Error Recovery
|
||||
|
||||
### Service Configuration Recovery
|
||||
|
||||
When service configuration fails:
|
||||
|
||||
1. **Configuration is stored** (desired state preserved)
|
||||
2. **Error is logged** (for debugging)
|
||||
3. **Operation continues** (API request succeeds)
|
||||
4. **Manual retry available** (via API or automatic retry later)
|
||||
|
||||
### Database Recovery
|
||||
|
||||
- **Connection failures**: Logged and retried
|
||||
- **Transaction failures**: Rolled back automatically
|
||||
- **Schema errors**: Detected during migration
|
||||
|
||||
### ZFS Operation Recovery
|
||||
|
||||
- **Command failures**: Returned as errors to caller
|
||||
- **Partial failures**: State is preserved, operation can be retried
|
||||
- **Validation**: Performed before destructive operations
|
||||
|
||||
## Error Logging
|
||||
|
||||
All errors are logged with context:
|
||||
|
||||
```go
|
||||
log.Printf("create SMB share error: %v", err)
|
||||
log.Printf("%s service error: %v", serviceName, err)
|
||||
```
|
||||
|
||||
Error logs include:
|
||||
- Error message
|
||||
- Operation context
|
||||
- Resource identifiers
|
||||
- Timestamp (via standard log)
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Always Use Structured Errors
|
||||
|
||||
```go
|
||||
// Good
|
||||
writeError(w, errors.ErrNotFound("pool").WithDetails(poolName))
|
||||
|
||||
// Avoid
|
||||
writeJSON(w, http.StatusNotFound, map[string]string{"error": "not found"})
|
||||
```
|
||||
|
||||
### 2. Handle Service Errors Gracefully
|
||||
|
||||
```go
|
||||
// Good - graceful degradation
|
||||
if err := service.Apply(); err != nil {
|
||||
log.Printf("service error (non-fatal): %v", err)
|
||||
// Continue - desired state is stored
|
||||
}
|
||||
|
||||
// Avoid - failing the request
|
||||
if err := service.Apply(); err != nil {
|
||||
return err // Don't fail the whole request
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Validate Before Operations
|
||||
|
||||
```go
|
||||
// Good - validate first
|
||||
if !datasetExists {
|
||||
writeError(w, errors.ErrNotFound("dataset"))
|
||||
return
|
||||
}
|
||||
// Then perform operation
|
||||
```
|
||||
|
||||
### 4. Use Context for Error Details
|
||||
|
||||
```go
|
||||
// Good - include context
|
||||
writeError(w, errors.ErrInternal("failed to create pool").WithDetails(err.Error()))
|
||||
|
||||
// Avoid - generic errors
|
||||
writeError(w, errors.ErrInternal("error"))
|
||||
```
|
||||
|
||||
## Error Response Format
|
||||
|
||||
All error responses follow this structure:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "ERROR_CODE",
|
||||
"message": "Human-readable error message",
|
||||
"details": "Additional context (optional)"
|
||||
}
|
||||
```
|
||||
|
||||
HTTP status codes match error types:
|
||||
- `400` - Bad Request / Validation Error
|
||||
- `401` - Unauthorized
|
||||
- `403` - Forbidden
|
||||
- `404` - Not Found
|
||||
- `409` - Conflict
|
||||
- `500` - Internal Error
|
||||
- `503` - Service Unavailable
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Error Tracking**: Centralized error tracking and alerting
|
||||
2. **Automatic Retry Queue**: Background retry for failed operations
|
||||
3. **Error Metrics**: Track error rates by type and endpoint
|
||||
4. **User-Friendly Messages**: More descriptive error messages
|
||||
5. **Error Correlation**: Link related errors for debugging
|
||||
297
docs/HTTPS_TLS.md
Normal file
@@ -0,0 +1,297 @@
|
||||
# HTTPS/TLS Support
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS supports HTTPS/TLS encryption for secure communication. TLS can be enabled via environment variables, and the system will automatically enforce HTTPS connections when TLS is enabled.
|
||||
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
TLS is configured via environment variables:
|
||||
|
||||
- **`ATLAS_TLS_CERT`**: Path to TLS certificate file (PEM format)
|
||||
- **`ATLAS_TLS_KEY`**: Path to TLS private key file (PEM format)
|
||||
- **`ATLAS_TLS_ENABLED`**: Force enable TLS (optional, auto-enabled if cert/key provided)
|
||||
|
||||
### Automatic Detection
|
||||
|
||||
TLS is automatically enabled if both `ATLAS_TLS_CERT` and `ATLAS_TLS_KEY` are set:
|
||||
|
||||
```bash
|
||||
export ATLAS_TLS_CERT=/etc/atlas/tls/cert.pem
|
||||
export ATLAS_TLS_KEY=/etc/atlas/tls/key.pem
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
### Explicit Enable
|
||||
|
||||
Force TLS even if cert/key are not set (will fail at startup if cert/key missing):
|
||||
|
||||
```bash
|
||||
export ATLAS_TLS_ENABLED=true
|
||||
export ATLAS_TLS_CERT=/etc/atlas/tls/cert.pem
|
||||
export ATLAS_TLS_KEY=/etc/atlas/tls/key.pem
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
## Certificate Requirements
|
||||
|
||||
### Certificate Format
|
||||
|
||||
- **Format**: PEM (Privacy-Enhanced Mail)
|
||||
- **Certificate**: X.509 certificate
|
||||
- **Key**: RSA or ECDSA private key
|
||||
- **Chain**: Certificate chain can be included in cert file
|
||||
|
||||
### Certificate Validation
|
||||
|
||||
At startup, the system validates:
|
||||
- Certificate file exists
|
||||
- Key file exists
|
||||
- Certificate and key are valid and match
|
||||
- Certificate is not expired (checked by Go's TLS library)
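
A minimal sketch of such a startup check using Go's standard library is shown below; the function name and exact error messages are assumptions, not the project's code:

```go
package tlsconfig

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"
)

// ValidateCertKeyPair performs the startup checks described above:
// both files exist, they parse as a matching pair, and the leaf
// certificate is currently within its validity window.
func ValidateCertKeyPair(certFile, keyFile string) error {
	for _, f := range []string{certFile, keyFile} {
		if _, err := os.Stat(f); err != nil {
			return fmt.Errorf("TLS file not found: %s: %w", f, err)
		}
	}

	// LoadX509KeyPair fails if the certificate and key do not match.
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return fmt.Errorf("load TLS certificate: %w", err)
	}

	// Parse the leaf certificate to check the validity period explicitly.
	leaf, err := x509.ParseCertificate(cert.Certificate[0])
	if err != nil {
		return fmt.Errorf("parse certificate: %w", err)
	}
	now := time.Now()
	if now.Before(leaf.NotBefore) || now.After(leaf.NotAfter) {
		return fmt.Errorf("certificate is expired or not yet valid (valid %s to %s)",
			leaf.NotBefore, leaf.NotAfter)
	}
	return nil
}
```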
|
||||
|
||||
## TLS Configuration
|
||||
|
||||
### Supported TLS Versions
|
||||
|
||||
- **Minimum**: TLS 1.2
|
||||
- **Maximum**: TLS 1.3
|
||||
|
||||
### Cipher Suites
|
||||
|
||||
The system uses secure cipher suites:
|
||||
|
||||
- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
|
||||
- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`
|
||||
- `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`
|
||||
- `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305`
|
||||
- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
|
||||
- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`
|
||||
|
||||
### Elliptic Curves
|
||||
|
||||
Preferred curves:
|
||||
- `CurveP256`
|
||||
- `CurveP384`
|
||||
- `CurveP521`
|
||||
- `X25519`
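
Taken together, these settings correspond to a `tls.Config` along the following lines; this is a sketch assembled from the lists above, not the project's actual configuration code:

```go
package tlsconfig

import "crypto/tls"

// newTLSConfig mirrors the TLS settings documented above: TLS 1.2–1.3,
// the listed ECDHE cipher suites, and the preferred curves.
// (Go ignores the CipherSuites list for TLS 1.3, which has its own fixed set.)
func newTLSConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS12,
		MaxVersion: tls.VersionTLS13,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		},
		CurvePreferences: []tls.CurveID{
			tls.CurveP256,
			tls.CurveP384,
			tls.CurveP521,
			tls.X25519,
		},
	}
}
```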
|
||||
|
||||
## HTTPS Enforcement
|
||||
|
||||
### Automatic Redirect
|
||||
|
||||
When TLS is enabled, HTTP requests are automatically redirected to HTTPS:
|
||||
|
||||
```
|
||||
HTTP Request → 301 Moved Permanently → HTTPS
|
||||
```
|
||||
|
||||
### Exceptions
|
||||
|
||||
HTTPS enforcement is skipped for:
|
||||
- **Health checks**: `/healthz`, `/health` (allows monitoring)
|
||||
- **Localhost**: Requests from `localhost`, `127.0.0.1`, `::1` (development)
|
||||
|
||||
### Reverse Proxy Support
|
||||
|
||||
The system respects `X-Forwarded-Proto` header for reverse proxy setups:
|
||||
|
||||
```
|
||||
X-Forwarded-Proto: https
|
||||
```
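
For illustration, the redirect, the health-check/localhost exceptions, and the `X-Forwarded-Proto` handling can be combined into one middleware roughly like this sketch; the names and structure are assumed, not taken from the source:

```go
package middleware

import (
	"net"
	"net/http"
	"strings"
)

// enforceHTTPS redirects plain-HTTP requests to HTTPS, honouring the
// exceptions and the X-Forwarded-Proto handling described above.
func enforceHTTPS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Already HTTPS, either directly or behind a reverse proxy.
		if r.TLS != nil || strings.EqualFold(r.Header.Get("X-Forwarded-Proto"), "https") {
			// HSTS header as described in the Security Headers section.
			w.Header().Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
			next.ServeHTTP(w, r)
			return
		}

		// Exceptions: health checks and localhost clients stay on HTTP.
		if r.URL.Path == "/healthz" || r.URL.Path == "/health" || isLocalhost(r.RemoteAddr) {
			next.ServeHTTP(w, r)
			return
		}

		target := "https://" + r.Host + r.URL.RequestURI()
		http.Redirect(w, r, target, http.StatusMovedPermanently) // 301
	})
}

func isLocalhost(remoteAddr string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		host = remoteAddr
	}
	return host == "localhost" || host == "127.0.0.1" || host == "::1"
}
```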
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Development (HTTP)
|
||||
|
||||
```bash
|
||||
# No TLS configuration - runs on HTTP
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
### Production (HTTPS)
|
||||
|
||||
```bash
|
||||
# Enable TLS
|
||||
export ATLAS_TLS_CERT=/etc/ssl/certs/atlas.crt
|
||||
export ATLAS_TLS_KEY=/etc/ssl/private/atlas.key
|
||||
export ATLAS_HTTP_ADDR=:8443
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
### Using Let's Encrypt
|
||||
|
||||
```bash
|
||||
# Let's Encrypt certificates
|
||||
export ATLAS_TLS_CERT=/etc/letsencrypt/live/atlas.example.com/fullchain.pem
|
||||
export ATLAS_TLS_KEY=/etc/letsencrypt/live/atlas.example.com/privkey.pem
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
### Self-Signed Certificate (Testing)
|
||||
|
||||
Generate a self-signed certificate:
|
||||
|
||||
```bash
|
||||
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
|
||||
```
|
||||
|
||||
Use it:
|
||||
|
||||
```bash
|
||||
export ATLAS_TLS_CERT=./cert.pem
|
||||
export ATLAS_TLS_KEY=./key.pem
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
## Security Headers
|
||||
|
||||
When TLS is enabled, additional security headers are set:
|
||||
|
||||
### HSTS (HTTP Strict Transport Security)
|
||||
|
||||
```
|
||||
Strict-Transport-Security: max-age=31536000; includeSubDomains
|
||||
```
|
||||
|
||||
- **Max Age**: 1 year (31536000 seconds)
|
||||
- **Include Subdomains**: Yes
|
||||
- **Purpose**: Forces browsers to use HTTPS
|
||||
|
||||
### Content Security Policy
|
||||
|
||||
CSP is configured to work with HTTPS:
|
||||
|
||||
```
|
||||
Content-Security-Policy: default-src 'self'; ...
|
||||
```
|
||||
|
||||
## Reverse Proxy Setup
|
||||
|
||||
### Nginx
|
||||
|
||||
```nginx
|
||||
server {
|
||||
listen 443 ssl;
|
||||
server_name atlas.example.com;
|
||||
|
||||
ssl_certificate /etc/ssl/certs/atlas.crt;
|
||||
ssl_certificate_key /etc/ssl/private/atlas.key;
|
||||
|
||||
location / {
|
||||
proxy_pass http://localhost:8080;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Apache
|
||||
|
||||
```apache
|
||||
<VirtualHost *:443>
|
||||
ServerName atlas.example.com
|
||||
|
||||
SSLEngine on
|
||||
SSLCertificateFile /etc/ssl/certs/atlas.crt
|
||||
SSLCertificateKeyFile /etc/ssl/private/atlas.key
|
||||
|
||||
ProxyPass / http://localhost:8080/
|
||||
ProxyPassReverse / http://localhost:8080/
|
||||
|
||||
RequestHeader set X-Forwarded-Proto "https"
|
||||
</VirtualHost>
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Certificate Not Found
|
||||
|
||||
```
|
||||
TLS configuration error: TLS certificate file not found: /path/to/cert.pem
|
||||
```
|
||||
|
||||
**Solution**: Verify certificate file path and permissions.
|
||||
|
||||
### Certificate/Key Mismatch
|
||||
|
||||
```
|
||||
TLS configuration error: load TLS certificate: tls: private key does not match public key
|
||||
```
|
||||
|
||||
**Solution**: Ensure certificate and key files match.
|
||||
|
||||
### Certificate Expired
|
||||
|
||||
```
|
||||
TLS handshake error: x509: certificate has expired or is not yet valid
|
||||
```
|
||||
|
||||
**Solution**: Renew certificate or use a valid certificate.
|
||||
|
||||
### Port Already in Use
|
||||
|
||||
```
|
||||
listen tcp :8443: bind: address already in use
|
||||
```
|
||||
|
||||
**Solution**: Change port or stop conflicting service.
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Use Valid Certificates
|
||||
|
||||
- **Production**: Use certificates from trusted CAs (Let's Encrypt, commercial CAs)
|
||||
- **Development**: Self-signed certificates are acceptable
|
||||
- **Testing**: Use test certificates with short expiration
|
||||
|
||||
### 2. Certificate Renewal
|
||||
|
||||
- **Monitor Expiration**: Set up alerts for certificate expiration
|
||||
- **Auto-Renewal**: Use tools like `certbot` for Let's Encrypt
|
||||
- **Reload After Renewal**: Restart the service after renewing a certificate so the new certificate is loaded
|
||||
|
||||
### 3. Key Security
|
||||
|
||||
- **Permissions**: Restrict key file permissions (`chmod 600`)
|
||||
- **Ownership**: Use dedicated user for key file
|
||||
- **Storage**: Store keys securely, never commit to version control
|
||||
|
||||
### 4. TLS Configuration
|
||||
|
||||
- **Minimum Version**: TLS 1.2 or higher
|
||||
- **Cipher Suites**: Use strong cipher suites only
|
||||
- **HSTS**: Enable HSTS for production
|
||||
|
||||
### 5. Reverse Proxy
|
||||
|
||||
- **Terminate TLS**: Terminate TLS at reverse proxy for better performance
|
||||
- **Forward Headers**: Forward `X-Forwarded-Proto` header
|
||||
- **Health Checks**: Allow HTTP for health checks
|
||||
|
||||
## Compliance
|
||||
|
||||
### SRS Requirement
|
||||
|
||||
Per SRS section 5.3 Security:
|
||||
- **HTTPS SHALL be enforced for the web UI** ✅
|
||||
|
||||
This implementation:
|
||||
- ✅ Supports TLS/HTTPS
|
||||
- ✅ Enforces HTTPS when TLS is enabled
|
||||
- ✅ Provides secure cipher suites
|
||||
- ✅ Includes HSTS headers
|
||||
- ✅ Validates certificates
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Certificate Auto-Renewal**: Automatic certificate renewal
|
||||
2. **OCSP Stapling**: Online Certificate Status Protocol stapling
|
||||
3. **Certificate Rotation**: Seamless certificate rotation
|
||||
4. **Future TLS Versions**: Support for TLS versions beyond 1.3 as they are standardized
|
||||
5. **Client Certificate Authentication**: Mutual TLS (mTLS)
|
||||
6. **Certificate Monitoring**: Certificate expiration monitoring
|
||||
499
docs/INSTALLATION.md
Normal file
@@ -0,0 +1,499 @@
|
||||
# AtlasOS Installation Guide
|
||||
|
||||
## Overview
|
||||
|
||||
This guide covers installing AtlasOS on a Linux system for testing and production use.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
### System Requirements
|
||||
|
||||
- **OS**: Linux (Ubuntu 20.04+, Debian 11+, Fedora 34+, RHEL 8+)
|
||||
- **Kernel**: Linux kernel with ZFS support
|
||||
- **RAM**: Minimum 2GB, recommended 4GB+
|
||||
- **Disk**: Minimum 10GB free space
|
||||
- **Network**: Network interface for iSCSI/SMB/NFS
|
||||
|
||||
### Required Software
|
||||
|
||||
- ZFS utilities (`zfsutils-linux` or `zfs`)
|
||||
- Samba (`samba`)
|
||||
- NFS server (`nfs-kernel-server` or `nfs-utils`)
|
||||
- iSCSI target (`targetcli`)
|
||||
- SQLite (`sqlite3`)
|
||||
- Go compiler (`golang-go` or `golang`) - for building from source
|
||||
- Build tools (`build-essential` or `gcc make`)
|
||||
|
||||
## Quick Installation
|
||||
|
||||
### Automated Installer
|
||||
|
||||
The easiest way to install AtlasOS is using the provided installer script:
|
||||
|
||||
```bash
|
||||
# Clone or download the repository
|
||||
cd /path/to/atlas
|
||||
|
||||
# Run installer (requires root)
|
||||
sudo ./installer/install.sh
|
||||
```
|
||||
|
||||
The installer will:
|
||||
1. Install all dependencies
|
||||
2. Create system user and directories
|
||||
3. Build binaries
|
||||
4. Create systemd service
|
||||
5. Set up configuration
|
||||
6. Start the service
|
||||
|
||||
### Installation Options
|
||||
|
||||
```bash
|
||||
# Custom installation directory
|
||||
sudo ./installer/install.sh --install-dir /opt/custom-atlas
|
||||
|
||||
# Custom data directory
|
||||
sudo ./installer/install.sh --data-dir /mnt/atlas-data
|
||||
|
||||
# Skip dependency installation (if already installed)
|
||||
sudo ./installer/install.sh --skip-deps
|
||||
|
||||
# Skip building binaries (use pre-built)
|
||||
sudo ./installer/install.sh --skip-build
|
||||
|
||||
# Custom HTTP address
|
||||
sudo ./installer/install.sh --http-addr :8443
|
||||
|
||||
# Show help
|
||||
sudo ./installer/install.sh --help
|
||||
```
|
||||
|
||||
## Manual Installation
|
||||
|
||||
### Step 1: Install Dependencies
|
||||
|
||||
#### Ubuntu/Debian
|
||||
|
||||
```bash
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y \
|
||||
zfsutils-linux \
|
||||
samba \
|
||||
nfs-kernel-server \
|
||||
targetcli-fb \
|
||||
sqlite3 \
|
||||
golang-go \
|
||||
git \
|
||||
build-essential
|
||||
```
|
||||
|
||||
**Note:** On newer Ubuntu/Debian versions, the iSCSI target CLI is packaged as `targetcli-fb`. If `targetcli-fb` is not available, try `targetcli`.
|
||||
|
||||
#### Fedora/RHEL/CentOS
|
||||
|
||||
```bash
|
||||
# Fedora
|
||||
sudo dnf install -y \
|
||||
zfs \
|
||||
samba \
|
||||
nfs-utils \
|
||||
targetcli \
|
||||
sqlite \
|
||||
golang \
|
||||
git \
|
||||
gcc \
|
||||
make
|
||||
|
||||
# RHEL/CentOS (with EPEL)
|
||||
sudo yum install -y epel-release
|
||||
sudo yum install -y \
|
||||
zfs \
|
||||
samba \
|
||||
nfs-utils \
|
||||
targetcli \
|
||||
sqlite \
|
||||
golang \
|
||||
git \
|
||||
gcc \
|
||||
make
|
||||
```
|
||||
|
||||
### Step 2: Load ZFS Module
|
||||
|
||||
```bash
|
||||
# Load ZFS kernel module
|
||||
sudo modprobe zfs
|
||||
|
||||
# Make it persistent
|
||||
echo "zfs" | sudo tee -a /etc/modules-load.d/zfs.conf
|
||||
```
|
||||
|
||||
### Step 3: Create System User
|
||||
|
||||
```bash
|
||||
sudo useradd -r -s /bin/false -d /var/lib/atlas atlas
|
||||
```
|
||||
|
||||
### Step 4: Create Directories
|
||||
|
||||
```bash
|
||||
sudo mkdir -p /opt/atlas/bin
|
||||
sudo mkdir -p /var/lib/atlas
|
||||
sudo mkdir -p /etc/atlas
|
||||
sudo mkdir -p /var/log/atlas
|
||||
sudo mkdir -p /var/lib/atlas/backups
|
||||
|
||||
sudo chown -R atlas:atlas /var/lib/atlas
|
||||
sudo chown -R atlas:atlas /var/log/atlas
|
||||
sudo chown -R atlas:atlas /etc/atlas
|
||||
```
|
||||
|
||||
### Step 5: Build Binaries
|
||||
|
||||
```bash
|
||||
cd /path/to/atlas
|
||||
go build -o /opt/atlas/bin/atlas-api ./cmd/atlas-api
|
||||
go build -o /opt/atlas/bin/atlas-tui ./cmd/atlas-tui
|
||||
|
||||
sudo chown root:root /opt/atlas/bin/atlas-api
|
||||
sudo chown root:root /opt/atlas/bin/atlas-tui
|
||||
sudo chmod 755 /opt/atlas/bin/atlas-api
|
||||
sudo chmod 755 /opt/atlas/bin/atlas-tui
|
||||
```
|
||||
|
||||
### Step 6: Create Systemd Service
|
||||
|
||||
Create `/etc/systemd/system/atlas-api.service`:
|
||||
|
||||
```ini
|
||||
[Unit]
|
||||
Description=AtlasOS Storage Controller API
|
||||
After=network.target zfs.target
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
User=atlas
|
||||
Group=atlas
|
||||
WorkingDirectory=/opt/atlas
|
||||
ExecStart=/opt/atlas/bin/atlas-api
|
||||
Restart=always
|
||||
RestartSec=10
|
||||
StandardOutput=journal
|
||||
StandardError=journal
|
||||
SyslogIdentifier=atlas-api
|
||||
|
||||
Environment="ATLAS_HTTP_ADDR=:8080"
|
||||
Environment="ATLAS_DB_PATH=/var/lib/atlas/atlas.db"
|
||||
Environment="ATLAS_BACKUP_DIR=/var/lib/atlas/backups"
|
||||
Environment="ATLAS_LOG_LEVEL=INFO"
|
||||
Environment="ATLAS_LOG_FORMAT=json"
|
||||
|
||||
NoNewPrivileges=true
|
||||
PrivateTmp=true
|
||||
ProtectSystem=strict
|
||||
ProtectHome=true
|
||||
ReadWritePaths=/var/lib/atlas /var/log/atlas /var/lib/atlas/backups /etc/atlas
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
```
|
||||
|
||||
Reload systemd:
|
||||
|
||||
```bash
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl enable atlas-api
|
||||
```
|
||||
|
||||
### Step 7: Configure Environment
|
||||
|
||||
Create `/etc/atlas/atlas.conf`. If you prefer this file over the `Environment=` lines above, make sure the unit loads it (for example via an `EnvironmentFile=/etc/atlas/atlas.conf` directive) so the values take effect:
|
||||
|
||||
```bash
|
||||
# HTTP Server
|
||||
ATLAS_HTTP_ADDR=:8080
|
||||
|
||||
# Database
|
||||
ATLAS_DB_PATH=/var/lib/atlas/atlas.db
|
||||
|
||||
# Backup Directory
|
||||
ATLAS_BACKUP_DIR=/var/lib/atlas/backups
|
||||
|
||||
# Logging
|
||||
ATLAS_LOG_LEVEL=INFO
|
||||
ATLAS_LOG_FORMAT=json
|
||||
|
||||
# JWT Secret (generate with: openssl rand -hex 32)
|
||||
ATLAS_JWT_SECRET=paste-generated-secret-here
|
||||
```
|
||||
|
||||
### Step 8: Start Service
|
||||
|
||||
```bash
|
||||
sudo systemctl start atlas-api
|
||||
sudo systemctl status atlas-api
|
||||
```
|
||||
|
||||
## Post-Installation
|
||||
|
||||
### Create Initial Admin User
|
||||
|
||||
After installation, create the initial admin user:
|
||||
|
||||
**Via API:**
|
||||
```bash
|
||||
curl -X POST http://localhost:8080/api/v1/users \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"username": "admin",
|
||||
"password": "your-secure-password",
|
||||
"email": "admin@example.com",
|
||||
"role": "administrator"
|
||||
}'
|
||||
```
|
||||
|
||||
**Via TUI:**
|
||||
```bash
|
||||
/opt/atlas/bin/atlas-tui
|
||||
```
|
||||
|
||||
### Configure TLS (Optional)
|
||||
|
||||
1. Generate or obtain TLS certificates
|
||||
2. Place certificates in `/etc/atlas/tls/`:
|
||||
```bash
|
||||
sudo cp cert.pem /etc/atlas/tls/
|
||||
sudo cp key.pem /etc/atlas/tls/
|
||||
sudo chown atlas:atlas /etc/atlas/tls/*
|
||||
sudo chmod 600 /etc/atlas/tls/*
|
||||
```
|
||||
|
||||
3. Update configuration:
|
||||
```bash
|
||||
echo "ATLAS_TLS_ENABLED=true" | sudo tee -a /etc/atlas/atlas.conf
|
||||
echo "ATLAS_TLS_CERT=/etc/atlas/tls/cert.pem" | sudo tee -a /etc/atlas/atlas.conf
|
||||
echo "ATLAS_TLS_KEY=/etc/atlas/tls/key.pem" | sudo tee -a /etc/atlas/atlas.conf
|
||||
```
|
||||
|
||||
4. Restart service:
|
||||
```bash
|
||||
sudo systemctl restart atlas-api
|
||||
```
|
||||
|
||||
### Verify Installation
|
||||
|
||||
1. **Check Service Status:**
|
||||
```bash
|
||||
sudo systemctl status atlas-api
|
||||
```
|
||||
|
||||
2. **Check Logs:**
|
||||
```bash
|
||||
sudo journalctl -u atlas-api -f
|
||||
```
|
||||
|
||||
3. **Test API:**
|
||||
```bash
|
||||
curl http://localhost:8080/healthz
|
||||
```
|
||||
|
||||
4. **Access Web UI:**
|
||||
Open browser: `http://localhost:8080`
|
||||
|
||||
5. **Access API Docs:**
|
||||
Open browser: `http://localhost:8080/api/docs`
|
||||
|
||||
## Service Management
|
||||
|
||||
### Start/Stop/Restart
|
||||
|
||||
```bash
|
||||
sudo systemctl start atlas-api
|
||||
sudo systemctl stop atlas-api
|
||||
sudo systemctl restart atlas-api
|
||||
sudo systemctl status atlas-api
|
||||
```
|
||||
|
||||
### View Logs
|
||||
|
||||
```bash
|
||||
# Follow logs
|
||||
sudo journalctl -u atlas-api -f
|
||||
|
||||
# Last 100 lines
|
||||
sudo journalctl -u atlas-api -n 100
|
||||
|
||||
# Since boot
|
||||
sudo journalctl -u atlas-api -b
|
||||
```
|
||||
|
||||
### Enable/Disable Auto-Start
|
||||
|
||||
```bash
|
||||
sudo systemctl enable atlas-api # Enable on boot
|
||||
sudo systemctl disable atlas-api # Disable on boot
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Configuration is done via environment variables:
|
||||
|
||||
| Variable | Default | Description |
|
||||
|----------|---------|-------------|
|
||||
| `ATLAS_HTTP_ADDR` | `:8080` | HTTP server address |
|
||||
| `ATLAS_DB_PATH` | `data/atlas.db` | SQLite database path |
|
||||
| `ATLAS_BACKUP_DIR` | `data/backups` | Backup directory |
|
||||
| `ATLAS_LOG_LEVEL` | `INFO` | Log level (DEBUG, INFO, WARN, ERROR) |
|
||||
| `ATLAS_LOG_FORMAT` | `text` | Log format (text, json) |
|
||||
| `ATLAS_JWT_SECRET` | - | JWT signing secret (required) |
|
||||
| `ATLAS_TLS_ENABLED` | `false` | Enable TLS |
|
||||
| `ATLAS_TLS_CERT` | - | TLS certificate file |
|
||||
| `ATLAS_TLS_KEY` | - | TLS private key file |
|
||||
|
||||
### Configuration File
|
||||
|
||||
Edit `/etc/atlas/atlas.conf` and restart service:
|
||||
|
||||
```bash
|
||||
sudo systemctl restart atlas-api
|
||||
```
|
||||
|
||||
## Uninstallation
|
||||
|
||||
### Remove Service
|
||||
|
||||
```bash
|
||||
sudo systemctl stop atlas-api
|
||||
sudo systemctl disable atlas-api
|
||||
sudo rm /etc/systemd/system/atlas-api.service
|
||||
sudo systemctl daemon-reload
|
||||
```
|
||||
|
||||
### Remove Files
|
||||
|
||||
```bash
|
||||
sudo rm -rf /opt/atlas
|
||||
sudo rm -rf /var/lib/atlas
|
||||
sudo rm -rf /etc/atlas
|
||||
sudo rm -rf /var/log/atlas
|
||||
```
|
||||
|
||||
### Remove User
|
||||
|
||||
```bash
|
||||
sudo userdel atlas
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Service Won't Start
|
||||
|
||||
1. **Check Logs:**
|
||||
```bash
|
||||
sudo journalctl -u atlas-api -n 50
|
||||
```
|
||||
|
||||
2. **Check Permissions:**
|
||||
```bash
|
||||
ls -la /opt/atlas/bin/
|
||||
ls -la /var/lib/atlas/
|
||||
```
|
||||
|
||||
3. **Check Dependencies:**
|
||||
```bash
|
||||
which zpool
|
||||
which smbd
|
||||
which targetcli
|
||||
```
|
||||
|
||||
### Port Already in Use
|
||||
|
||||
If port 8080 is already in use:
|
||||
|
||||
```bash
|
||||
# Change port in configuration
|
||||
echo "ATLAS_HTTP_ADDR=:8443" | sudo tee -a /etc/atlas/atlas.conf
|
||||
sudo systemctl restart atlas-api
|
||||
```
|
||||
|
||||
### Database Errors
|
||||
|
||||
If database errors occur:
|
||||
|
||||
```bash
|
||||
# Check database file permissions
|
||||
ls -la /var/lib/atlas/atlas.db
|
||||
|
||||
# Fix permissions
|
||||
sudo chown atlas:atlas /var/lib/atlas/atlas.db
|
||||
sudo chmod 600 /var/lib/atlas/atlas.db
|
||||
```
|
||||
|
||||
### ZFS Not Available
|
||||
|
||||
If ZFS commands fail:
|
||||
|
||||
```bash
|
||||
# Load ZFS module
|
||||
sudo modprobe zfs
|
||||
|
||||
# Check ZFS version
|
||||
zfs --version
|
||||
|
||||
# Verify ZFS pools
|
||||
sudo zpool list
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Firewall
|
||||
|
||||
Configure firewall to allow access:
|
||||
|
||||
```bash
|
||||
# UFW (Ubuntu)
|
||||
sudo ufw allow 8080/tcp
|
||||
|
||||
# firewalld (Fedora/RHEL)
|
||||
sudo firewall-cmd --add-port=8080/tcp --permanent
|
||||
sudo firewall-cmd --reload
|
||||
```
|
||||
|
||||
### TLS/HTTPS
|
||||
|
||||
Always use HTTPS in production:
|
||||
|
||||
1. Obtain valid certificates (Let's Encrypt recommended)
|
||||
2. Configure TLS in `/etc/atlas/atlas.conf`
|
||||
3. Restart service
|
||||
|
||||
### JWT Secret
|
||||
|
||||
Generate a strong JWT secret:
|
||||
|
||||
```bash
|
||||
openssl rand -hex 32
|
||||
```
|
||||
|
||||
Store securely in `/etc/atlas/atlas.conf` with restricted permissions.
|
||||
|
||||
## Next Steps
|
||||
|
||||
After installation:
|
||||
|
||||
1. **Create Admin User**: Set up initial administrator account
|
||||
2. **Configure Storage**: Create ZFS pools and datasets
|
||||
3. **Set Up Services**: Configure SMB, NFS, or iSCSI shares
|
||||
4. **Enable Snapshots**: Configure snapshot policies
|
||||
5. **Review Security**: Enable TLS, configure firewall
|
||||
6. **Monitor**: Set up monitoring and alerts
|
||||
|
||||
## Support
|
||||
|
||||
For issues or questions:
|
||||
|
||||
- Check logs: `journalctl -u atlas-api`
|
||||
- Review documentation: `docs/` directory
|
||||
- API documentation: `http://localhost:8080/api/docs`
|
||||
318
docs/ISCSI_CONNECTION.md
Normal file
@@ -0,0 +1,318 @@
|
||||
# iSCSI Connection Instructions
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS provides iSCSI connection instructions to help users connect initiators to iSCSI targets. The system automatically generates platform-specific commands for Linux, Windows, and macOS.
|
||||
|
||||
## API Endpoint
|
||||
|
||||
### Get Connection Instructions
|
||||
|
||||
**GET** `/api/v1/iscsi/targets/{id}/connection`
|
||||
|
||||
Returns connection instructions for an iSCSI target, including platform-specific commands.
|
||||
|
||||
**Query Parameters:**
|
||||
- `port` (optional): Portal port number (default: 3260)
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"iqn": "iqn.2024-12.com.atlas:target1",
|
||||
"portal": "192.168.1.100:3260",
|
||||
"portal_ip": "192.168.1.100",
|
||||
"portal_port": 3260,
|
||||
"luns": [
|
||||
{
|
||||
"id": 0,
|
||||
"zvol": "tank/iscsi/lun1",
|
||||
"size": 10737418240
|
||||
}
|
||||
],
|
||||
"commands": {
|
||||
"linux": [
|
||||
"# Discover target",
|
||||
"iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260",
|
||||
"",
|
||||
"# Login to target",
|
||||
"iscsiadm -m node -T iqn.2024-12.com.atlas:target1 -p 192.168.1.100:3260 --login",
|
||||
"",
|
||||
"# Verify connection",
|
||||
"iscsiadm -m session",
|
||||
"",
|
||||
"# Logout (when done)",
|
||||
"iscsiadm -m node -T iqn.2024-12.com.atlas:target1 -p 192.168.1.100:3260 --logout"
|
||||
],
|
||||
"windows": [
|
||||
"# Open PowerShell as Administrator",
|
||||
"",
|
||||
"# Add iSCSI target portal",
|
||||
"New-IscsiTargetPortal -TargetPortalAddress 192.168.1.100 -TargetPortalPortNumber 3260",
|
||||
"",
|
||||
"# Connect to target",
|
||||
"Connect-IscsiTarget -NodeAddress iqn.2024-12.com.atlas:target1",
|
||||
"",
|
||||
"# Verify connection",
|
||||
"Get-IscsiSession",
|
||||
"",
|
||||
"# Disconnect (when done)",
|
||||
"Disconnect-IscsiTarget -NodeAddress iqn.2024-12.com.atlas:target1"
|
||||
],
|
||||
"macos": [
|
||||
"# macOS uses built-in iSCSI support",
|
||||
"# Use System Preferences > Network > iSCSI",
|
||||
"",
|
||||
"# Or use command line (if iscsiutil is available)",
|
||||
"iscsiutil -a -t iqn.2024-12.com.atlas:target1 -p 192.168.1.100:3260",
|
||||
"",
|
||||
"# Portal: 192.168.1.100:3260",
|
||||
"# Target IQN: iqn.2024-12.com.atlas:target1"
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Get Connection Instructions
|
||||
|
||||
```bash
|
||||
curl http://localhost:8080/api/v1/iscsi/targets/iscsi-1/connection \
|
||||
-H "Authorization: Bearer $TOKEN"
|
||||
```
|
||||
|
||||
### With Custom Port
|
||||
|
||||
```bash
|
||||
curl "http://localhost:8080/api/v1/iscsi/targets/iscsi-1/connection?port=3261" \
|
||||
-H "Authorization: Bearer $TOKEN"
|
||||
```
|
||||
|
||||
## Platform-Specific Instructions
|
||||
|
||||
### Linux
|
||||
|
||||
**Prerequisites:**
|
||||
- `open-iscsi` package installed
|
||||
- `iscsid` service running
|
||||
|
||||
**Steps:**
|
||||
1. Discover the target
|
||||
2. Login to the target
|
||||
3. Verify connection
|
||||
4. Use the device (appears as `/dev/sdX` or `/dev/disk/by-id/...`)
|
||||
5. Logout when done
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# Discover target
|
||||
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
|
||||
|
||||
# Login to target
|
||||
iscsiadm -m node -T iqn.2024-12.com.atlas:target1 -p 192.168.1.100:3260 --login
|
||||
|
||||
# Verify connection
|
||||
iscsiadm -m session
|
||||
|
||||
# Find device
|
||||
lsblk
|
||||
# or
|
||||
ls -l /dev/disk/by-id/ | grep iqn
|
||||
|
||||
# Logout when done
|
||||
iscsiadm -m node -T iqn.2024-12.com.atlas:target1 -p 192.168.1.100:3260 --logout
|
||||
```
|
||||
|
||||
### Windows
|
||||
|
||||
**Prerequisites:**
|
||||
- Windows 8+ or Windows Server 2012+
|
||||
- PowerShell (run as Administrator)
|
||||
|
||||
**Steps:**
|
||||
1. Add iSCSI target portal
|
||||
2. Connect to target
|
||||
3. Verify connection
|
||||
4. Initialize disk in Disk Management
|
||||
5. Disconnect when done
|
||||
|
||||
**Example (PowerShell as Administrator):**
|
||||
```powershell
|
||||
# Add portal
|
||||
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.100 -TargetPortalPortNumber 3260
|
||||
|
||||
# Connect to target
|
||||
Connect-IscsiTarget -NodeAddress iqn.2024-12.com.atlas:target1
|
||||
|
||||
# Verify connection
|
||||
Get-IscsiSession
|
||||
|
||||
# Initialize disk in Disk Management
|
||||
# (Open Disk Management, find new disk, initialize and format)
|
||||
|
||||
# Disconnect when done
|
||||
Disconnect-IscsiTarget -NodeAddress iqn.2024-12.com.atlas:target1
|
||||
```
|
||||
|
||||
### macOS
|
||||
|
||||
**Prerequisites:**
|
||||
- macOS 10.13+ (High Sierra or later)
|
||||
- iSCSI initiator software (third-party)
|
||||
|
||||
**Steps:**
|
||||
1. Use GUI iSCSI initiator (if available)
|
||||
2. Or use command line tools
|
||||
3. Configure connection settings
|
||||
4. Connect to target
|
||||
|
||||
**Note:** macOS doesn't have built-in iSCSI support. Use third-party software like:
|
||||
- GlobalSAN iSCSI Initiator
|
||||
- ATTO Xtend SAN iSCSI
|
||||
|
||||
## Portal IP Detection
|
||||
|
||||
The system automatically detects the portal IP address using:
|
||||
|
||||
1. **Primary Method**: Parse `targetcli` output to find configured portal IP
|
||||
2. **Fallback Method**: Use system IP from `hostname -I`
|
||||
3. **Default**: `127.0.0.1` if detection fails
|
||||
|
||||
**Custom Portal IP:**
|
||||
|
||||
If the detected IP is incorrect, you can manually specify it by:
|
||||
- Setting environment variable `ATLAS_ISCSI_PORTAL_IP`
|
||||
- Or modifying the connection instructions after retrieval
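
Roughly, that detection order (with the `ATLAS_ISCSI_PORTAL_IP` override applied first) could look like the sketch below; `portalIPFromTargetcli` is a placeholder for the real `targetcli` parsing, which is not shown here:

```go
package iscsi

import (
	"os"
	"os/exec"
	"strings"
)

// detectPortalIP follows the order described above: explicit override,
// targetcli-configured portal, `hostname -I`, then the 127.0.0.1 default.
func detectPortalIP(portalIPFromTargetcli func() string) string {
	if ip := os.Getenv("ATLAS_ISCSI_PORTAL_IP"); ip != "" {
		return ip
	}
	if ip := portalIPFromTargetcli(); ip != "" {
		return ip
	}
	// Fallback: first address reported by `hostname -I`.
	if out, err := exec.Command("hostname", "-I").Output(); err == nil {
		if fields := strings.Fields(string(out)); len(fields) > 0 {
			return fields[0]
		}
	}
	return "127.0.0.1"
}
```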
|
||||
|
||||
## LUN Information
|
||||
|
||||
The connection instructions include LUN information:
|
||||
|
||||
- **ID**: LUN number (typically 0, 1, 2, ...)
|
||||
- **ZVOL**: ZFS volume backing the LUN
|
||||
- **Size**: LUN size in bytes
|
||||
|
||||
**Example:**
|
||||
```json
|
||||
"luns": [
|
||||
{
|
||||
"id": 0,
|
||||
"zvol": "tank/iscsi/lun1",
|
||||
"size": 10737418240
|
||||
},
|
||||
{
|
||||
"id": 1,
|
||||
"zvol": "tank/iscsi/lun2",
|
||||
"size": 21474836480
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Initiator ACLs
|
||||
|
||||
iSCSI targets can be configured with initiator ACLs to restrict access:
|
||||
|
||||
```json
|
||||
{
|
||||
"iqn": "iqn.2024-12.com.atlas:target1",
|
||||
"initiators": [
|
||||
"iqn.2024-12.com.client:initiator1"
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
Only initiators in the ACL list can connect to the target.
|
||||
|
||||
### CHAP Authentication
|
||||
|
||||
For production deployments, configure CHAP authentication:
|
||||
|
||||
1. Set up CHAP credentials in target configuration
|
||||
2. Configure initiator with matching credentials
|
||||
3. Use authentication in connection commands
|
||||
|
||||
**Note:** CHAP configuration is not yet exposed via API (future enhancement).
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Connection Fails
|
||||
|
||||
1. **Check Target Status**: Verify target is enabled
|
||||
2. **Check Portal**: Verify portal IP and port are correct
|
||||
3. **Check Network**: Ensure network connectivity
|
||||
4. **Check ACLs**: Verify initiator IQN is in ACL list
|
||||
5. **Check Firewall**: Ensure port 3260 (or custom port) is open
|
||||
|
||||
### Portal IP Incorrect
|
||||
|
||||
If the detected portal IP is wrong:
|
||||
|
||||
1. Check `targetcli` configuration
|
||||
2. Verify network interfaces
|
||||
3. Manually override in connection commands
|
||||
|
||||
### LUN Not Visible
|
||||
|
||||
1. **Check LUN Mapping**: Verify LUN is mapped to target
|
||||
2. **Check ZVOL**: Verify ZVOL exists and is accessible
|
||||
3. **Rescan**: Rescan iSCSI session on initiator
|
||||
4. **Check Permissions**: Verify initiator has access
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Use ACLs
|
||||
|
||||
Always configure initiator ACLs to restrict access:
|
||||
- Only allow known initiators
|
||||
- Use descriptive initiator IQNs
|
||||
- Regularly review ACL lists
|
||||
|
||||
### 2. Use CHAP Authentication
|
||||
|
||||
For production:
|
||||
- Enable CHAP authentication
|
||||
- Use strong credentials
|
||||
- Rotate credentials regularly
|
||||
|
||||
### 3. Monitor Connections
|
||||
|
||||
- Monitor active iSCSI sessions
|
||||
- Track connection/disconnection events
|
||||
- Set up alerts for connection failures
|
||||
|
||||
### 4. Test Connections
|
||||
|
||||
Before production use:
|
||||
- Test connection from initiator
|
||||
- Verify LUN visibility
|
||||
- Test read/write operations
|
||||
- Test disconnection/reconnection
|
||||
|
||||
### 5. Document Configuration
|
||||
|
||||
- Document portal IPs and ports
|
||||
- Document initiator IQNs
|
||||
- Document LUN mappings
|
||||
- Keep connection instructions accessible
|
||||
|
||||
## Compliance with SRS
|
||||
|
||||
Per SRS section 4.6 iSCSI Block Storage:
|
||||
|
||||
- ✅ **Provision ZVOL-backed LUNs**: Implemented
|
||||
- ✅ **Create iSCSI targets with IQN**: Implemented
|
||||
- ✅ **Map LUNs to targets**: Implemented
|
||||
- ✅ **Configure initiator ACLs**: Implemented
|
||||
- ✅ **Expose connection instructions**: Implemented (Priority 21)
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **CHAP Authentication**: API support for CHAP configuration
|
||||
2. **Portal Management**: Manage multiple portals per target
|
||||
3. **Connection Monitoring**: Real-time connection status
|
||||
4. **Auto-Discovery**: Automatic initiator discovery
|
||||
5. **Connection Templates**: Pre-configured connection templates
|
||||
6. **Connection History**: Track connection/disconnection events
|
||||
7. **Multi-Path Support**: Instructions for multi-path configurations
|
||||
366
docs/LOGGING_DIAGNOSTICS.md
Normal file
@@ -0,0 +1,366 @@
|
||||
# Logging & Diagnostics
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS provides comprehensive logging and diagnostic capabilities to help monitor system health, troubleshoot issues, and understand system behavior.
|
||||
|
||||
## Structured Logging
|
||||
|
||||
### Logger Package
|
||||
|
||||
The `internal/logger` package provides structured logging with:
|
||||
|
||||
- **Log Levels**: DEBUG, INFO, WARN, ERROR
|
||||
- **JSON Mode**: Optional JSON-formatted output
|
||||
- **Structured Fields**: Key-value pairs for context
|
||||
- **Thread-Safe**: Safe for concurrent use
|
||||
|
||||
### Configuration
|
||||
|
||||
Configure logging via environment variables:
|
||||
|
||||
```bash
|
||||
# Log level (DEBUG, INFO, WARN, ERROR)
|
||||
export ATLAS_LOG_LEVEL=INFO
|
||||
|
||||
# Log format (json or text)
|
||||
export ATLAS_LOG_FORMAT=json
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
```go
|
||||
import "gitea.avt.data-center.id/othman.suseno/atlas/internal/logger"
|
||||
|
||||
// Simple logging
|
||||
logger.Info("User logged in")
|
||||
logger.Error("Failed to create pool", err)
|
||||
|
||||
// With fields
|
||||
logger.Info("Pool created", map[string]interface{}{
|
||||
"pool": "tank",
|
||||
"size": "10TB",
|
||||
})
|
||||
```
|
||||
|
||||
### Log Levels
|
||||
|
||||
- **DEBUG**: Detailed information for debugging
|
||||
- **INFO**: General informational messages
|
||||
- **WARN**: Warning messages for potential issues
|
||||
- **ERROR**: Error messages for failures
|
||||
|
||||
## Request Logging
|
||||
|
||||
### Access Logs
|
||||
|
||||
All HTTP requests are logged with:
|
||||
|
||||
- **Timestamp**: Request time
|
||||
- **Method**: HTTP method (GET, POST, etc.)
|
||||
- **Path**: Request path
|
||||
- **Status**: HTTP status code
|
||||
- **Duration**: Request processing time
|
||||
- **Request ID**: Unique request identifier
|
||||
- **Remote Address**: Client IP address
|
||||
|
||||
**Example Log Entry:**
|
||||
```
|
||||
2024-12-20T10:30:56Z [INFO] 192.168.1.100 GET /api/v1/pools status=200 rid=abc123 dur=45ms
|
||||
```
|
||||
|
||||
### Request ID
|
||||
|
||||
Every request gets a unique request ID:
|
||||
|
||||
- **Header**: `X-Request-Id`
|
||||
- **Usage**: Track requests across services
|
||||
- **Format**: 32-character hex string
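
As an illustration only, a middleware matching this description (32 hex characters, echoed in `X-Request-Id`) could be sketched as follows; it is not the project's actual middleware:

```go
package middleware

import (
	"crypto/rand"
	"encoding/hex"
	"net/http"
)

// requestID attaches a 32-character hex identifier to every request
// and echoes it back in the X-Request-Id response header.
func requestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-Id")
		if id == "" {
			buf := make([]byte, 16) // 16 random bytes -> 32 hex characters
			if _, err := rand.Read(buf); err == nil {
				id = hex.EncodeToString(buf)
			}
		}
		w.Header().Set("X-Request-Id", id)
		next.ServeHTTP(w, r)
	})
}
```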
|
||||
|
||||
## Diagnostic Endpoints
|
||||
|
||||
### System Information
|
||||
|
||||
**GET** `/api/v1/system/info`
|
||||
|
||||
Returns comprehensive system information:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": "v0.1.0-dev",
|
||||
"uptime": "3600 seconds",
|
||||
"go_version": "go1.21.0",
|
||||
"num_goroutines": 15,
|
||||
"memory": {
|
||||
"alloc": 1048576,
|
||||
"total_alloc": 52428800,
|
||||
"sys": 2097152,
|
||||
"num_gc": 5
|
||||
},
|
||||
"services": {
|
||||
"smb": {
|
||||
"status": "running",
|
||||
"last_check": "2024-12-20T10:30:56Z"
|
||||
},
|
||||
"nfs": {
|
||||
"status": "running",
|
||||
"last_check": "2024-12-20T10:30:56Z"
|
||||
},
|
||||
"iscsi": {
|
||||
"status": "stopped",
|
||||
"last_check": "2024-12-20T10:30:56Z"
|
||||
}
|
||||
},
|
||||
"database": {
|
||||
"connected": true,
|
||||
"path": "/var/lib/atlas/atlas.db"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Health Check
|
||||
|
||||
**GET** `/health`
|
||||
|
||||
Detailed health check with component status:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "healthy",
|
||||
"timestamp": "2024-12-20T10:30:56Z",
|
||||
"checks": {
|
||||
"zfs": "healthy",
|
||||
"database": "healthy",
|
||||
"smb": "healthy",
|
||||
"nfs": "healthy",
|
||||
"iscsi": "stopped"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Status Values:**
|
||||
- `healthy`: Component is working correctly
|
||||
- `degraded`: Some components have issues but system is operational
|
||||
- `unhealthy`: Critical components are failing
|
||||
|
||||
**HTTP Status Codes:**
|
||||
- `200 OK`: System is healthy or degraded
|
||||
- `503 Service Unavailable`: System is unhealthy
|
||||
|
||||
### System Logs
|
||||
|
||||
**GET** `/api/v1/system/logs?limit=100`
|
||||
|
||||
Returns recent system logs (from audit logs):
|
||||
|
||||
```json
|
||||
{
|
||||
"logs": [
|
||||
{
|
||||
"timestamp": "2024-12-20T10:30:56Z",
|
||||
"level": "INFO",
|
||||
"actor": "user-1",
|
||||
"action": "pool.create",
|
||||
"resource": "pool:tank",
|
||||
"result": "success",
|
||||
"ip": "192.168.1.100"
|
||||
}
|
||||
],
|
||||
"count": 1
|
||||
}
|
||||
```
|
||||
|
||||
**Query Parameters:**
|
||||
- `limit`: Maximum number of logs to return (default: 100, max: 1000)
|
||||
|
||||
### Garbage Collection
|
||||
|
||||
**POST** `/api/v1/system/gc`
|
||||
|
||||
Triggers garbage collection and returns memory statistics:
|
||||
|
||||
```json
|
||||
{
|
||||
"before": {
|
||||
"alloc": 1048576,
|
||||
"total_alloc": 52428800,
|
||||
"sys": 2097152,
|
||||
"num_gc": 5
|
||||
},
|
||||
"after": {
|
||||
"alloc": 512000,
|
||||
"total_alloc": 52428800,
|
||||
"sys": 2097152,
|
||||
"num_gc": 6
|
||||
},
|
||||
"freed": 536576
|
||||
}
|
||||
```
|
||||
|
||||
## Audit Logging
|
||||
|
||||
Audit logs track all mutating operations:
|
||||
|
||||
- **Actor**: User ID or "system"
|
||||
- **Action**: Operation type (e.g., "pool.create")
|
||||
- **Resource**: Resource identifier
|
||||
- **Result**: "success" or "failure"
|
||||
- **IP**: Client IP address
|
||||
- **User Agent**: Client user agent
|
||||
- **Timestamp**: Operation time
|
||||
|
||||
See [Audit Logging Documentation](./AUDIT_LOGGING.md) for details.
|
||||
|
||||
## Log Rotation
|
||||
|
||||
### Current Implementation
|
||||
|
||||
- **In-Memory**: Audit logs stored in memory
|
||||
- **Rotation**: Automatic rotation when max logs reached
|
||||
- **Limit**: Configurable (default: 10,000 logs)
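
A minimal sketch of such an in-memory store with size-based rotation is shown below; the type and method names are assumptions, not the actual implementation:

```go
package auditlog

import "sync"

// Entry is a single audit/system log record (fields abbreviated).
type Entry struct {
	Timestamp string
	Level     string
	Action    string
}

// Store keeps the most recent entries in memory, dropping the oldest
// once the configured maximum (e.g. 10,000) is reached.
type Store struct {
	mu      sync.Mutex
	max     int
	entries []Entry
}

func NewStore(max int) *Store { return &Store{max: max} }

func (s *Store) Append(e Entry) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.entries = append(s.entries, e)
	if len(s.entries) > s.max {
		// Rotate: keep only the newest s.max entries.
		s.entries = s.entries[len(s.entries)-s.max:]
	}
}
```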
|
||||
|
||||
### Future Enhancements
|
||||
|
||||
- **File Logging**: Write logs to files
|
||||
- **Automatic Rotation**: Rotate log files by size/age
|
||||
- **Compression**: Compress old log files
|
||||
- **Retention**: Configurable retention policies
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Use Appropriate Log Levels
|
||||
|
||||
```go
|
||||
// Debug - detailed information
|
||||
logger.Debug("Processing request", map[string]interface{}{
|
||||
"request_id": reqID,
|
||||
"user": userID,
|
||||
})
|
||||
|
||||
// Info - important events
|
||||
logger.Info("User logged in", map[string]interface{}{
|
||||
"user": userID,
|
||||
})
|
||||
|
||||
// Warn - potential issues
|
||||
logger.Warn("High memory usage", map[string]interface{}{
|
||||
"usage": "85%",
|
||||
})
|
||||
|
||||
// Error - failures
|
||||
logger.Error("Failed to create pool", err, map[string]interface{}{
|
||||
"pool": poolName,
|
||||
})
|
||||
```
|
||||
|
||||
### 2. Include Context
|
||||
|
||||
Always include relevant context in logs:
|
||||
|
||||
```go
|
||||
// Good
|
||||
logger.Info("Pool created", map[string]interface{}{
|
||||
"pool": poolName,
|
||||
"size": poolSize,
|
||||
"user": userID,
|
||||
})
|
||||
|
||||
// Avoid
|
||||
logger.Info("Pool created")
|
||||
```
|
||||
|
||||
### 3. Use Request IDs
|
||||
|
||||
Include request IDs in logs for tracing:
|
||||
|
||||
```go
|
||||
reqID := r.Context().Value(requestIDKey).(string)
|
||||
logger.Info("Processing request", map[string]interface{}{
|
||||
"request_id": reqID,
|
||||
})
|
||||
```
|
||||
|
||||
### 4. Monitor Health Endpoints
|
||||
|
||||
Regularly check health endpoints:
|
||||
|
||||
```bash
|
||||
# Simple health check
|
||||
curl http://localhost:8080/healthz
|
||||
|
||||
# Detailed health check
|
||||
curl http://localhost:8080/health
|
||||
|
||||
# System information
|
||||
curl http://localhost:8080/api/v1/system/info
|
||||
```
|
||||
|
||||
## Monitoring
|
||||
|
||||
### Key Metrics
|
||||
|
||||
Monitor these metrics for system health:
|
||||
|
||||
- **Request Duration**: Track in access logs
|
||||
- **Error Rate**: Count of error responses
|
||||
- **Memory Usage**: Check via `/api/v1/system/info`
|
||||
- **Goroutine Count**: Monitor for leaks
|
||||
- **Service Status**: Check service health
|
||||
|
||||
### Alerting
|
||||
|
||||
Set up alerts for:
|
||||
|
||||
- **Unhealthy Status**: System health check fails
|
||||
- **High Error Rate**: Too many error responses
|
||||
- **Memory Leaks**: Continuously increasing memory
|
||||
- **Service Failures**: Services not running
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Check System Health
|
||||
|
||||
```bash
|
||||
curl http://localhost:8080/health
|
||||
```
|
||||
|
||||
### View System Information
|
||||
|
||||
```bash
|
||||
curl http://localhost:8080/api/v1/system/info
|
||||
```
|
||||
|
||||
### Check Recent Logs
|
||||
|
||||
```bash
|
||||
curl http://localhost:8080/api/v1/system/logs?limit=50
|
||||
```
|
||||
|
||||
### Trigger GC
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8080/api/v1/system/gc
|
||||
```
|
||||
|
||||
### View Request Logs
|
||||
|
||||
Check application logs for request details:
|
||||
|
||||
```bash
|
||||
# If logging to stdout
|
||||
./atlas-api | grep "GET /api/v1/pools"
|
||||
|
||||
# If logging to file
|
||||
tail -f /var/log/atlas-api.log | grep "status=500"
|
||||
```
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **File Logging**: Write logs to files with rotation
|
||||
2. **Log Aggregation**: Support for centralized logging (ELK, Loki)
|
||||
3. **Structured Logging**: Full JSON logging support
|
||||
4. **Log Levels per Component**: Different levels for different components
|
||||
5. **Performance Logging**: Detailed performance metrics
|
||||
6. **Distributed Tracing**: Request tracing across services
|
||||
7. **Log Filtering**: Filter logs by level, component, etc.
|
||||
8. **Real-time Log Streaming**: Stream logs via WebSocket
|
||||
303
docs/MAINTENANCE_MODE.md
Normal file
@@ -0,0 +1,303 @@
|
||||
# Maintenance Mode & Update Management
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS provides a maintenance mode feature that allows administrators to safely disable user operations during system updates or maintenance. When maintenance mode is enabled, all mutating operations (create, update, delete) are blocked except for users explicitly allowed.
|
||||
|
||||
## Features
|
||||
|
||||
- **Maintenance Mode**: Disable user operations during maintenance
|
||||
- **Automatic Backup**: Optionally create backup before entering maintenance
|
||||
- **Allowed Users**: Specify users who can operate during maintenance
|
||||
- **Health Check Integration**: Maintenance status included in health checks
|
||||
- **Audit Logging**: All maintenance mode changes are logged
|
||||
|
||||
## API Endpoints
|
||||
|
||||
### Get Maintenance Status
|
||||
|
||||
**GET** `/api/v1/maintenance`
|
||||
|
||||
Returns the current maintenance mode status.
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"enabled": false,
|
||||
"enabled_at": "2024-12-20T10:30:00Z",
|
||||
"enabled_by": "admin",
|
||||
"reason": "System update",
|
||||
"allowed_users": ["admin"],
|
||||
"last_backup_id": "backup-1703123456"
|
||||
}
|
||||
```
|
||||
|
||||
### Enable Maintenance Mode
|
||||
|
||||
**POST** `/api/v1/maintenance`
|
||||
|
||||
Enables maintenance mode. Requires administrator role.
|
||||
|
||||
**Request Body:**
|
||||
```json
|
||||
{
|
||||
"reason": "System update to v1.1.0",
|
||||
"allowed_users": ["admin"],
|
||||
"create_backup": true
|
||||
}
|
||||
```
|
||||
|
||||
**Fields:**
|
||||
- `reason` (string, required): Reason for entering maintenance mode
|
||||
- `allowed_users` (array of strings, optional): User IDs allowed to operate during maintenance
|
||||
- `create_backup` (boolean, optional): Create automatic backup before entering maintenance
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"message": "maintenance mode enabled",
|
||||
"status": {
|
||||
"enabled": true,
|
||||
"enabled_at": "2024-12-20T10:30:00Z",
|
||||
"enabled_by": "admin",
|
||||
"reason": "System update to v1.1.0",
|
||||
"allowed_users": ["admin"],
|
||||
"last_backup_id": "backup-1703123456"
|
||||
},
|
||||
"backup_id": "backup-1703123456"
|
||||
}
|
||||
```
|
||||
|
||||
### Disable Maintenance Mode
|
||||
|
||||
**POST** `/api/v1/maintenance/disable`
|
||||
|
||||
Disables maintenance mode. Requires administrator role.
|
||||
|
||||
**Response:**
|
||||
```json
|
||||
{
|
||||
"message": "maintenance mode disabled"
|
||||
}
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Enable Maintenance Mode with Backup
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8080/api/v1/maintenance \
|
||||
-H "Authorization: Bearer $TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"reason": "System update to v1.1.0",
|
||||
"allowed_users": ["admin"],
|
||||
"create_backup": true
|
||||
}'
|
||||
```
|
||||
|
||||
### Check Maintenance Status
|
||||
|
||||
```bash
|
||||
curl http://localhost:8080/api/v1/maintenance \
|
||||
-H "Authorization: Bearer $TOKEN"
|
||||
```
|
||||
|
||||
### Disable Maintenance Mode
|
||||
|
||||
```bash
|
||||
curl -X POST http://localhost:8080/api/v1/maintenance/disable \
|
||||
-H "Authorization: Bearer $TOKEN"
|
||||
```
|
||||
|
||||
## Behavior
|
||||
|
||||
### When Maintenance Mode is Enabled
|
||||
|
||||
1. **Read Operations**: All GET requests continue to work normally
|
||||
2. **Mutating Operations**: All POST, PUT, PATCH, DELETE requests are blocked
|
||||
3. **Allowed Users**: Users in the `allowed_users` list can still perform operations
|
||||
4. **Public Endpoints**: Public endpoints (login, health checks) continue to work
|
||||
5. **Error Response**: Blocked operations return `503 Service Unavailable` with message:
|
||||
```json
|
||||
{
|
||||
"code": "SERVICE_UNAVAILABLE",
|
||||
"message": "system is in maintenance mode",
|
||||
"details": "the system is currently in maintenance mode and user operations are disabled"
|
||||
}
|
||||
```
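
Conceptually, this behavior amounts to a middleware like the sketch below; the `enabled` and `isAllowed` hooks are placeholders, not the project's real API:

```go
package middleware

import "net/http"

// maintenanceGate blocks mutating requests while maintenance mode is on,
// except for allowed users. Public endpoints (login, health checks) and
// all GET requests pass through unchanged.
func maintenanceGate(enabled func() bool, isAllowed func(r *http.Request) bool, next http.Handler) http.Handler {
	mutating := map[string]bool{"POST": true, "PUT": true, "PATCH": true, "DELETE": true}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if enabled() && mutating[r.Method] && !isAllowed(r) {
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusServiceUnavailable) // 503, as documented above
			_, _ = w.Write([]byte(`{"code":"SERVICE_UNAVAILABLE","message":"system is in maintenance mode"}`))
			return
		}
		next.ServeHTTP(w, r)
	})
}
```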
|
||||
|
||||
### Middleware Order
|
||||
|
||||
Maintenance mode middleware is applied after authentication but before routes:
|
||||
|
||||
1. CORS
|
||||
2. Compression
|
||||
3. Security headers
|
||||
4. Request size limit
|
||||
5. Content-Type validation
|
||||
6. Rate limiting
|
||||
7. Caching
|
||||
8. Error recovery
|
||||
9. Request ID
|
||||
10. Logging
|
||||
11. Audit
|
||||
12. **Maintenance mode** ← Blocks operations
|
||||
13. Authentication
|
||||
14. Routes
|
||||
|
||||
## Health Check Integration
|
||||
|
||||
The health check endpoint (`/health`) includes maintenance mode status:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "maintenance",
|
||||
"timestamp": "2024-12-20T10:30:00Z",
|
||||
"checks": {
|
||||
"zfs": "healthy",
|
||||
"database": "healthy",
|
||||
"smb": "healthy",
|
||||
"nfs": "healthy",
|
||||
"iscsi": "healthy",
|
||||
"maintenance": "enabled"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
When maintenance mode is enabled:
|
||||
- Status may change from "healthy" to "maintenance"
|
||||
- `checks.maintenance` will be "enabled"
|
||||
|
||||
## Automatic Backup
|
||||
|
||||
When `create_backup: true` is specified:
|
||||
|
||||
1. A backup is created automatically before entering maintenance
|
||||
2. The backup ID is stored in maintenance status
|
||||
3. The backup includes:
|
||||
- All user accounts
|
||||
- All SMB shares
|
||||
- All NFS exports
|
||||
- All iSCSI targets
|
||||
- All snapshot policies
|
||||
- System configuration
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Before System Updates
|
||||
|
||||
1. **Create Backup**: Always enable `create_backup: true`
|
||||
2. **Notify Users**: Inform users about maintenance window
|
||||
3. **Allow Administrators**: Include admin users in `allowed_users`
|
||||
4. **Document Reason**: Provide clear reason for maintenance
|
||||
|
||||
### During Maintenance
|
||||
|
||||
1. **Monitor Status**: Check `/api/v1/maintenance` periodically
|
||||
2. **Verify Backup**: Confirm backup was created successfully
|
||||
3. **Perform Updates**: Execute system updates or maintenance tasks
|
||||
4. **Test Operations**: Verify system functionality
|
||||
|
||||
### After Maintenance
|
||||
|
||||
1. **Disable Maintenance**: Use `/api/v1/maintenance/disable`
|
||||
2. **Verify Services**: Check all services are running
|
||||
3. **Test Operations**: Verify normal operations work
|
||||
4. **Review Logs**: Check audit logs for any issues
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Administrator Only**: Only administrators can enable/disable maintenance mode
|
||||
2. **Audit Logging**: All maintenance mode changes are logged
|
||||
3. **Allowed Users**: Only specified users can operate during maintenance
|
||||
4. **Token Validation**: Maintenance mode respects authentication
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Maintenance Mode Already Enabled
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "INTERNAL_ERROR",
|
||||
"message": "failed to enable maintenance mode",
|
||||
"details": "maintenance mode is already enabled"
|
||||
}
|
||||
```
|
||||
|
||||
### Maintenance Mode Not Enabled
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "INTERNAL_ERROR",
|
||||
"message": "failed to disable maintenance mode",
|
||||
"details": "maintenance mode is not enabled"
|
||||
}
|
||||
```
|
||||
|
||||
### Backup Creation Failure
|
||||
|
||||
If backup creation fails, maintenance mode is not enabled:
|
||||
|
||||
```json
|
||||
{
|
||||
"code": "INTERNAL_ERROR",
|
||||
"message": "failed to create backup",
|
||||
"details": "error details..."
|
||||
}
|
||||
```
|
||||
|
||||
## Integration with Update Process
|
||||
|
||||
### Recommended Update Workflow
|
||||
|
||||
1. **Enable Maintenance Mode**:
|
||||
```bash
|
||||
POST /api/v1/maintenance
|
||||
{
|
||||
"reason": "Updating to v1.1.0",
|
||||
"allowed_users": ["admin"],
|
||||
"create_backup": true
|
||||
}
|
||||
```
|
||||
|
||||
2. **Verify Backup**:
|
||||
```bash
|
||||
GET /api/v1/backups/{backup_id}
|
||||
```
|
||||
|
||||
3. **Perform System Update**:
|
||||
- Stop services if needed
|
||||
- Update binaries/configurations
|
||||
- Restart services
|
||||
|
||||
4. **Verify System Health**:
|
||||
```bash
|
||||
GET /health
|
||||
```
|
||||
|
||||
5. **Disable Maintenance Mode**:
|
||||
```bash
|
||||
POST /api/v1/maintenance/disable
|
||||
```
|
||||
|
||||
6. **Test Operations**:
|
||||
- Verify normal operations work
|
||||
- Check service status
|
||||
- Review logs
|
||||
|
||||
## Limitations
|
||||
|
||||
1. **No Automatic Disable**: Maintenance mode must be manually disabled
|
||||
2. **No Scheduled Maintenance**: Maintenance mode must be enabled manually
|
||||
3. **No Maintenance History**: Only current status is available
|
||||
4. **No Notifications**: No automatic notifications to users
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Scheduled Maintenance**: Schedule maintenance windows
|
||||
2. **Maintenance History**: Track maintenance mode history
|
||||
3. **User Notifications**: Notify users when maintenance starts/ends
|
||||
4. **Automatic Disable**: Auto-disable after specified duration
|
||||
5. **Maintenance Templates**: Predefined maintenance scenarios
|
||||
6. **Rollback Support**: Automatic rollback on update failure
|
||||
296
docs/PERFORMANCE_OPTIMIZATION.md
Normal file
@@ -0,0 +1,296 @@
|
||||
# Performance Optimization
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS implements several performance optimizations to improve response times, reduce bandwidth usage, and enhance overall system efficiency.
|
||||
|
||||
## Compression
|
||||
|
||||
### Gzip Compression Middleware
|
||||
|
||||
All HTTP responses are automatically compressed using gzip when the client supports it.
|
||||
|
||||
**Features:**
|
||||
- **Automatic Detection**: Checks `Accept-Encoding` header
|
||||
- **Content-Type Filtering**: Skips compression for already-compressed content (images, videos, zip files)
|
||||
- **Transparent**: Works automatically for all responses
|
||||
|
||||
**Benefits:**
|
||||
- Reduces bandwidth usage by 60-80% for JSON/text responses
|
||||
- Faster response times, especially for large payloads
|
||||
- Lower server load
|
||||
|
||||
**Example:**
|
||||
```bash
|
||||
# Request with compression
|
||||
curl -H "Accept-Encoding: gzip" http://localhost:8080/api/v1/pools
|
||||
|
||||
# Response includes:
|
||||
# Content-Encoding: gzip
|
||||
# Vary: Accept-Encoding
|
||||
```
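
A stripped-down sketch of such a middleware is shown below; the content-type filtering and Content-Length handling of the real `compression_middleware.go` are omitted for brevity, and the names here are assumptions:

```go
package middleware

import (
	"compress/gzip"
	"net/http"
	"strings"
)

// gzipResponseWriter routes the response body through a gzip writer.
type gzipResponseWriter struct {
	http.ResponseWriter
	gz *gzip.Writer
}

func (w gzipResponseWriter) Write(b []byte) (int, error) { return w.gz.Write(b) }

// gzipMiddleware compresses responses for clients that accept gzip.
// A production version would also skip already-compressed content types
// and drop any pre-set Content-Length header.
func gzipMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		w.Header().Add("Vary", "Accept-Encoding")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, gz: gz}, r)
	})
}
```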
|
||||
|
||||
## Response Caching
|
||||
|
||||
### HTTP Response Cache
|
||||
|
||||
GET requests are cached to reduce database and computation overhead.
|
||||
|
||||
**Features:**
|
||||
- **TTL-Based**: 5-minute default cache lifetime
|
||||
- **ETag Support**: HTTP ETag validation for conditional requests
|
||||
- **Automatic Cleanup**: Expired entries removed automatically
|
||||
- **Cache Headers**: `X-Cache: HIT/MISS` header indicates cache status
|
||||
|
||||
**Cache Key Generation:**
|
||||
- Includes HTTP method, path, and query string
|
||||
- SHA256 hash for consistent key length
|
||||
- Per-request unique keys
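
Based on that description, the key derivation is roughly the following (illustrative sketch only):

```go
package middleware

import (
	"crypto/sha256"
	"encoding/hex"
	"net/http"
)

// cacheKey derives a fixed-length cache key from the request method,
// path, and query string, as described above.
func cacheKey(r *http.Request) string {
	sum := sha256.Sum256([]byte(r.Method + " " + r.URL.Path + "?" + r.URL.RawQuery))
	return hex.EncodeToString(sum[:])
}
```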
|
||||
|
||||
**Cached Endpoints:**
|
||||
- Public GET endpoints (pools, datasets, ZVOLs lists)
|
||||
- Static resources
|
||||
- Read-only operations
|
||||
|
||||
**Non-Cached Endpoints:**
|
||||
- Authenticated endpoints (user-specific data)
|
||||
- Dynamic endpoints (`/metrics`, `/health`, `/dashboard`)
|
||||
- Mutating operations (POST, PUT, DELETE)
|
||||
|
||||
**ETag Support:**
|
||||
```bash
|
||||
# First request
|
||||
curl http://localhost:8080/api/v1/pools
|
||||
# Response: ETag: "abc123..." X-Cache: MISS
|
||||
|
||||
# Conditional request
|
||||
curl -H "If-None-Match: \"abc123...\"" http://localhost:8080/api/v1/pools
|
||||
# Response: 304 Not Modified (no body)
|
||||
```
|
||||
|
||||
**Cache Invalidation:**
|
||||
- Automatic expiration after TTL
|
||||
- Manual invalidation via cache API (future enhancement)
|
||||
- Pattern-based invalidation support
|
||||
|
||||
## Database Connection Pooling
|
||||
|
||||
### Optimized Connection Pool
|
||||
|
||||
SQLite database connections are pooled for better performance.
|
||||
|
||||
**Configuration:**
|
||||
```go
|
||||
conn.SetMaxOpenConns(25) // Maximum open connections
|
||||
conn.SetMaxIdleConns(5) // Maximum idle connections
|
||||
conn.SetConnMaxLifetime(5 * time.Minute) // Connection lifetime
|
||||
```
|
||||
|
||||
**WAL Mode:**
|
||||
- Write-Ahead Logging enabled for better concurrency
|
||||
- Improved read performance
|
||||
- Better handling of concurrent readers
|
||||
|
||||
**Benefits:**
|
||||
- Reduced connection overhead
|
||||
- Better resource utilization
|
||||
- Improved concurrent request handling
|
||||
|
||||
## Middleware Chain Optimization
|
||||
|
||||
### Efficient Middleware Order
|
||||
|
||||
Middleware is ordered for optimal performance:
|
||||
|
||||
1. **CORS** - Early exit for preflight
|
||||
2. **Compression** - Compress responses early
|
||||
3. **Security Headers** - Add headers once
|
||||
4. **Request Size Limit** - Reject large requests early
|
||||
5. **Content-Type Validation** - Validate early
|
||||
6. **Rate Limiting** - Protect resources
|
||||
7. **Caching** - Return cached responses quickly
|
||||
8. **Error Recovery** - Catch panics
|
||||
9. **Request ID** - Generate ID once
|
||||
10. **Logging** - Log after processing
|
||||
11. **Audit** - Record after success
|
||||
12. **Authentication** - Validate last (after cache check)
|
||||
|
||||
**Performance Impact:**
|
||||
- Cached responses skip most middleware
|
||||
- Early validation prevents unnecessary processing
|
||||
- Compression reduces bandwidth
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Use Caching Effectively
|
||||
|
||||
```bash
|
||||
# Cache-friendly requests
|
||||
GET /api/v1/pools # Cached
|
||||
GET /api/v1/datasets # Cached
|
||||
|
||||
# Non-cached (dynamic)
|
||||
GET /api/v1/dashboard # Not cached (real-time data)
|
||||
GET /api/v1/system/info # Not cached (system state)
|
||||
```
|
||||
|
||||
### 2. Leverage ETags
|
||||
|
||||
```bash
|
||||
# Check if content changed
|
||||
curl -H "If-None-Match: \"etag-value\"" /api/v1/pools
|
||||
|
||||
# Server responds with 304 if unchanged
|
||||
```
|
||||
|
||||
### 3. Enable Compression
|
||||
|
||||
```bash
|
||||
# Always include Accept-Encoding header
|
||||
curl -H "Accept-Encoding: gzip" /api/v1/pools
|
||||
```
|
||||
|
||||
### 4. Monitor Cache Performance
|
||||
|
||||
Check `X-Cache` header:
|
||||
- `HIT`: Response served from cache
|
||||
- `MISS`: Response generated fresh
|
||||
|
||||
### 5. Database Queries
|
||||
|
||||
- Use connection pooling (automatic)
|
||||
- WAL mode enabled for better concurrency
|
||||
- Connection lifetime managed automatically
|
||||
|
||||
## Performance Metrics
|
||||
|
||||
### Response Times
|
||||
|
||||
Monitor response times via:
|
||||
- Access logs (duration in logs)
|
||||
- `/metrics` endpoint (Prometheus metrics)
|
||||
- Request ID tracking
|
||||
|
||||
### Cache Hit Rate
|
||||
|
||||
Monitor cache effectiveness:
|
||||
- Check `X-Cache: HIT` vs `X-Cache: MISS` in responses
|
||||
- Higher hit rate = better performance
|
||||
|
||||
### Compression Ratio
|
||||
|
||||
Monitor bandwidth savings:
|
||||
- Compare compressed vs uncompressed sizes
|
||||
- Typical savings: 60-80% for JSON/text
|
||||
|
||||
## Configuration
|
||||
|
||||
### Cache TTL
|
||||
|
||||
Default: 5 minutes
|
||||
|
||||
To modify, edit `cache_middleware.go`:
|
||||
```go
|
||||
cache := NewResponseCache(5 * time.Minute) // Change TTL here
|
||||
```
|
||||
|
||||
### Compression
|
||||
|
||||
Automatic for all responses when client supports gzip.
|
||||
|
||||
To disable for specific endpoints, modify `compression_middleware.go`.
|
||||
|
||||
### Database Pool
|
||||
|
||||
Current settings:
|
||||
- Max Open: 25 connections
|
||||
- Max Idle: 5 connections
|
||||
- Max Lifetime: 5 minutes
|
||||
|
||||
To modify, edit `db/db.go`:
|
||||
```go
|
||||
conn.SetMaxOpenConns(25) // Adjust as needed
|
||||
conn.SetMaxIdleConns(5) // Adjust as needed
|
||||
conn.SetConnMaxLifetime(5 * time.Minute) // Adjust as needed
|
||||
```
|
||||
|
||||
## Monitoring
|
||||
|
||||
### Cache Statistics
|
||||
|
||||
Monitor cache performance:
|
||||
- Check `X-Cache` headers in responses
|
||||
- Track cache hit/miss ratios
|
||||
- Monitor cache size (future enhancement)
|
||||
|
||||
### Compression Statistics
|
||||
|
||||
Monitor compression effectiveness:
|
||||
- Check `Content-Encoding: gzip` in responses
|
||||
- Compare response sizes
|
||||
- Monitor bandwidth usage
|
||||
|
||||
### Database Performance
|
||||
|
||||
Monitor database:
|
||||
- Connection pool usage
|
||||
- Query performance
|
||||
- Connection lifetime
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Redis Cache**: Distributed caching for multi-instance deployments
|
||||
2. **Cache Statistics**: Detailed cache metrics endpoint
|
||||
3. **Configurable TTL**: Per-endpoint cache TTL configuration
|
||||
4. **Cache Warming**: Pre-populate cache for common requests
|
||||
5. **Compression Levels**: Configurable compression levels
|
||||
6. **Query Caching**: Cache database query results
|
||||
7. **Response Streaming**: Stream large responses
|
||||
8. **HTTP/2 Support**: Better multiplexing and compression
|
||||
9. **CDN Integration**: Edge caching for static resources
|
||||
10. **Performance Profiling**: Built-in performance profiler
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Cache Not Working
|
||||
|
||||
1. Check if endpoint is cacheable (GET request, public endpoint)
|
||||
2. Verify `X-Cache` header in response
|
||||
3. Check cache TTL hasn't expired
|
||||
4. Ensure endpoint isn't in skip list
|
||||
|
||||
### Compression Not Working
|
||||
|
||||
1. Verify client sends `Accept-Encoding: gzip` header
|
||||
2. Check response includes `Content-Encoding: gzip`
|
||||
3. Ensure content type isn't excluded (images, videos)
|
||||
|
||||
### Database Performance Issues
|
||||
|
||||
1. Check connection pool settings
|
||||
2. Monitor connection usage
|
||||
3. Verify WAL mode is enabled
|
||||
4. Check for long-running queries
|
||||
|
||||
## Performance Benchmarks
|
||||
|
||||
### Typical Improvements
|
||||
|
||||
- **Response Time**: 30-50% faster for cached responses
|
||||
- **Bandwidth**: 60-80% reduction with compression
|
||||
- **Database Load**: 40-60% reduction with caching
|
||||
- **Concurrent Requests**: 2-3x improvement with connection pooling
|
||||
|
||||
### Example Metrics
|
||||
|
||||
```
|
||||
Before Optimization:
|
||||
- Average response time: 150ms
|
||||
- Bandwidth per request: 10KB
|
||||
- Database queries per request: 3
|
||||
|
||||
After Optimization:
|
||||
- Average response time: 50ms (cached) / 120ms (uncached)
|
||||
- Bandwidth per request: 3KB (compressed)
|
||||
- Database queries per request: 1.2 (with caching)
|
||||
```
|
||||
226
docs/POSTGRESQL_MIGRATION.md
Normal file
@@ -0,0 +1,226 @@
|
||||
# PostgreSQL Migration Guide
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS now supports both SQLite and PostgreSQL databases. You can switch between them by changing the database connection string.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Using PostgreSQL
|
||||
|
||||
Set the `ATLAS_DB_CONN` environment variable to a PostgreSQL connection string:
|
||||
|
||||
```bash
|
||||
export ATLAS_DB_CONN="postgres://username:password@localhost:5432/atlas?sslmode=disable"
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
### Using SQLite (Default)
|
||||
|
||||
Set the `ATLAS_DB_PATH` environment variable to a file path:
|
||||
|
||||
```bash
|
||||
export ATLAS_DB_PATH="/var/lib/atlas/atlas.db"
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
Or use the connection string format:
|
||||
|
||||
```bash
|
||||
export ATLAS_DB_CONN="sqlite:///var/lib/atlas/atlas.db"
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
## Connection String Formats
|
||||
|
||||
### PostgreSQL
|
||||
|
||||
```
|
||||
postgres://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
|
||||
```
|
||||
|
||||
Examples:
|
||||
- `postgres://user:pass@localhost:5432/atlas`
|
||||
- `postgres://user:pass@localhost:5432/atlas?sslmode=disable`
|
||||
- `postgresql://user:pass@db.example.com:5432/atlas?sslmode=require`
|
||||
|
||||
### SQLite
|
||||
|
||||
- File path: `/var/lib/atlas/atlas.db`
|
||||
- Connection string: `sqlite:///var/lib/atlas/atlas.db`
|
||||
|
||||
## Setup PostgreSQL Database
|
||||
|
||||
### 1. Install PostgreSQL
|
||||
|
||||
**Ubuntu/Debian:**
|
||||
```bash
|
||||
sudo apt-get update
|
||||
sudo apt-get install postgresql postgresql-contrib
|
||||
```
|
||||
|
||||
**CentOS/RHEL:**
|
||||
```bash
|
||||
sudo yum install postgresql-server postgresql-contrib
|
||||
sudo postgresql-setup initdb
|
||||
sudo systemctl start postgresql
|
||||
sudo systemctl enable postgresql
|
||||
```
|
||||
|
||||
### 2. Create Database and User
|
||||
|
||||
```bash
|
||||
# Switch to postgres user
|
||||
sudo -u postgres psql
|
||||
|
||||
# Create database
|
||||
CREATE DATABASE atlas;
|
||||
|
||||
# Create user
|
||||
CREATE USER atlas_user WITH PASSWORD 'your_secure_password';
|
||||
|
||||
# Grant privileges
|
||||
GRANT ALL PRIVILEGES ON DATABASE atlas TO atlas_user;
|
||||
|
||||
# Exit
|
||||
\q
|
||||
```
|
||||
|
||||
### 3. Configure AtlasOS
|
||||
|
||||
Update your systemd service file (`/etc/systemd/system/atlas-api.service`):
|
||||
|
||||
```ini
|
||||
[Service]
|
||||
Environment="ATLAS_DB_CONN=postgres://atlas_user:your_secure_password@localhost:5432/atlas?sslmode=disable"
|
||||
```
|
||||
|
||||
Or update `/etc/atlas/atlas.conf`:
|
||||
|
||||
```bash
|
||||
# PostgreSQL connection string
|
||||
ATLAS_DB_CONN=postgres://atlas_user:your_secure_password@localhost:5432/atlas?sslmode=disable
|
||||
```
|
||||
|
||||
### 4. Restart Service
|
||||
|
||||
```bash
|
||||
sudo systemctl daemon-reload
|
||||
sudo systemctl restart atlas-api
|
||||
```
|
||||
|
||||
## Migration from SQLite to PostgreSQL
|
||||
|
||||
### Option 1: Fresh Start (Recommended for new installations)
|
||||
|
||||
1. Set up PostgreSQL database (see above)
|
||||
2. Update connection string
|
||||
3. Restart service - tables will be created automatically
|
||||
|
||||
### Option 2: Data Migration
|
||||
|
||||
If you have existing SQLite data:
|
||||
|
||||
1. **Export from SQLite:**
|
||||
```bash
|
||||
sqlite3 /var/lib/atlas/atlas.db .dump > atlas_backup.sql
|
||||
```
|
||||
|
||||
2. **Convert SQL to PostgreSQL format:**
|
||||
- Replace `INTEGER` with `BOOLEAN` for boolean fields
|
||||
- Replace `TEXT` with `VARCHAR(255)` or `TEXT` as appropriate
|
||||
- Update timestamp formats
|
||||
|
||||
3. **Import to PostgreSQL:**
|
||||
```bash
|
||||
psql -U atlas_user -d atlas < converted_backup.sql
|
||||
```
|
||||
|
||||
## Rebuilding the Application
|
||||
|
||||
### 1. Install PostgreSQL Development Libraries
|
||||
|
||||
**Ubuntu/Debian:**
|
||||
```bash
|
||||
sudo apt-get install libpq-dev
|
||||
```
|
||||
|
||||
**CentOS/RHEL:**
|
||||
```bash
|
||||
sudo yum install postgresql-devel
|
||||
```
|
||||
|
||||
### 2. Update Dependencies
|
||||
|
||||
```bash
|
||||
go mod tidy
|
||||
```
|
||||
|
||||
### 3. Build
|
||||
|
||||
```bash
|
||||
go build -o atlas-api ./cmd/atlas-api
|
||||
go build -o atlas-tui ./cmd/atlas-tui
|
||||
```
|
||||
|
||||
Or use the installer:
|
||||
|
||||
```bash
|
||||
sudo ./installer/install.sh
|
||||
```
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Description | Example |
|
||||
|----------|-------------|---------|
|
||||
| `ATLAS_DB_CONN` | Database connection string (takes precedence) | `postgres://user:pass@host:5432/db` |
|
||||
| `ATLAS_DB_PATH` | SQLite database path (fallback if `ATLAS_DB_CONN` not set) | `/var/lib/atlas/atlas.db` |
|
||||
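The precedence described in this table could be implemented roughly as follows. This is a sketch for illustration only, not the actual logic in `db/db.go`; the prefix handling and default path are assumptions based on the formats shown above.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveDSN illustrates the documented precedence: ATLAS_DB_CONN wins when
// set; otherwise ATLAS_DB_PATH selects SQLite; otherwise fall back to the
// default SQLite path. This is a sketch, not the shipped implementation.
func resolveDSN() (driver, dsn string) {
	if conn := os.Getenv("ATLAS_DB_CONN"); conn != "" {
		switch {
		case strings.HasPrefix(conn, "postgres://"), strings.HasPrefix(conn, "postgresql://"):
			return "postgres", conn
		case strings.HasPrefix(conn, "sqlite://"):
			return "sqlite", strings.TrimPrefix(conn, "sqlite://")
		}
		return "sqlite", conn
	}
	if path := os.Getenv("ATLAS_DB_PATH"); path != "" {
		return "sqlite", path
	}
	return "sqlite", "/var/lib/atlas/atlas.db"
}

func main() {
	driver, dsn := resolveDSN()
	fmt.Println(driver, dsn)
}
```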
|
||||
## Troubleshooting
|
||||
|
||||
### Connection Refused
|
||||
|
||||
- Check PostgreSQL is running: `sudo systemctl status postgresql`
|
||||
- Verify connection string format
|
||||
- Check firewall rules for port 5432
|
||||
|
||||
### Authentication Failed
|
||||
|
||||
- Verify username and password
|
||||
- Check `pg_hba.conf` for authentication settings
|
||||
- Ensure user has proper permissions
|
||||
|
||||
### Database Not Found
|
||||
|
||||
- Verify database exists: `psql -l`
|
||||
- Check database name in connection string
|
||||
|
||||
### SSL Mode Errors
|
||||
|
||||
- For local connections, use `?sslmode=disable`
|
||||
- For production, configure SSL properly
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### PostgreSQL Advantages
|
||||
|
||||
- Better concurrency (multiple writers)
|
||||
- Advanced query optimization
|
||||
- Better for high-traffic scenarios
|
||||
- Supports replication and clustering
|
||||
|
||||
### SQLite Advantages
|
||||
|
||||
- Zero configuration
|
||||
- Single file deployment
|
||||
- Lower resource usage
|
||||
- Perfect for small deployments
|
||||
|
||||
## Schema Differences
|
||||
|
||||
The application automatically handles schema differences:
|
||||
|
||||
- **SQLite**: Uses `INTEGER` for booleans, `TEXT` for strings
|
||||
- **PostgreSQL**: Uses `BOOLEAN` for booleans, `VARCHAR/TEXT` for strings
|
||||
|
||||
The migration system creates the appropriate schema based on the database type.
|
||||
150
docs/RBAC_PERMISSIONS.md
Normal file
@@ -0,0 +1,150 @@
|
||||
# Role-Based Access Control (RBAC) - Current Implementation
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS implements a three-tier role-based access control system with the following roles:
|
||||
|
||||
1. **Administrator** (`administrator`) - Full system control
|
||||
2. **Operator** (`operator`) - Storage and service operations
|
||||
3. **Viewer** (`viewer`) - Read-only access
|
||||
|
||||
## Current Implementation Status
|
||||
|
||||
### ✅ Fully Implemented (Administrator-Only)
|
||||
|
||||
These operations **require Administrator role**:
|
||||
|
||||
- **User Management**: Create, update, delete users, list users
|
||||
- **Service Management**: Start, stop, restart, reload services, view service logs
|
||||
- **Maintenance Mode**: Enable/disable maintenance mode
|
||||
|
||||
### ⚠️ Partially Implemented (Authentication Required, No Role Check)
|
||||
|
||||
These operations **require authentication** but **don't check specific roles** (any authenticated user can perform them):
|
||||
|
||||
- **ZFS Operations**: Create/delete pools, datasets, ZVOLs, import/export pools, scrub operations
|
||||
- **Snapshot Management**: Create/delete snapshots, create/delete snapshot policies
|
||||
- **Storage Services**: Create/update/delete SMB shares, NFS exports, iSCSI targets
|
||||
- **Backup & Restore**: Create backups, restore backups
|
||||
|
||||
### ✅ Public (No Authentication Required)
|
||||
|
||||
These endpoints are **publicly accessible**:
|
||||
|
||||
- **Read-Only Operations**: List pools, datasets, ZVOLs, shares, exports, targets, snapshots
|
||||
- **Dashboard Data**: System statistics and health information
|
||||
- **Web UI Pages**: All HTML pages (authentication required for mutations via API)
|
||||
|
||||
## Role Definitions
|
||||
|
||||
### Administrator (`administrator`)
|
||||
- **Full system access**
|
||||
- Can manage users (create, update, delete)
|
||||
- Can manage services (start, stop, restart, reload)
|
||||
- Can enable/disable maintenance mode
|
||||
- Can perform all storage operations
|
||||
- Can view audit logs
|
||||
|
||||
### Operator (`operator`)
|
||||
- **Storage and service operations** (intended)
|
||||
- Currently: Same as authenticated user (can perform storage operations)
|
||||
- Should be able to: Create/manage pools, datasets, shares, snapshots
|
||||
- Should NOT be able to: Manage users, manage services, maintenance mode
|
||||
|
||||
### Viewer (`viewer`)
|
||||
- **Read-only access** (intended)
|
||||
- Currently: Can view all public data
|
||||
- Should be able to: View all system information
|
||||
- Should NOT be able to: Perform any mutations (create, update, delete)
|
||||
|
||||
## Current Permission Matrix
|
||||
|
||||
| Operation | Administrator | Operator | Viewer | Unauthenticated |
|
||||
|-----------|--------------|----------|--------|-----------------|
|
||||
| **User Management** |
|
||||
| List users | ✅ | ❌ | ❌ | ❌ |
|
||||
| Create user | ✅ | ❌ | ❌ | ❌ |
|
||||
| Update user | ✅ | ❌ | ❌ | ❌ |
|
||||
| Delete user | ✅ | ❌ | ❌ | ❌ |
|
||||
| **Service Management** |
|
||||
| View service status | ✅ | ❌ | ❌ | ❌ |
|
||||
| Start/stop/restart service | ✅ | ❌ | ❌ | ❌ |
|
||||
| View service logs | ✅ | ❌ | ❌ | ❌ |
|
||||
| **Storage Operations** |
|
||||
| List pools/datasets/ZVOLs | ✅ | ✅ | ✅ | ✅ (public) |
|
||||
| Create pool/dataset/ZVOL | ✅ | ✅* | ❌ | ❌ |
|
||||
| Delete pool/dataset/ZVOL | ✅ | ✅* | ❌ | ❌ |
|
||||
| Import/export pool | ✅ | ✅* | ❌ | ❌ |
|
||||
| **Share Management** |
|
||||
| List shares/exports/targets | ✅ | ✅ | ✅ | ✅ (public) |
|
||||
| Create share/export/target | ✅ | ✅* | ❌ | ❌ |
|
||||
| Update share/export/target | ✅ | ✅* | ❌ | ❌ |
|
||||
| Delete share/export/target | ✅ | ✅* | ❌ | ❌ |
|
||||
| **Snapshot Management** |
|
||||
| List snapshots/policies | ✅ | ✅ | ✅ | ✅ (public) |
|
||||
| Create snapshot/policy | ✅ | ✅* | ❌ | ❌ |
|
||||
| Delete snapshot/policy | ✅ | ✅* | ❌ | ❌ |
|
||||
| **Maintenance Mode** |
|
||||
| View status | ✅ | ✅ | ✅ | ✅ (public) |
|
||||
| Enable/disable | ✅ | ❌ | ❌ | ❌ |
|
||||
|
||||
*Currently works but not explicitly restricted - any authenticated user can perform these operations
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Role Checking
|
||||
|
||||
Roles are checked using the `requireRole()` middleware:
|
||||
|
||||
```go
|
||||
// Example: Administrator-only endpoint
|
||||
a.mux.HandleFunc("/api/v1/users", methodHandler(
|
||||
func(w http.ResponseWriter, r *http.Request) { a.handleListUsers(w, r) },
|
||||
func(w http.ResponseWriter, r *http.Request) {
|
||||
adminRole := models.RoleAdministrator
|
||||
a.requireRole(adminRole)(http.HandlerFunc(a.handleCreateUser)).ServeHTTP(w, r)
|
||||
},
|
||||
nil, nil, nil,
|
||||
))
|
||||
```
|
||||
|
||||
### Multiple Roles Support
|
||||
|
||||
The `requireRole()` function accepts multiple roles:
|
||||
|
||||
```go
|
||||
// Allow both Administrator and Operator
|
||||
a.requireRole(models.RoleAdministrator, models.RoleOperator)(handler)
|
||||
```
|
||||
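The same pattern could be applied to a storage mutation to close the gap described below. This is a hypothetical sketch, not current routing code: the `handleListPools`/`handleCreatePool` wiring and the `methodHandler` argument order are assumptions.

```go
// Hypothetical: allow Administrator and Operator to create pools, keep GET public.
a.mux.HandleFunc("/api/v1/pools", methodHandler(
    func(w http.ResponseWriter, r *http.Request) { a.handleListPools(w, r) }, // GET (public)
    func(w http.ResponseWriter, r *http.Request) { // POST (role-checked)
        a.requireRole(models.RoleAdministrator, models.RoleOperator)(
            http.HandlerFunc(a.handleCreatePool),
        ).ServeHTTP(w, r)
    },
    nil, nil, nil,
))
```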
|
||||
### Current Limitations
|
||||
|
||||
1. **No Operator/Viewer Differentiation**: Most storage operations don't check roles - they only require authentication
|
||||
2. **Hardcoded Role Checks**: Role permissions are defined in route handlers, not in a centralized permission matrix
|
||||
3. **No Granular Permissions**: Can't assign specific permissions (e.g., "can create pools but not delete them")
|
||||
|
||||
## Future Improvements
|
||||
|
||||
To properly implement Operator and Viewer roles:
|
||||
|
||||
1. **Add Role Checks to Storage Operations**:
|
||||
- Allow Operator and Administrator for create/update/delete operations
|
||||
- Restrict Viewer to read-only (GET requests only)
|
||||
|
||||
2. **Centralize Permission Matrix**:
|
||||
- Create a permission configuration file or database table
|
||||
- Map operations to required roles
|
||||
|
||||
3. **Granular Permissions** (Future):
|
||||
- Allow custom permission sets
|
||||
- Support resource-level permissions (e.g., "can manage pool X but not pool Y")
|
||||
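One possible shape for the centralized matrix mentioned in improvement 2 above is a map from operation name to allowed roles, consulted by a single middleware. This is only a sketch: the operation keys, the `roleAllowed` helper, and any role constant beyond `RoleAdministrator`/`RoleOperator` are assumptions, not existing code.

```go
// Hypothetical permission matrix keyed by operation name.
var allowedRoles = map[string][]models.Role{
    "pool.create":     {models.RoleAdministrator, models.RoleOperator},
    "pool.delete":     {models.RoleAdministrator, models.RoleOperator},
    "user.create":     {models.RoleAdministrator},
    "service.restart": {models.RoleAdministrator},
}

// roleAllowed reports whether the given role may perform the named operation.
func roleAllowed(op string, role models.Role) bool {
    for _, r := range allowedRoles[op] {
        if r == role {
            return true
        }
    }
    return false
}
```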
|
||||
## Testing Roles
|
||||
|
||||
To test different roles:
|
||||
|
||||
1. Create users with different roles via the Management page
|
||||
2. Login as each user
|
||||
3. Attempt operations and verify permissions
|
||||
|
||||
**Note**: Currently, most operations work for any authenticated user. Only user management, service management, and maintenance mode are properly restricted to Administrators.
|
||||
163
docs/SERVICE_INTEGRATION.md
Normal file
@@ -0,0 +1,163 @@
|
||||
# Service Daemon Integration
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS integrates with system storage daemons (Samba, NFS, iSCSI) to automatically apply configuration changes. When storage services are created, updated, or deleted via the API, the system daemons are automatically reconfigured.
|
||||
|
||||
## Architecture
|
||||
|
||||
The service integration layer (`internal/services/`) provides:
|
||||
|
||||
- **Configuration Generation**: Converts API models to daemon-specific configuration formats
|
||||
- **Atomic Updates**: Writes to temporary files, then atomically replaces configuration
|
||||
- **Safe Reloads**: Reloads services without interrupting active connections
|
||||
- **Error Recovery**: Automatically restores backups on configuration failures
|
||||
|
||||
## SMB/Samba Integration
|
||||
|
||||
### Configuration
|
||||
- **Config File**: `/etc/samba/smb.conf`
|
||||
- **Service**: `smbd` (Samba daemon)
|
||||
- **Reload Method**: `smbcontrol all reload-config` or `systemctl reload smbd`
|
||||
|
||||
### Features
|
||||
- Generates Samba configuration from SMB share definitions
|
||||
- Supports read-only, guest access, and user restrictions
|
||||
- Automatically reloads Samba after configuration changes
|
||||
- Validates configuration syntax using `testparm`
|
||||
|
||||
### Example
|
||||
When an SMB share is created via API:
|
||||
1. Share is stored in the store
|
||||
2. All shares are retrieved
|
||||
3. Samba configuration is generated
|
||||
4. Configuration is written to `/etc/samba/smb.conf`
|
||||
5. Samba service is reloaded
|
||||
|
||||
## NFS Integration
|
||||
|
||||
### Configuration
|
||||
- **Config File**: `/etc/exports`
|
||||
- **Service**: `nfs-server`
|
||||
- **Reload Method**: `exportfs -ra`
|
||||
|
||||
### Features
|
||||
- Generates `/etc/exports` format from NFS export definitions
|
||||
- Supports read-only, client restrictions, and root squash
|
||||
- Automatically reloads NFS exports after configuration changes
|
||||
- Handles multiple clients per export
|
||||
|
||||
### Example
|
||||
When an NFS export is created via API:
|
||||
1. Export is stored in the store
|
||||
2. All exports are retrieved
|
||||
3. `/etc/exports` is generated
|
||||
4. Exports file is written atomically
|
||||
5. NFS exports are reloaded using `exportfs -ra`
|
||||
|
||||
## iSCSI Integration
|
||||
|
||||
### Configuration
|
||||
- **Tool**: `targetcli` (LIO target framework)
|
||||
- **Service**: `target` (systemd service)
|
||||
- **Method**: Direct targetcli commands
|
||||
|
||||
### Features
|
||||
- Creates iSCSI targets with IQN
|
||||
- Configures initiator ACLs
|
||||
- Maps ZVOLs as LUNs
|
||||
- Manages target enable/disable state
|
||||
|
||||
### Example
|
||||
When an iSCSI target is created via API:
|
||||
1. Target is stored in the store
|
||||
2. All targets are retrieved
|
||||
3. For each target:
|
||||
- Target is created via `targetcli`
|
||||
- Initiator ACLs are configured
|
||||
- LUNs are mapped to ZVOLs
|
||||
4. Configuration is applied atomically
|
||||
|
||||
## Safety Features
|
||||
|
||||
### Atomic Configuration Updates
|
||||
1. Write configuration to temporary file (`*.atlas.tmp`)
|
||||
2. Backup existing configuration (`.backup`)
|
||||
3. Atomically replace configuration file
|
||||
4. Reload service
|
||||
5. On failure, restore backup
|
||||
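A minimal sketch of steps 1-5 above, assuming a Samba-style config file and `systemctl reload` as the reload command; this is illustrative and not the actual `internal/services` code.

```go
package services

import (
	"os"
	"os/exec"
)

// applyConfig illustrates the atomic update flow; the paths and the reload
// command are assumptions for this sketch.
func applyConfig(path, content string) error {
	tmp := path + ".atlas.tmp"
	backup := path + ".backup"

	// 1. Write the new configuration to a temporary file.
	if err := os.WriteFile(tmp, []byte(content), 0o644); err != nil {
		return err
	}
	// 2. Back up the existing configuration, if any.
	if old, err := os.ReadFile(path); err == nil {
		if err := os.WriteFile(backup, old, 0o644); err != nil {
			return err
		}
	}
	// 3. Atomically replace the configuration file.
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	// 4. Reload the service; 5. on failure, restore the backup.
	if err := exec.Command("systemctl", "reload", "smbd").Run(); err != nil {
		_ = os.Rename(backup, path)
		return err
	}
	return nil
}
```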
|
||||
### Error Handling
|
||||
- Configuration errors are logged but don't fail API requests
|
||||
- Service reload failures trigger automatic backup restoration
|
||||
- Validation is performed before applying changes (where supported)
|
||||
|
||||
## Service Status
|
||||
|
||||
Each service provides a `GetStatus()` method to check if the daemon is running:
|
||||
|
||||
```go
|
||||
// Check Samba status
|
||||
running, err := smbService.GetStatus()
|
||||
|
||||
// Check NFS status
|
||||
running, err := nfsService.GetStatus()
|
||||
|
||||
// Check iSCSI status
|
||||
running, err := iscsiService.GetStatus()
|
||||
```
|
||||
|
||||
## Requirements
|
||||
|
||||
### Samba
|
||||
- `samba` package installed
|
||||
- `smbcontrol` command available
|
||||
- Write access to `/etc/samba/smb.conf`
|
||||
- Root/sudo privileges for service reload
|
||||
|
||||
### NFS
|
||||
- `nfs-kernel-server` package installed
|
||||
- `exportfs` command available
|
||||
- Write access to `/etc/exports`
|
||||
- Root/sudo privileges for export reload
|
||||
|
||||
### iSCSI
|
||||
- `targetcli` package installed
|
||||
- LIO target framework enabled
|
||||
- Root/sudo privileges for targetcli operations
|
||||
- ZVOL backend support
|
||||
|
||||
## Configuration Flow
|
||||
|
||||
```
|
||||
API Request → Store Update → Service Integration → Daemon Configuration
|
||||
     ↓             ↓                  ↓                     ↓
  Create/      Store in        Generate Config       Write & Reload
  Update/      Memory/DB       from Models           Service
  Delete
|
||||
```
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Async Configuration**: Queue configuration changes for background processing
|
||||
2. **Validation API**: Pre-validate configurations before applying
|
||||
3. **Rollback Support**: Automatic rollback on service failures
|
||||
4. **Status Monitoring**: Real-time service health monitoring
|
||||
5. **Configuration Diff**: Show what will change before applying
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Samba Configuration Not Applied
|
||||
- Check Samba service status: `systemctl status smbd`
|
||||
- Validate configuration: `testparm -s`
|
||||
- Check logs: `journalctl -u smbd`
|
||||
|
||||
### NFS Exports Not Working
|
||||
- Check NFS service status: `systemctl status nfs-server`
|
||||
- Verify exports: `exportfs -v`
|
||||
- Check permissions on exported paths
|
||||
|
||||
### iSCSI Targets Not Created
|
||||
- Verify targetcli is installed: `which targetcli`
|
||||
- Check LIO service: `systemctl status target`
|
||||
- Review targetcli output for errors
|
||||
305
docs/SMB_LDAP_AD_INTEGRATION.md
Normal file
@@ -0,0 +1,305 @@
|
||||
# SMB/CIFS Shares - LDAP/Active Directory Integration
|
||||
|
||||
## Current Authentication Scheme

### Current Implementation (v0.1.0-dev)

1. **Samba Configuration:**
   - `security = user` - User-based authentication
   - User management is split between:
     - **Atlas Web UI**: In-memory `UserStore` (for web login)
     - **Samba**: Users must be created manually on the Linux system using `smbpasswd` or `pdbedit`

2. **Existing Problems:**
   - ❌ Atlas users (web UI) ≠ Samba users (SMB access)
   - ❌ No user synchronization between Atlas and Samba
   - ❌ Users must be created manually on the system for SMB access
   - ❌ No integration with LDAP/AD
   - ❌ `ValidUsers` on an SMB share is just a list of username strings, not integrated with the user management system

3. **Current Architecture:**
   ```
   Atlas Web UI (UserStore) ──┐
                              ├──> Not connected
   Samba (smbpasswd/pdbedit) ─┘
   ```
|
||||
|
||||
## Feasibility of LDAP/AD Integration

### ✅ **HIGHLY FEASIBLE**

Samba has native support for LDAP and Active Directory:

1. **Samba Security Modes:**
   - `security = ads` - Active Directory Domain Services (recommended for AD)
   - `security = domain` - NT4 Domain (legacy)
   - `passdb backend = ldapsam` - LDAP backend for the user database

2. **Benefits of LDAP/AD Integration:**
   - ✅ Single Sign-On (SSO) - users log in once for all services
   - ✅ Centralized user management - no need to manage users in multiple places
   - ✅ Group-based access control - shares can be assigned based on AD groups
   - ✅ Enterprise-ready - in line with enterprise storage best practices
   - ✅ Better audit trail - all access is tracked in AD

## Implementation Recommendations
|
||||
|
||||
### Phase 1: LDAP/AD Configuration Support (Priority: High)
|
||||
|
||||
**1. Add a Configuration Model:**
|
||||
```go
|
||||
// internal/models/config.go
|
||||
type LDAPConfig struct {
|
||||
Enabled bool `json:"enabled"`
|
||||
Type string `json:"type"` // "ldap" or "ad"
|
||||
Server string `json:"server"` // LDAP/AD server FQDN or IP
|
||||
BaseDN string `json:"base_dn"` // Base DN for searches
|
||||
BindDN string `json:"bind_dn"` // Service account DN
|
||||
BindPassword string `json:"bind_password"` // Service account password
|
||||
UserDN string `json:"user_dn"` // User DN template (e.g., "CN=Users,DC=example,DC=com")
|
||||
GroupDN string `json:"group_dn"` // Group DN template
|
||||
Realm string `json:"realm"` // AD realm (e.g., "EXAMPLE.COM")
|
||||
Workgroup string `json:"workgroup"` // Workgroup name
|
||||
}
|
||||
```
|
||||
|
||||
**2. Update the SMB Service to Support LDAP/AD:**
|
||||
```go
|
||||
// internal/services/smb.go
|
||||
func (s *SMBService) generateConfig(shares []models.SMBShare, ldapConfig *models.LDAPConfig) (string, error) {
|
||||
var b strings.Builder
|
||||
|
||||
b.WriteString("[global]\n")
|
||||
b.WriteString(" server string = AtlasOS Storage Server\n")
|
||||
b.WriteString(" dns proxy = no\n")
|
||||
|
||||
if ldapConfig != nil && ldapConfig.Enabled {
|
||||
if ldapConfig.Type == "ad" {
|
||||
// Active Directory mode
|
||||
b.WriteString(" security = ads\n")
|
||||
b.WriteString(fmt.Sprintf(" realm = %s\n", ldapConfig.Realm))
|
||||
b.WriteString(fmt.Sprintf(" workgroup = %s\n", ldapConfig.Workgroup))
|
||||
b.WriteString(" idmap config * : backend = tdb\n")
|
||||
b.WriteString(" idmap config * : range = 10000-20000\n")
|
||||
b.WriteString(" winbind enum users = yes\n")
|
||||
b.WriteString(" winbind enum groups = yes\n")
|
||||
} else {
|
||||
// LDAP mode
|
||||
b.WriteString(" security = user\n")
|
||||
b.WriteString(" passdb backend = ldapsam:ldap://" + ldapConfig.Server + "\n")
|
||||
b.WriteString(fmt.Sprintf(" ldap admin dn = %s\n", ldapConfig.BindDN))
|
||||
b.WriteString(fmt.Sprintf(" ldap suffix = %s\n", ldapConfig.BaseDN))
|
||||
b.WriteString(fmt.Sprintf(" ldap user suffix = %s\n", ldapConfig.UserDN))
|
||||
b.WriteString(fmt.Sprintf(" ldap group suffix = %s\n", ldapConfig.GroupDN))
|
||||
}
|
||||
} else {
|
||||
// Default: user mode (current implementation)
|
||||
b.WriteString(" security = user\n")
|
||||
b.WriteString(" map to guest = Bad User\n")
|
||||
}
|
||||
|
||||
// ... rest of share configuration
|
||||
}
|
||||
```
|
||||
|
||||
**3. Add API Endpoints for LDAP/AD Configuration:**
|
||||
```go
|
||||
// internal/httpapp/api_handlers.go
|
||||
// GET /api/v1/config/ldap - Get LDAP/AD configuration
|
||||
// PUT /api/v1/config/ldap - Update LDAP/AD configuration
|
||||
// POST /api/v1/config/ldap/test - Test LDAP/AD connection
|
||||
```
|
||||
|
||||
### Phase 2: User Sync & Group Support (Priority: Medium)
|
||||
|
||||
**1. LDAP/AD User Sync Service:**
|
||||
```go
|
||||
// internal/services/ldap.go
|
||||
type LDAPService struct {
|
||||
config *models.LDAPConfig
|
||||
conn *ldap.Conn
|
||||
}
|
||||
|
||||
func (s *LDAPService) SyncUsers() ([]LDAPUser, error) {
|
||||
// Query LDAP/AD to get users
|
||||
// Return the list of users with their attributes
|
||||
}
|
||||
|
||||
func (s *LDAPService) SyncGroups() ([]LDAPGroup, error) {
|
||||
// Query LDAP/AD to get groups
|
||||
// Return the list of groups with their members
|
||||
}
|
||||
|
||||
func (s *LDAPService) Authenticate(username, password string) (*LDAPUser, error) {
|
||||
// Authenticate user against LDAP/AD
|
||||
}
|
||||
```
|
||||
|
||||
**2. Update the SMB Share Model to Support Groups:**
|
||||
```go
|
||||
// internal/models/storage.go
|
||||
type SMBShare struct {
|
||||
// ... existing fields
|
||||
ValidUsers []string `json:"valid_users"` // Username list
|
||||
ValidGroups []string `json:"valid_groups"` // Group name list (NEW)
|
||||
}
|
||||
```
|
||||
|
||||
**3. Update the Samba Config to Support Groups:**
|
||||
```go
|
||||
if len(share.ValidUsers) > 0 {
|
||||
b.WriteString(fmt.Sprintf(" valid users = %s\n", strings.Join(share.ValidUsers, ", ")))
|
||||
}
|
||||
if len(share.ValidGroups) > 0 {
|
||||
b.WriteString(fmt.Sprintf(" valid groups = %s\n", strings.Join(share.ValidGroups, ", ")))
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3: UI Integration (Priority: Medium)
|
||||
|
||||
**1. LDAP/AD Configuration Page:**
|
||||
- Form to configure the LDAP/AD connection
|
||||
- Test connection button
|
||||
- Display sync status
|
||||
- Manual sync button
|
||||
|
||||
**2. Update SMB Share Creation UI:**
|
||||
- Dropdown to select users from LDAP/AD (instead of manual input)
- Dropdown to select groups from LDAP/AD
- Auto-complete for username/group search
|
||||
|
||||
## Implementation Steps
|
||||
|
||||
### Step 1: Add LDAP Library Dependency
|
||||
```bash
|
||||
go get github.com/go-ldap/ldap/v3
|
||||
```
|
||||
|
||||
### Step 2: Create LDAP Service
|
||||
- Implement `internal/services/ldap.go`
|
||||
- Support both LDAP and AD protocols
|
||||
- Handle connection, authentication, and queries
|
||||
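A rough sketch of what the authenticate path in `internal/services/ldap.go` could look like with `go-ldap/v3`, using the common search-then-rebind pattern. The server URL, base DN, and the `sAMAccountName` filter are assumptions; adapt them to the directory in use.

```go
package services

import (
	"fmt"

	"github.com/go-ldap/ldap/v3"
)

// authenticate is an illustrative sketch: bind with a service account,
// locate the user's DN, then re-bind as that user to verify the password.
func authenticate(serverURL, bindDN, bindPassword, baseDN, username, password string) error {
	conn, err := ldap.DialURL(serverURL) // e.g. "ldaps://dc1.example.com:636"
	if err != nil {
		return err
	}
	defer conn.Close()

	// Bind with the service account, then look up the user's DN.
	if err := conn.Bind(bindDN, bindPassword); err != nil {
		return err
	}
	req := ldap.NewSearchRequest(
		baseDN, ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		fmt.Sprintf("(&(objectClass=user)(sAMAccountName=%s))", ldap.EscapeFilter(username)),
		[]string{"dn"}, nil,
	)
	res, err := conn.Search(req)
	if err != nil {
		return err
	}
	if len(res.Entries) != 1 {
		return fmt.Errorf("user %q not found or ambiguous", username)
	}

	// Re-bind as the user to verify the password.
	return conn.Bind(res.Entries[0].DN, password)
}
```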
|
||||
### Step 3: Update SMB Service
|
||||
- Modify `generateConfig()` to accept LDAP config
|
||||
- Support both `security = ads` and `passdb backend = ldapsam`
|
||||
|
||||
### Step 4: Add Configuration Storage
|
||||
- Store LDAP/AD config (encrypted password)
|
||||
- Add API endpoints for config management
|
||||
|
||||
### Step 5: Update UI
|
||||
- Add LDAP/AD configuration page
|
||||
- Update SMB share creation form
|
||||
- Add user/group selector with LDAP/AD integration
|
||||
|
||||
## Dependencies & Requirements
|
||||
|
||||
### System Packages:
|
||||
```bash
|
||||
# For AD integration
|
||||
sudo apt-get install winbind libnss-winbind libpam-winbind krb5-user
|
||||
|
||||
# For LDAP integration
|
||||
sudo apt-get install libnss-ldap libpam-ldap ldap-utils
|
||||
|
||||
# Samba packages (should already be installed)
|
||||
sudo apt-get install samba samba-common-bin
|
||||
```
|
||||
|
||||
### Go Dependencies:
|
||||
```go
|
||||
// go.mod
|
||||
require (
|
||||
github.com/go-ldap/ldap/v3 v3.4.6
|
||||
)
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Password Storage:**
   - Encrypt the LDAP bind password at rest
   - Use environment variables or secret management in production

2. **TLS/SSL:**
   - Always use `ldaps://` (LDAP over TLS) in production
   - Support certificate validation

3. **Service Account:**
   - Use a dedicated service account with minimal permissions
   - Read-only access for user/group queries

4. **Network Security:**
   - Firewall rules for LDAP/AD ports (389, 636, 88, 445)
   - Consider a VPN or private network for the LDAP/AD server
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
1. **Unit Tests:**
   - LDAP connection handling
   - User/group query parsing
   - Samba config generation with LDAP/AD

2. **Integration Tests:**
   - Test against an LDAP server (OpenLDAP)
   - Test against an AD server (Windows Server or Samba AD)
   - Test the user authentication flow

3. **Manual Testing:**
   - Create an SMB share with an AD user
   - Create an SMB share with an AD group
   - Test access from a Windows client
   - Test access from a Linux client
|
||||
|
||||
## Migration Path
|
||||
|
||||
### For Existing Installations:
|
||||
|
||||
1. **Backward Compatibility:**
   - Keep support for `security = user` mode
   - Existing shares continue to work
   - LDAP/AD is an optional enhancement

2. **Gradual Migration:**
   - Admins can enable LDAP/AD gradually
   - Test with non-production shares first
   - Migrate user-by-user or group-by-group
|
||||
|
||||
## Estimated Effort
|
||||
|
||||
- **Phase 1 (LDAP/AD Config):** 2-3 days
|
||||
- **Phase 2 (User Sync & Groups):** 3-4 days
|
||||
- **Phase 3 (UI Integration):** 2-3 days
|
||||
- **Testing & Documentation:** 2-3 days
|
||||
|
||||
**Total: ~10-13 days** for full LDAP/AD integration
|
||||
|
||||
## Alternative: Hybrid Approach
|
||||
|
||||
If full LDAP/AD integration is too complex for now, a **hybrid approach** can be implemented:
|
||||
|
||||
1. **Keep current `security = user` mode**
|
||||
2. **Add manual user import from LDAP/AD:**
|
||||
   - Admins can sync users from LDAP/AD into the local Samba user database
   - Users are still managed in Samba, but LDAP/AD is the source of truth
   - A periodic sync job keeps users up to date

3. **Benefits:**
   - Simpler implementation
   - No need for a complex Samba AD join
   - Still provides centralized user management
|
||||
|
||||
## Conclusion
|
||||
|
||||
✅ **LDAP/AD integration is highly feasible and recommended for an enterprise storage solution**

**Recommended Approach:**
1. Start with **Phase 1** (LDAP/AD config support)
2. Test in a development environment
3. Gradually implement Phases 2 and 3
4. Consider the hybrid approach if full integration proves too complex

**Priority:**
- High for enterprise customers that already have AD/LDAP infrastructure
- Medium for SMB customers that may not yet have AD/LDAP
|
||||
|
||||
366
docs/TESTING.md
Normal file
@@ -0,0 +1,366 @@
|
||||
# Testing Infrastructure
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS includes a comprehensive testing infrastructure with unit tests, integration tests, and test utilities to ensure code quality and reliability.
|
||||
|
||||
## Test Structure
|
||||
|
||||
```
|
||||
atlas/
|
||||
├── internal/
|
||||
│ ├── validation/
|
||||
│ │ └── validator_test.go # Unit tests for validation
|
||||
│ ├── errors/
|
||||
│ │ └── errors_test.go # Unit tests for error handling
|
||||
│ └── testing/
|
||||
│ └── helpers.go # Test utilities and helpers
|
||||
└── test/
|
||||
└── integration_test.go # Integration tests
|
||||
```
|
||||
|
||||
## Running Tests
|
||||
|
||||
### Run All Tests
|
||||
|
||||
```bash
|
||||
go test ./...
|
||||
```
|
||||
|
||||
### Run Tests for Specific Package
|
||||
|
||||
```bash
|
||||
# Validation tests
|
||||
go test ./internal/validation -v
|
||||
|
||||
# Error handling tests
|
||||
go test ./internal/errors -v
|
||||
|
||||
# Integration tests
|
||||
go test ./test -v
|
||||
```
|
||||
|
||||
### Run Tests with Coverage
|
||||
|
||||
```bash
|
||||
go test ./... -cover
|
||||
```
|
||||
|
||||
### Generate Coverage Report
|
||||
|
||||
```bash
|
||||
go test ./... -coverprofile=coverage.out
|
||||
go tool cover -html=coverage.out
|
||||
```
|
||||
|
||||
## Unit Tests
|
||||
|
||||
### Validation Tests
|
||||
|
||||
Tests for input validation functions:
|
||||
|
||||
```bash
|
||||
go test ./internal/validation -v
|
||||
```
|
||||
|
||||
**Coverage:**
|
||||
- ZFS name validation
|
||||
- Username validation
|
||||
- Password validation
|
||||
- Email validation
|
||||
- Share name validation
|
||||
- IQN validation
|
||||
- Size format validation
|
||||
- Path validation
|
||||
- CIDR validation
|
||||
- String sanitization
|
||||
- Path sanitization
|
||||
|
||||
**Example:**
|
||||
```go
|
||||
func TestValidateZFSName(t *testing.T) {
|
||||
err := ValidateZFSName("tank")
|
||||
if err != nil {
|
||||
t.Errorf("expected no error for valid name")
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Error Handling Tests
|
||||
|
||||
Tests for error handling and API errors:
|
||||
|
||||
```bash
|
||||
go test ./internal/errors -v
|
||||
```
|
||||
|
||||
**Coverage:**
|
||||
- Error code validation
|
||||
- HTTP status code mapping
|
||||
- Error message formatting
|
||||
- Error details attachment
|
||||
|
||||
## Integration Tests
|
||||
|
||||
### Test Server
|
||||
|
||||
The integration test framework provides a test server:
|
||||
|
||||
```go
|
||||
ts := NewTestServer(t)
|
||||
defer ts.Close()
|
||||
```
|
||||
|
||||
**Features:**
|
||||
- In-memory database for tests
|
||||
- Test HTTP client
|
||||
- Authentication helpers
|
||||
- Request helpers
|
||||
|
||||
### Authentication Testing
|
||||
|
||||
```go
|
||||
// Login and get token
|
||||
ts.Login(t, "admin", "admin")
|
||||
|
||||
// Make authenticated request
|
||||
resp := ts.Get(t, "/api/v1/pools")
|
||||
```
|
||||
|
||||
### Request Helpers
|
||||
|
||||
```go
|
||||
// GET request
|
||||
resp := ts.Get(t, "/api/v1/pools")
|
||||
|
||||
// POST request
|
||||
resp := ts.Post(t, "/api/v1/pools", map[string]interface{}{
|
||||
"name": "tank",
|
||||
"vdevs": []string{"/dev/sda"},
|
||||
})
|
||||
```
|
||||
|
||||
## Test Utilities
|
||||
|
||||
### Test Helpers Package
|
||||
|
||||
The `internal/testing` package provides utilities:
|
||||
|
||||
**MakeRequest**: Create and execute HTTP requests
|
||||
```go
|
||||
recorder := MakeRequest(t, handler, TestRequest{
|
||||
Method: "GET",
|
||||
Path: "/api/v1/pools",
|
||||
})
|
||||
```
|
||||
|
||||
**Assertions**:
|
||||
- `AssertStatusCode`: Check HTTP status code
|
||||
- `AssertJSONResponse`: Validate JSON response
|
||||
- `AssertErrorResponse`: Check error response format
|
||||
- `AssertSuccessResponse`: Validate success response
|
||||
- `AssertHeader`: Check response headers
|
||||
|
||||
**Example:**
|
||||
```go
|
||||
recorder := MakeRequest(t, handler, TestRequest{
|
||||
Method: "GET",
|
||||
Path: "/api/v1/pools",
|
||||
})
|
||||
|
||||
AssertStatusCode(t, recorder, http.StatusOK)
|
||||
response := AssertJSONResponse(t, recorder)
|
||||
```
|
||||
|
||||
### Mock Clients
|
||||
|
||||
**MockZFSClient**: Mock ZFS client for testing
|
||||
|
||||
```go
|
||||
mockClient := NewMockZFSClient()
|
||||
mockClient.AddPool(map[string]interface{}{
|
||||
"name": "tank",
|
||||
"size": "10TB",
|
||||
})
|
||||
```
|
||||
|
||||
## Writing Tests
|
||||
|
||||
### Unit Test Template
|
||||
|
||||
```go
|
||||
func TestFunctionName(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
wantErr bool
|
||||
}{
|
||||
{"valid input", "valid", false},
|
||||
{"invalid input", "invalid", true},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
err := FunctionName(tt.input)
|
||||
if (err != nil) != tt.wantErr {
|
||||
t.Errorf("FunctionName(%q) error = %v, wantErr %v",
|
||||
tt.input, err, tt.wantErr)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Integration Test Template
|
||||
|
||||
```go
|
||||
func TestEndpoint(t *testing.T) {
|
||||
ts := NewTestServer(t)
|
||||
defer ts.Close()
|
||||
|
||||
ts.Login(t, "admin", "admin")
|
||||
|
||||
resp := ts.Get(t, "/api/v1/endpoint")
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
t.Errorf("expected status 200, got %d", resp.StatusCode)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Test Coverage Goals
|
||||
|
||||
### Current Coverage
|
||||
|
||||
- **Validation Package**: ~95% coverage
|
||||
- **Error Package**: ~90% coverage
|
||||
- **Integration Tests**: Core endpoints covered
|
||||
|
||||
### Target Coverage
|
||||
|
||||
- **Unit Tests**: >80% coverage for all packages
|
||||
- **Integration Tests**: All API endpoints
|
||||
- **Edge Cases**: Error conditions and boundary cases
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Test Naming
|
||||
|
||||
Use descriptive test names:
|
||||
```go
|
||||
func TestValidateZFSName_ValidName_ReturnsNoError(t *testing.T) {
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
### 2. Table-Driven Tests
|
||||
|
||||
Use table-driven tests for multiple cases:
|
||||
```go
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
wantErr bool
|
||||
}{
|
||||
// test cases
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Test Isolation
|
||||
|
||||
Each test should be independent:
|
||||
```go
|
||||
func TestSomething(t *testing.T) {
|
||||
// Setup
|
||||
ts := NewTestServer(t)
|
||||
defer ts.Close() // Cleanup
|
||||
|
||||
// Test
|
||||
// ...
|
||||
}
|
||||
```
|
||||
|
||||
### 4. Error Testing
|
||||
|
||||
Test both success and error cases:
|
||||
```go
|
||||
// Success case
|
||||
err := ValidateZFSName("tank")
|
||||
if err != nil {
|
||||
t.Error("expected no error")
|
||||
}
|
||||
|
||||
// Error case
|
||||
err = ValidateZFSName("")
|
||||
if err == nil {
|
||||
t.Error("expected error for empty name")
|
||||
}
|
||||
```
|
||||
|
||||
### 5. Use Test Helpers
|
||||
|
||||
Use helper functions for common patterns:
|
||||
```go
|
||||
recorder := MakeRequest(t, handler, TestRequest{
|
||||
Method: "GET",
|
||||
Path: "/api/v1/pools",
|
||||
})
|
||||
AssertStatusCode(t, recorder, http.StatusOK)
|
||||
```
|
||||
|
||||
## Continuous Integration
|
||||
|
||||
### GitHub Actions Example
|
||||
|
||||
```yaml
|
||||
name: Tests
|
||||
|
||||
on: [push, pull_request]
|
||||
|
||||
jobs:
|
||||
test:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- uses: actions/setup-go@v2
|
||||
with:
|
||||
go-version: '1.21'
|
||||
- run: go test ./... -v
|
||||
- run: go test ./... -coverprofile=coverage.out
|
||||
- run: go tool cover -func=coverage.out
|
||||
```
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **More Unit Tests**: Expand coverage for all packages
|
||||
2. **Integration Tests**: Complete API endpoint coverage
|
||||
3. **Performance Tests**: Benchmark critical paths
|
||||
4. **Load Tests**: Stress testing with high concurrency
|
||||
5. **Mock Services**: Mock external dependencies
|
||||
6. **Test Fixtures**: Reusable test data
|
||||
7. **Golden Files**: Compare outputs to expected results
|
||||
8. **Fuzzing**: Property-based testing
|
||||
9. **Race Detection**: Test for race conditions
|
||||
10. **Test Documentation**: Generate test documentation
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Tests Failing
|
||||
|
||||
1. **Check Test Output**: Run with `-v` flag for verbose output
|
||||
2. **Check Dependencies**: Ensure all dependencies are available
|
||||
3. **Check Environment**: Verify test environment setup
|
||||
4. **Check Test Data**: Ensure test data is correct
|
||||
|
||||
### Coverage Issues
|
||||
|
||||
1. **Run Coverage**: `go test ./... -cover`
|
||||
2. **View Report**: `go tool cover -html=coverage.out`
|
||||
3. **Identify Gaps**: Look for untested code paths
|
||||
4. **Add Tests**: Write tests for uncovered code
|
||||
|
||||
### Integration Test Issues
|
||||
|
||||
1. **Check Server**: Verify test server starts correctly
|
||||
2. **Check Database**: Ensure in-memory database works
|
||||
3. **Check Auth**: Verify authentication in tests
|
||||
4. **Check Cleanup**: Ensure proper cleanup after tests
|
||||
376
docs/TUI.md
Normal file
@@ -0,0 +1,376 @@
|
||||
# Terminal User Interface (TUI)
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS provides a Terminal User Interface (TUI) for managing the storage system from the command line. The TUI provides an interactive menu-driven interface that connects to the Atlas API.
|
||||
|
||||
## Features
|
||||
|
||||
- **Interactive Menus**: Navigate through system features with simple menus
|
||||
- **Authentication**: Secure login to the API
|
||||
- **ZFS Management**: View and manage pools, datasets, and ZVOLs
|
||||
- **Storage Services**: Manage SMB shares, NFS exports, and iSCSI targets
|
||||
- **Snapshot Management**: Create snapshots and manage policies
|
||||
- **System Information**: View system health and diagnostics
|
||||
- **Backup & Restore**: Manage configuration backups
|
||||
|
||||
## Installation
|
||||
|
||||
Build the TUI binary:
|
||||
|
||||
```bash
|
||||
go build ./cmd/atlas-tui
|
||||
```
|
||||
|
||||
Or use the Makefile:
|
||||
|
||||
```bash
|
||||
make build
|
||||
```
|
||||
|
||||
This creates the `atlas-tui` binary.
|
||||
|
||||
## Configuration
|
||||
|
||||
### API URL
|
||||
|
||||
Set the API URL via environment variable:
|
||||
|
||||
```bash
|
||||
export ATLAS_API_URL=http://localhost:8080
|
||||
./atlas-tui
|
||||
```
|
||||
|
||||
Default: `http://localhost:8080`
|
||||
|
||||
## Usage
|
||||
|
||||
### Starting the TUI
|
||||
|
||||
```bash
|
||||
./atlas-tui
|
||||
```
|
||||
|
||||
### Authentication
|
||||
|
||||
On first run, you'll be prompted to login:
|
||||
|
||||
```
|
||||
=== AtlasOS Login ===
|
||||
Username: admin
|
||||
Password: ****
|
||||
Login successful!
|
||||
```
|
||||
|
||||
### Main Menu
|
||||
|
||||
```
|
||||
=== AtlasOS Terminal Interface ===
|
||||
1. ZFS Management
|
||||
2. Storage Services
|
||||
3. Snapshots
|
||||
4. System Information
|
||||
5. Backup & Restore
|
||||
0. Exit
|
||||
```
|
||||
|
||||
## Menu Options
|
||||
|
||||
### 1. ZFS Management
|
||||
|
||||
**Sub-menu:**
|
||||
- List Pools
|
||||
- List Datasets
|
||||
- List ZVOLs
|
||||
- List Disks
|
||||
|
||||
**Example - List Pools:**
|
||||
```
|
||||
=== ZFS Pools ===
|
||||
1. tank
|
||||
Size: 10TB
|
||||
Used: 2TB
|
||||
```
|
||||
|
||||
### 2. Storage Services
|
||||
|
||||
**Sub-menu:**
|
||||
- SMB Shares
|
||||
- NFS Exports
|
||||
- iSCSI Targets
|
||||
|
||||
**SMB Shares:**
|
||||
- List Shares
|
||||
- Create Share
|
||||
|
||||
**Example - Create SMB Share:**
|
||||
```
|
||||
Share name: data-share
|
||||
Dataset: tank/data
|
||||
Path (optional, press Enter to auto-detect):
|
||||
Description (optional): Main data share
|
||||
SMB share created successfully!
|
||||
Share: data-share
|
||||
```
|
||||
|
||||
**NFS Exports:**
|
||||
- List Exports
|
||||
- Create Export
|
||||
|
||||
**Example - Create NFS Export:**
|
||||
```
|
||||
Dataset: tank/data
|
||||
Path (optional, press Enter to auto-detect):
|
||||
Clients (comma-separated, e.g., 192.168.1.0/24,*): 192.168.1.0/24
|
||||
NFS export created successfully!
|
||||
Export: /tank/data
|
||||
```
|
||||
|
||||
**iSCSI Targets:**
|
||||
- List Targets
|
||||
- Create Target
|
||||
|
||||
**Example - Create iSCSI Target:**
|
||||
```
|
||||
IQN (e.g., iqn.2024-12.com.atlas:target1): iqn.2024-12.com.atlas:target1
|
||||
iSCSI target created successfully!
|
||||
Target: iqn.2024-12.com.atlas:target1
|
||||
```
|
||||
|
||||
### 3. Snapshots
|
||||
|
||||
**Sub-menu:**
|
||||
- List Snapshots
|
||||
- Create Snapshot
|
||||
- List Snapshot Policies
|
||||
|
||||
**Example - Create Snapshot:**
|
||||
```
|
||||
Dataset name: tank/data
|
||||
Snapshot name: backup-2024-12-20
|
||||
Snapshot created successfully!
|
||||
Snapshot: tank/data@backup-2024-12-20
|
||||
```
|
||||
|
||||
### 4. System Information
|
||||
|
||||
**Sub-menu:**
|
||||
- System Info
|
||||
- Health Check
|
||||
- Dashboard
|
||||
|
||||
**System Info:**
|
||||
```
|
||||
=== System Information ===
|
||||
Version: v0.1.0-dev
|
||||
Uptime: 3600 seconds
|
||||
Go Version: go1.21.0
|
||||
Goroutines: 15
|
||||
|
||||
Services:
|
||||
smb: running
|
||||
nfs: running
|
||||
iscsi: stopped
|
||||
```
|
||||
|
||||
**Health Check:**
|
||||
```
|
||||
=== Health Check ===
|
||||
Status: healthy
|
||||
|
||||
Component Checks:
|
||||
zfs: healthy
|
||||
database: healthy
|
||||
smb: healthy
|
||||
nfs: healthy
|
||||
iscsi: stopped
|
||||
```
|
||||
|
||||
**Dashboard:**
|
||||
```
|
||||
=== Dashboard ===
|
||||
Pools: 2
|
||||
Datasets: 10
|
||||
SMB Shares: 5
|
||||
NFS Exports: 3
|
||||
iSCSI Targets: 2
|
||||
```
|
||||
|
||||
### 5. Backup & Restore
|
||||
|
||||
**Sub-menu:**
|
||||
- List Backups
|
||||
- Create Backup
|
||||
- Restore Backup
|
||||
|
||||
**Example - Create Backup:**
|
||||
```
|
||||
Description (optional): Weekly backup
|
||||
Backup created successfully!
|
||||
Backup ID: backup-1703123456
|
||||
```
|
||||
|
||||
**Example - Restore Backup:**
|
||||
```
|
||||
=== Backups ===
|
||||
1. backup-1703123456
|
||||
Created: 2024-12-20T10:30:56Z
|
||||
Description: Weekly backup
|
||||
|
||||
Backup ID: backup-1703123456
|
||||
Restore backup? This will overwrite current configuration. (yes/no): yes
|
||||
Backup restored successfully!
|
||||
```
|
||||
|
||||
## Navigation
|
||||
|
||||
- **Select Option**: Enter the number or letter corresponding to the menu option
|
||||
- **Back**: Enter `0` to go back to the previous menu
|
||||
- **Exit**: Enter `0`, `q`, or `exit` to quit the application
|
||||
- **Interrupt**: Press `Ctrl+C` for graceful shutdown
|
||||
|
||||
## Keyboard Shortcuts
|
||||
|
||||
- `Ctrl+C`: Graceful shutdown
|
||||
- `0`: Back/Exit
|
||||
- `q`: Exit
|
||||
- `exit`: Exit
|
||||
|
||||
## Examples
|
||||
|
||||
### Complete Workflow
|
||||
|
||||
```bash
|
||||
# Start TUI
|
||||
./atlas-tui
|
||||
|
||||
# Login
|
||||
Username: admin
|
||||
Password: admin
|
||||
|
||||
# Navigate to ZFS Management
|
||||
Select option: 1
|
||||
|
||||
# List pools
|
||||
Select option: 1
|
||||
|
||||
# Go back
|
||||
Select option: 0
|
||||
|
||||
# Create SMB share
|
||||
Select option: 2
|
||||
Select option: 1
|
||||
Select option: 2
|
||||
Share name: myshare
|
||||
Dataset: tank/data
|
||||
...
|
||||
|
||||
# Exit
|
||||
Select option: 0
|
||||
Select option: 0
|
||||
```
|
||||
|
||||
## API Client
|
||||
|
||||
The TUI uses an HTTP client to communicate with the Atlas API:
|
||||
|
||||
- **Authentication**: JWT token-based authentication
|
||||
- **Error Handling**: Clear error messages for API failures
|
||||
- **Timeout**: 30-second timeout for requests
|
||||
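In outline, such a client looks like the sketch below. The login path `/api/v1/auth/login` and the `token` field in the response are assumptions, since the exact login endpoint is not documented on this page; the 30-second timeout and Bearer-token header match the behavior described above.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	apiURL := os.Getenv("ATLAS_API_URL")
	if apiURL == "" {
		apiURL = "http://localhost:8080"
	}
	client := &http.Client{Timeout: 30 * time.Second}

	// Login to obtain a JWT token (endpoint path and field name assumed).
	body, _ := json.Marshal(map[string]string{"username": "admin", "password": "admin"})
	resp, err := client.Post(apiURL+"/api/v1/auth/login", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, "login failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var loginResp struct {
		Token string `json:"token"`
	}
	_ = json.NewDecoder(resp.Body).Decode(&loginResp)

	// Use the token on subsequent requests.
	req, _ := http.NewRequest(http.MethodGet, apiURL+"/api/v1/pools", nil)
	req.Header.Set("Authorization", "Bearer "+loginResp.Token)
	poolsResp, err := client.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer poolsResp.Body.Close()
	fmt.Println("status:", poolsResp.Status)
}
```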
|
||||
## Error Handling
|
||||
|
||||
The TUI handles errors gracefully:
|
||||
|
||||
- **Connection Errors**: Clear messages when API is unreachable
|
||||
- **Authentication Errors**: Prompts for re-authentication
|
||||
- **API Errors**: Displays error messages from API responses
|
||||
- **Invalid Input**: Validates user input before sending requests
|
||||
|
||||
## Configuration File
|
||||
|
||||
Future enhancement: Support for configuration file:
|
||||
|
||||
```yaml
|
||||
api_url: http://localhost:8080
|
||||
username: admin
|
||||
# Token can be stored (with appropriate security)
|
||||
```
|
||||
|
||||
## Security Considerations
|
||||
|
||||
1. **Password Input**: Currently visible (future: hidden input)
|
||||
2. **Token Storage**: Token stored in memory only
|
||||
3. **HTTPS**: Use HTTPS for production API URLs
|
||||
4. **Credentials**: Never log credentials
|
||||
|
||||
## Limitations
|
||||
|
||||
- **Password Visibility**: Passwords are currently visible during input
|
||||
- **No Token Persistence**: Must login on each TUI start
|
||||
- **Basic Interface**: Text-based menus (not a full TUI library)
|
||||
- **Limited Error Recovery**: Some errors require restart
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
1. **Hidden Password Input**: Use library to hide password input
|
||||
2. **Token Persistence**: Store token securely for session persistence
|
||||
3. **Advanced TUI**: Use Bubble Tea or similar for rich interface
|
||||
4. **Command Mode**: Support command-line arguments for non-interactive use
|
||||
5. **Configuration File**: Support for config file
|
||||
6. **Auto-completion**: Tab completion for commands
|
||||
7. **History**: Command history support
|
||||
8. **Color Output**: Colored output for better readability
|
||||
9. **Progress Indicators**: Show progress for long operations
|
||||
10. **Batch Operations**: Support for batch operations
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Connection Errors
|
||||
|
||||
```
|
||||
Error: request failed: dial tcp 127.0.0.1:8080: connect: connection refused
|
||||
```
|
||||
|
||||
**Solution**: Ensure the API server is running:
|
||||
```bash
|
||||
./atlas-api
|
||||
```
|
||||
|
||||
### Authentication Errors
|
||||
|
||||
```
|
||||
Error: login failed: invalid credentials
|
||||
```
|
||||
|
||||
**Solution**: Check username and password. Default credentials:
|
||||
- Username: `admin`
|
||||
- Password: `admin`
|
||||
|
||||
### API URL Configuration
|
||||
|
||||
If API is on a different host/port:
|
||||
|
||||
```bash
|
||||
export ATLAS_API_URL=http://192.168.1.100:8080
|
||||
./atlas-tui
|
||||
```
|
||||
|
||||
## Comparison with Web GUI
|
||||
|
||||
| Feature | TUI | Web GUI |
|
||||
|---------|-----|---------|
|
||||
| **Access** | Local console | Browser |
|
||||
| **Setup** | No browser needed | Requires browser |
|
||||
| **Network** | Works offline (local) | Requires network |
|
||||
| **Rich UI** | Text-based | HTML/CSS/JS |
|
||||
| **Initial Setup** | Ideal for setup | Better for daily use |
|
||||
| **Maintenance** | Good for maintenance | Good for monitoring |
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Use TUI for Initial Setup**: TUI is ideal for initial system configuration
|
||||
2. **Use Web GUI for Daily Operations**: Web GUI provides better visualization
|
||||
3. **Keep API Running**: TUI requires the API server to be running
|
||||
4. **Secure Credentials**: Don't share credentials or tokens
|
||||
5. **Use HTTPS in Production**: Always use HTTPS for production API URLs
|
||||
232
docs/VALIDATION.md
Normal file
@@ -0,0 +1,232 @@
|
||||
# Input Validation & Sanitization
|
||||
|
||||
## Overview
|
||||
|
||||
AtlasOS implements comprehensive input validation and sanitization to ensure data integrity, security, and prevent injection attacks. All user inputs are validated before processing.
|
||||
|
||||
## Validation Rules
|
||||
|
||||
### ZFS Names (Pools, Datasets, ZVOLs, Snapshots)
|
||||
|
||||
**Rules:**
|
||||
- Must start with alphanumeric character
|
||||
- Can contain: `a-z`, `A-Z`, `0-9`, `_`, `-`, `.`, `:`
|
||||
- Cannot start with `-` or `.`
- Maximum length: 256 characters
- Cannot be empty

**Example:**
```go
if err := validation.ValidateZFSName("tank/data"); err != nil {
    // Handle error
}
```

### Usernames

**Rules:**
- Minimum length: 3 characters
- Maximum length: 32 characters
- Can contain: `a-z`, `A-Z`, `0-9`, `_`, `-`, `.`
- Must start with alphanumeric character

**Example:**
```go
if err := validation.ValidateUsername("admin"); err != nil {
    // Handle error
}
```

### Passwords

**Rules:**
- Minimum length: 8 characters
- Maximum length: 128 characters
- Must contain at least one letter
- Must contain at least one number

**Example:**
```go
if err := validation.ValidatePassword("SecurePass123"); err != nil {
    // Handle error
}
```

### Email Addresses

**Rules:**
- Optional field (can be empty)
- Maximum length: 254 characters
- Must match email format pattern
- Basic format validation (RFC 5322 simplified)

**Example:**
```go
if err := validation.ValidateEmail("user@example.com"); err != nil {
    // Handle error
}
```

### SMB Share Names

**Rules:**
- Maximum length: 80 characters
- Can contain: `a-z`, `A-Z`, `0-9`, `_`, `-`, `.`
- Cannot be reserved Windows names (CON, PRN, AUX, NUL, COM1-9, LPT1-9)
- Must start with alphanumeric character

**Example:**
```go
if err := validation.ValidateShareName("data-share"); err != nil {
    // Handle error
}
```

### iSCSI IQN (Qualified Name)

**Rules:**
- Must start with `iqn.`
- Format: `iqn.yyyy-mm.reversed.domain:identifier`
- Maximum length: 223 characters
- Year-month format validation

**Example:**
```go
if err := validation.ValidateIQN("iqn.2024-12.com.atlas:storage.target1"); err != nil {
    // Handle error
}
```

### Size Strings

**Rules:**
- Format: number followed by optional unit (K, M, G, T, P)
- Units: K (kilobytes), M (megabytes), G (gigabytes), T (terabytes), P (petabytes)
- Case insensitive

**Examples:**
- `"10"` - 10 bytes
- `"10K"` - 10 kilobytes
- `"1G"` - 1 gigabyte
- `"2T"` - 2 terabytes

**Example:**
```go
if err := validation.ValidateSize("10G"); err != nil {
    // Handle error
}
```

### Filesystem Paths

**Rules:**
- Must be absolute (start with `/`)
- Maximum length: 4096 characters
- Cannot contain `..` (path traversal)
- Cannot contain `//` (double slashes)
- Cannot contain null bytes

**Example:**
```go
if err := validation.ValidatePath("/tank/data"); err != nil {
    // Handle error
}
```

### CIDR/Hostname (NFS Clients)

**Rules:**
- Can be wildcard: `*`
- Can be CIDR notation: `192.168.1.0/24`
- Can be hostname: `server.example.com`
- Hostname must follow DNS rules

**Example:**
```go
if err := validation.ValidateCIDR("192.168.1.0/24"); err != nil {
    // Handle error
}
```

## Sanitization

### String Sanitization

Removes potentially dangerous characters:
- Null bytes (`\x00`)
- Control characters (ASCII < 32, except space)
- Leading/trailing whitespace

**Example:**
```go
clean := validation.SanitizeString(userInput)
```

### Path Sanitization

Normalizes filesystem paths:
- Removes leading/trailing whitespace
- Normalizes slashes (backslash to forward slash)
- Removes multiple consecutive slashes

**Example:**
```go
cleanPath := validation.SanitizePath("/tank//data/")
// Result: "/tank/data"
```

## Integration

### In API Handlers

Validation is integrated into all create/update handlers:

```go
func (a *App) handleCreatePool(w http.ResponseWriter, r *http.Request) {
    // ... decode request ...

    // Validate pool name
    if err := validation.ValidateZFSName(req.Name); err != nil {
        writeError(w, errors.ErrValidation(err.Error()))
        return
    }

    // ... continue with creation ...
}
```

### Error Responses

Validation errors return structured error responses:

```json
{
  "code": "VALIDATION_ERROR",
  "message": "validation error on field 'name': name cannot be empty",
  "details": ""
}
```

## Security Benefits

1. **Injection Prevention**: Input validation prevents command injection
2. **Path Traversal Protection**: Path validation prevents directory traversal
3. **Data Integrity**: Ensures data conforms to expected formats
4. **System Stability**: Prevents invalid operations that could crash services
5. **User Experience**: Clear error messages guide users to correct input

## Best Practices

1. **Validate Early**: Validate inputs as soon as they're received (see the sketch after this list)
2. **Sanitize Before Storage**: Sanitize strings before storing them in the database
3. **Validate Format**: Check format before parsing (e.g., size strings)
4. **Check Length**: Enforce maximum lengths to prevent DoS
5. **Whitelist Characters**: Only allow known-safe characters
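
The sketch below ties these practices together for a hypothetical user-creation handler, using the validators and sanitizers documented above. It is illustrative only: the import path, the request fields, and the `saveUser` helper are assumptions, not part of the documented API.

```go
// Sketch only: "validate early, sanitize before storage" for a user-creation
// request. The import path and saveUser are assumptions for illustration.
package handlers

import (
    "encoding/json"
    "net/http"

    "gitea.avt.data-center.id/othman.suseno/atlas/internal/validation" // assumed path
)

type createUserRequest struct {
    Username string `json:"username"`
    Password string `json:"password"`
    Email    string `json:"email"`
}

func handleCreateUser(w http.ResponseWriter, r *http.Request) {
    var req createUserRequest
    if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
        http.Error(w, "invalid JSON body", http.StatusBadRequest)
        return
    }

    // Validate early: reject bad input before doing any work.
    if err := validation.ValidateUsername(req.Username); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    if err := validation.ValidatePassword(req.Password); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    if err := validation.ValidateEmail(req.Email); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    // Sanitize before storage: strip control characters and stray whitespace.
    req.Username = validation.SanitizeString(req.Username)
    req.Email = validation.SanitizeString(req.Email)

    // saveUser stands in for the real persistence layer.
    if err := saveUser(req); err != nil {
        http.Error(w, "failed to save user", http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusCreated)
}

func saveUser(req createUserRequest) error { return nil } // stub for illustration
```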

## Future Enhancements

1. **Custom Validators**: Domain-specific validation rules
2. **Validation Middleware**: Automatic validation for all endpoints
3. **Schema Validation**: JSON schema validation
4. **Rate Limiting**: Prevent abuse through validation
5. **Input Normalization**: Automatic normalization of valid inputs
306
docs/ZFS_OPERATIONS.md
Normal file
@@ -0,0 +1,306 @@
# ZFS Operations

## Overview

AtlasOS provides comprehensive ZFS pool management including pool creation, import, export, scrubbing with progress monitoring, and health status reporting.

## Pool Operations

### List Pools

**GET** `/api/v1/pools`

Returns all ZFS pools.

**Response:**
```json
[
  {
    "name": "tank",
    "status": "ONLINE",
    "size": 1099511627776,
    "allocated": 536870912000,
    "free": 562641027776,
    "health": "ONLINE",
    "created_at": "2024-01-15T10:30:00Z"
  }
]
```

### Get Pool

**GET** `/api/v1/pools/{name}`

Returns details for a specific pool.

### Create Pool

**POST** `/api/v1/pools`

Creates a new ZFS pool.

**Request Body:**
```json
{
  "name": "tank",
  "vdevs": ["sda", "sdb"],
  "options": {
    "ashift": "12"
  }
}
```
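
For Go clients, a request equivalent to the curl examples later in this document could look like the sketch below. The endpoint and request body mirror the documentation above; the base URL and the `ATLAS_TOKEN` environment variable are assumptions made for the example.

```go
// Sketch: create a pool via POST /api/v1/pools. Base URL and ATLAS_TOKEN
// are assumptions for the example, not part of the documented API.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
)

func main() {
    body, _ := json.Marshal(map[string]any{
        "name":  "tank",
        "vdevs": []string{"sda", "sdb"},
        "options": map[string]string{
            "ashift": "12",
        },
    })

    req, err := http.NewRequest(http.MethodPost,
        "http://localhost:8080/api/v1/pools", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("ATLAS_TOKEN"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}
```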

### Destroy Pool

**DELETE** `/api/v1/pools/{name}`

Destroys a ZFS pool. **Warning**: This is a destructive operation.

## Pool Import/Export

### List Available Pools

**GET** `/api/v1/pools/available`

Lists pools that can be imported (pools that exist but are not currently imported).

**Response:**
```json
{
  "pools": ["tank", "backup"]
}
```

### Import Pool

**POST** `/api/v1/pools/import`

Imports a ZFS pool.

**Request Body:**
```json
{
  "name": "tank",
  "options": {
    "readonly": "on"
  }
}
```

**Options:**
- `readonly`: Set pool to read-only mode (`on`/`off`)
- Other ZFS pool properties

**Response:**
```json
{
  "message": "pool imported",
  "name": "tank"
}
```

### Export Pool

**POST** `/api/v1/pools/{name}/export`

Exports a ZFS pool (makes it unavailable but preserves data).

**Request Body (optional):**
```json
{
  "force": false
}
```

**Parameters:**
- `force` (boolean): Force export even if pool is in use

**Response:**
```json
{
  "message": "pool exported",
  "name": "tank"
}
```

## Scrub Operations

### Start Scrub

**POST** `/api/v1/pools/{name}/scrub`

Starts a scrub operation on a pool. A scrub verifies data integrity and repairs any errors found.

**Response:**
```json
{
  "message": "scrub started",
  "pool": "tank"
}
```

### Get Scrub Status

**GET** `/api/v1/pools/{name}/scrub`

Returns detailed scrub status with progress information.

**Response:**
```json
{
  "status": "in_progress",
  "progress": 45.2,
  "time_elapsed": "2h15m",
  "time_remain": "30m",
  "speed": "100M/s",
  "errors": 0,
  "repaired": 0,
  "last_scrub": "2024-12-15T10:30:00Z"
}
```

**Status Values:**
- `idle`: No scrub in progress
- `in_progress`: Scrub is currently running
- `completed`: Scrub completed successfully
- `error`: Scrub encountered errors

**Progress Fields:**
- `progress`: Percentage complete (0-100)
- `time_elapsed`: Time since scrub started
- `time_remain`: Estimated time remaining
- `speed`: Current scrub speed
- `errors`: Number of errors found
- `repaired`: Number of errors repaired
- `last_scrub`: Timestamp of last completed scrub

## Usage Examples

### Import a Pool

```bash
curl -X POST http://localhost:8080/api/v1/pools/import \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "tank"
  }'
```

### Start Scrub and Monitor Progress

```bash
# Start scrub
curl -X POST http://localhost:8080/api/v1/pools/tank/scrub \
  -H "Authorization: Bearer $TOKEN"

# Check progress
curl http://localhost:8080/api/v1/pools/tank/scrub \
  -H "Authorization: Bearer $TOKEN"
```

### Export Pool

```bash
curl -X POST http://localhost:8080/api/v1/pools/tank/export \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "force": false
  }'
```

## Scrub Best Practices

### When to Scrub

- **Regular Schedule**: Monthly or quarterly
- **After Disk Failures**: After replacing failed disks
- **Before Major Operations**: Before pool upgrades or migrations
- **After Data Corruption**: If data integrity issues are suspected

### Monitoring Scrub Progress

1. **Start Scrub**: Use the POST endpoint to start
2. **Monitor Progress**: Poll the GET endpoint every few minutes (see the sketch after this list)
3. **Check Errors**: Monitor the `errors` and `repaired` fields
4. **Wait for Completion**: Wait until `status` is `completed`
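
A minimal Go sketch of this polling loop is shown below. The response fields match the documented scrub status; the base URL, the pool name `tank`, and the `ATLAS_TOKEN` environment variable are assumptions made for the example.

```go
// Sketch: poll GET /api/v1/pools/{name}/scrub until the scrub finishes.
// Base URL, pool name, and ATLAS_TOKEN are assumptions for the example.
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
    "time"
)

type scrubStatus struct {
    Status   string  `json:"status"`
    Progress float64 `json:"progress"`
    Errors   int     `json:"errors"`
    Repaired int     `json:"repaired"`
}

func main() {
    url := "http://localhost:8080/api/v1/pools/tank/scrub"
    for {
        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("ATLAS_TOKEN"))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        var st scrubStatus
        err = json.NewDecoder(resp.Body).Decode(&st)
        resp.Body.Close()
        if err != nil {
            log.Fatal(err)
        }

        fmt.Printf("status=%s progress=%.1f%% errors=%d repaired=%d\n",
            st.Status, st.Progress, st.Errors, st.Repaired)

        // Stop once the scrub is no longer running.
        if st.Status != "in_progress" {
            return
        }
        time.Sleep(2 * time.Minute) // poll every few minutes, as recommended above
    }
}
```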

### Scrub Performance

- **Impact**: Scrub operations can impact pool performance
- **Scheduling**: Schedule during low-usage periods
- **Duration**: Large pools may take hours or days
- **I/O**: Scrub generates significant I/O load

## Pool Import/Export Use Cases

### Import Use Cases

1. **System Reboot**: Pools are automatically imported on boot
2. **Manual Import**: Import pools that were exported
3. **Read-Only Import**: Import a pool in read-only mode for inspection
4. **Recovery**: Import pools from backup systems

### Export Use Cases

1. **System Shutdown**: Export pools before shutdown
2. **Maintenance**: Export pools for maintenance operations
3. **Migration**: Export pools before moving to another system
4. **Backup**: Export pools before creating full backups

## Error Handling

### Pool Not Found

```json
{
  "code": "NOT_FOUND",
  "message": "pool not found"
}
```

### Scrub Already Running

```json
{
  "code": "CONFLICT",
  "message": "scrub already in progress"
}
```

### Pool in Use (Export)

```json
{
  "code": "CONFLICT",
  "message": "pool is in use, cannot export"
}
```

Use `force: true` to force export (use with caution).
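
As an illustration, a client can key off the documented `CONFLICT` error code and only then retry with `force: true`. The sketch below assumes the error is returned with HTTP status 409 and uses an assumed base URL and `ATLAS_TOKEN` environment variable; it is not part of the documented API surface.

```go
// Sketch: export a pool and retry with "force": true only on the documented
// CONFLICT error. Assumes CONFLICT maps to HTTP 409; base URL and ATLAS_TOKEN
// are assumptions for the example.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"
)

func exportPool(name string, force bool) (*http.Response, error) {
    body, _ := json.Marshal(map[string]bool{"force": force})
    req, err := http.NewRequest(http.MethodPost,
        "http://localhost:8080/api/v1/pools/"+name+"/export", bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("ATLAS_TOKEN"))
    req.Header.Set("Content-Type", "application/json")
    return http.DefaultClient.Do(req)
}

func main() {
    resp, err := exportPool("tank", false)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    if resp.StatusCode == http.StatusConflict {
        var apiErr struct {
            Code    string `json:"code"`
            Message string `json:"message"`
        }
        _ = json.NewDecoder(resp.Body).Decode(&apiErr)
        fmt.Println("export blocked:", apiErr.Message)

        // Force-exporting can disrupt active clients; do this deliberately.
        forced, err := exportPool("tank", true)
        if err != nil {
            log.Fatal(err)
        }
        defer forced.Body.Close()
        fmt.Println("forced export status:", forced.Status)
        return
    }
    fmt.Println("export status:", resp.Status)
}
```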

## Compliance with SRS

Per SRS section 4.2 ZFS Management:

- ✅ **List available disks**: Implemented
- ✅ **Create pools**: Implemented
- ✅ **Import pools**: Implemented (Priority 20)
- ✅ **Export pools**: Implemented (Priority 20)
- ✅ **Report pool health**: Implemented
- ✅ **Create and manage datasets**: Implemented
- ✅ **Create ZVOLs**: Implemented
- ✅ **Scrub operations**: Implemented
- ✅ **Progress monitoring**: Implemented (Priority 19)

## Future Enhancements

1. **Scrub Scheduling**: Automatic scheduled scrubs
2. **Scrub Notifications**: Alerts when a scrub completes or finds errors
3. **Pool Health Alerts**: Automatic alerts for pool health issues
4. **Import History**: Track pool import/export history
5. **Pool Properties**: Manage pool properties via API
6. **VDEV Management**: Add/remove vdevs from pools
7. **Pool Upgrade**: Upgrade pool version
8. **Resilver Operations**: Monitor and manage resilver operations
1866
docs/openapi.yaml
Normal file
File diff suppressed because it is too large
34
fix-sudoers.sh
Executable file
@@ -0,0 +1,34 @@
#!/bin/bash
# Quick fix script to add current user to ZFS sudoers for development
# Usage: sudo ./fix-sudoers.sh

set -e

# Use the invoking user when run via sudo (plain whoami would report root)
CURRENT_USER="${SUDO_USER:-$(whoami)}"
SUDOERS_FILE="/etc/sudoers.d/atlas-zfs"

echo "Adding $CURRENT_USER to ZFS sudoers for development..."

# Check if sudoers file exists
if [ ! -f "$SUDOERS_FILE" ]; then
    echo "Creating sudoers file..."
    cat > "$SUDOERS_FILE" <<EOF
# Allow ZFS commands without password for development
# This file is auto-generated - modify with caution
EOF
    chmod 440 "$SUDOERS_FILE"
fi

# Check if user is already in the file
if grep -q "^$CURRENT_USER" "$SUDOERS_FILE"; then
    echo "User $CURRENT_USER already has ZFS sudoers access"
    exit 0
fi

# Add user to sudoers file
cat >> "$SUDOERS_FILE" <<EOF
$CURRENT_USER ALL=(ALL) NOPASSWD: /usr/sbin/zpool, /usr/bin/zpool, /sbin/zpool, /usr/sbin/zfs, /usr/bin/zfs, /sbin/zfs
EOF

echo "Added $CURRENT_USER to ZFS sudoers"
echo "You can now run atlas-api without sudo password prompts"
22
go.mod
@@ -1,3 +1,23 @@
module example.com/atlasos
module gitea.avt.data-center.id/othman.suseno/atlas

go 1.24.4

require (
    github.com/golang-jwt/jwt/v5 v5.3.0
    github.com/lib/pq v1.10.9
    golang.org/x/crypto v0.46.0
    modernc.org/sqlite v1.40.1
)

require (
    github.com/dustin/go-humanize v1.0.1 // indirect
    github.com/google/uuid v1.6.0 // indirect
    github.com/mattn/go-isatty v0.0.20 // indirect
    github.com/ncruces/go-strftime v0.1.9 // indirect
    github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
    golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b // indirect
    golang.org/x/sys v0.39.0 // indirect
    modernc.org/libc v1.66.10 // indirect
    modernc.org/mathutil v1.7.1 // indirect
    modernc.org/memory v1.11.0 // indirect
)
55
go.sum
Normal file
@@ -0,0 +1,55 @@
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b h1:M2rDM6z3Fhozi9O7NWsxAkg/yqS/lQJ6PmkyIV3YP+o=
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ=
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg=
golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s=
modernc.org/cc/v4 v4.26.5 h1:xM3bX7Mve6G8K8b+T11ReenJOT+BmVqQj0FY5T4+5Y4=
modernc.org/cc/v4 v4.26.5/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.1 h1:wPKYn5EC/mYTqBO373jKjvX2n+3+aK7+sICCv4Fjy1A=
modernc.org/ccgo/v4 v4.28.1/go.mod h1:uD+4RnfrVgE6ec9NGguUNdhqzNIeeomeXf6CL0GTE5Q=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.66.10 h1:yZkb3YeLx4oynyR+iUsXsybsX4Ubx7MQlSYEw4yj59A=
modernc.org/libc v1.66.10/go.mod h1:8vGSEwvoUoltr4dlywvHqjtAqHBaw0j1jI7iFBTAr2I=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.40.1 h1:VfuXcxcUWWKRBuP8+BR9L7VnmusMgBNNnBYGEe9w/iY=
modernc.org/sqlite v1.40.1/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
51
installer/README.md
Normal file
@@ -0,0 +1,51 @@
# AtlasOS Installer

This directory contains installation scripts for AtlasOS on Ubuntu 24.04.

## Files

- **`install.sh`** - Main installation script
- **`bundle-downloader.sh`** - Downloads all packages for airgap installation
- **`README.md`** - This file

## Quick Start

### Standard Installation (with internet)

```bash
# From repository root
sudo ./installer/install.sh
```

### Airgap Installation (offline)

**Step 1: Download bundle (on internet-connected system)**
```bash
sudo ./installer/bundle-downloader.sh ./atlas-bundle
```

**Step 2: Transfer bundle to airgap system**

**Step 3: Install on airgap system**
```bash
sudo ./installer/install.sh --offline-bundle /path/to/atlas-bundle
```

## Options

See help for all options:
```bash
sudo ./installer/install.sh --help
```

## Documentation

- **Installation Guide**: `../docs/INSTALLATION.md`
- **Airgap Installation**: `../docs/AIRGAP_INSTALLATION.md`

## Requirements

- Ubuntu 24.04 (Noble Numbat)
- Root/sudo access
- Internet connection (for standard installation)
- Or offline bundle (for airgap installation)
34
installer/atlas-bundle-ubuntu24.04/MANIFEST.txt
Normal file
@@ -0,0 +1,34 @@
AtlasOS Bundle for Ubuntu 24.04 (Noble Numbat)
Generated: 2025-12-15 14:23:10 UTC
Packages: 21 main packages + dependencies

Main Packages:
build-essential
git
curl
wget
ca-certificates
software-properties-common
apt-transport-https
zfsutils-linux
zfs-zed
zfs-initramfs
samba
samba-common-bin
nfs-kernel-server
rpcbind
targetcli-fb
sqlite3
libsqlite3-dev
golang-go
openssl
net-tools
iproute2

Total .deb files: 326

Installation Instructions:
1. Transfer this entire directory to your airgap system
2. Run: sudo ./installer/install.sh --offline-bundle "/app/atlas/installer/atlas-bundle-ubuntu24.04"

Note: Ensure all .deb files are present before transferring
42
installer/atlas-bundle-ubuntu24.04/README.md
Normal file
@@ -0,0 +1,42 @@
# AtlasOS Offline Bundle for Ubuntu 24.04

This bundle contains all required packages and dependencies for installing AtlasOS on an airgap (offline) Ubuntu 24.04 system.

## Contents

- All required .deb packages with dependencies
- Go binary (fallback, if needed)
- Installation manifest

## Usage

1. Transfer this entire directory to your airgap system
2. On the airgap system, run:
   ```bash
   sudo ./installer/install.sh --offline-bundle /path/to/this/directory
   ```

## Bundle Size

The bundle typically contains:
- ~100-200 .deb packages (including dependencies)
- Total size: ~500MB - 1GB (depending on architecture)

## Verification

Before transferring, verify the bundle:
```bash
# Count .deb files
find . -name "*.deb" | wc -l

# Check manifest
cat MANIFEST.txt
```

## Troubleshooting

If installation fails:
1. Check that all .deb files are present
2. Verify you're on Ubuntu 24.04
3. Check disk space (need at least 2GB free)
4. Review installation logs
BIN
installer/atlas-bundle-ubuntu24.04/adduser_3.137ubuntu1_all.deb
Normal file
Binary file not shown.
BIN
installer/atlas-bundle-ubuntu24.04/apt_2.8.3_amd64.deb
Normal file
Binary file not shown.
BIN
installer/atlas-bundle-ubuntu24.04/debconf_1.5.86ubuntu1_all.deb
Normal file
Binary file not shown.
BIN
installer/atlas-bundle-ubuntu24.04/go.tar.gz
Normal file
Binary file not shown.
BIN
installer/atlas-bundle-ubuntu24.04/iso-codes_4.16.0-1_all.deb
Normal file
Binary file not shown.
Some files were not shown because too many files have changed in this diff