docs/HTTPS_TLS.md (new file, 297 lines)
@@ -0,0 +1,297 @@
# HTTPS/TLS Support

## Overview

AtlasOS supports HTTPS/TLS encryption for secure communication. TLS can be enabled via environment variables, and the system will automatically enforce HTTPS connections when TLS is enabled.

## Configuration

### Environment Variables

TLS is configured via environment variables:

- **`ATLAS_TLS_CERT`**: Path to TLS certificate file (PEM format)
- **`ATLAS_TLS_KEY`**: Path to TLS private key file (PEM format)
- **`ATLAS_TLS_ENABLED`**: Force-enable TLS (optional; auto-enabled if cert/key are provided)

### Automatic Detection

TLS is automatically enabled if both `ATLAS_TLS_CERT` and `ATLAS_TLS_KEY` are set:

```bash
export ATLAS_TLS_CERT=/etc/atlas/tls/cert.pem
export ATLAS_TLS_KEY=/etc/atlas/tls/key.pem
./atlas-api
```

### Explicit Enable

Force TLS explicitly; startup fails if the cert or key is missing:

```bash
export ATLAS_TLS_ENABLED=true
export ATLAS_TLS_CERT=/etc/atlas/tls/cert.pem
export ATLAS_TLS_KEY=/etc/atlas/tls/key.pem
./atlas-api
```

## Certificate Requirements

### Certificate Format

- **Format**: PEM (Privacy-Enhanced Mail)
- **Certificate**: X.509 certificate
- **Key**: RSA or ECDSA private key
- **Chain**: Certificate chain can be included in the cert file

### Certificate Validation

At startup, the system validates:

- Certificate file exists
- Key file exists
- Certificate and key are valid and match
- Certificate is not expired (checked by Go's TLS library)
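These startup checks can be reproduced locally with `openssl`. The sketch below generates a throwaway self-signed pair and runs the same three checks; the file names and subject are placeholders, and `openssl` is assumed to be installed:

```shell
# Generate a throwaway cert/key pair to check against (placeholder subject)
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 30 -nodes -subj "/CN=atlas.test" 2>/dev/null

# 1. Certificate and key files exist
test -f cert.pem && test -f key.pem && echo "files exist"

# 2. Certificate and key match (compare public-key moduli)
cert_mod=$(openssl x509 -noout -modulus -in cert.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in key.pem | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "cert/key match"

# 3. Certificate is not expired
openssl x509 -checkend 0 -noout -in cert.pem >/dev/null && echo "not expired"
```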
## TLS Configuration

### Supported TLS Versions

- **Minimum**: TLS 1.2
- **Maximum**: TLS 1.3

### Cipher Suites

The system uses secure cipher suites:

- `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384`
- `TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305`
- `TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305`
- `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
- `TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256`

### Elliptic Curves

Preferred curves:

- `CurveP256`
- `CurveP384`
- `CurveP521`
- `X25519`

## HTTPS Enforcement

### Automatic Redirect

When TLS is enabled, HTTP requests are automatically redirected to HTTPS:

```
HTTP Request → 301 Moved Permanently → HTTPS
```

### Exceptions

HTTPS enforcement is skipped for:

- **Health checks**: `/healthz`, `/health` (allows monitoring)
- **Localhost**: Requests from `localhost`, `127.0.0.1`, `::1` (development)

### Reverse Proxy Support

The system respects the `X-Forwarded-Proto` header for reverse proxy setups:

```
X-Forwarded-Proto: https
```

## Usage Examples

### Development (HTTP)

```bash
# No TLS configuration - runs on HTTP
./atlas-api
```

### Production (HTTPS)

```bash
# Enable TLS
export ATLAS_TLS_CERT=/etc/ssl/certs/atlas.crt
export ATLAS_TLS_KEY=/etc/ssl/private/atlas.key
export ATLAS_HTTP_ADDR=:8443
./atlas-api
```

### Using Let's Encrypt

```bash
# Let's Encrypt certificates
export ATLAS_TLS_CERT=/etc/letsencrypt/live/atlas.example.com/fullchain.pem
export ATLAS_TLS_KEY=/etc/letsencrypt/live/atlas.example.com/privkey.pem
./atlas-api
```

### Self-Signed Certificate (Testing)

Generate a self-signed certificate:

```bash
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
```

Use it:

```bash
export ATLAS_TLS_CERT=./cert.pem
export ATLAS_TLS_KEY=./key.pem
./atlas-api
```

## Security Headers

When TLS is enabled, additional security headers are set:

### HSTS (HTTP Strict Transport Security)

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

- **Max Age**: 1 year (31536000 seconds)
- **Include Subdomains**: Yes
- **Purpose**: Forces browsers to use HTTPS
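The one-year `max-age` value can be sanity-checked with shell arithmetic, and the live header inspected with `curl` against a running instance (the hostname below is a placeholder):

```shell
# 365 days in seconds - matches the max-age above
echo $((365 * 24 * 3600))

# To inspect the header on a live endpoint (placeholder host):
#   curl -sI https://atlas.example.com/ | grep -i strict-transport-security
```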
### Content Security Policy

CSP is configured to work with HTTPS:

```
Content-Security-Policy: default-src 'self'; ...
```

## Reverse Proxy Setup

### Nginx

```nginx
server {
    listen 443 ssl;
    server_name atlas.example.com;

    ssl_certificate /etc/ssl/certs/atlas.crt;
    ssl_certificate_key /etc/ssl/private/atlas.key;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

### Apache

```apache
<VirtualHost *:443>
    ServerName atlas.example.com

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/atlas.crt
    SSLCertificateKeyFile /etc/ssl/private/atlas.key

    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

    RequestHeader set X-Forwarded-Proto "https"
</VirtualHost>
```

## Troubleshooting

### Certificate Not Found

```
TLS configuration error: TLS certificate file not found: /path/to/cert.pem
```

**Solution**: Verify certificate file path and permissions.

### Certificate/Key Mismatch

```
TLS configuration error: load TLS certificate: tls: private key does not match public key
```

**Solution**: Ensure certificate and key files match.
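A quick way to confirm a mismatch is to compare the public-key moduli of the certificate and the key. The sketch below deliberately pairs a certificate with an unrelated key to show the check firing (file names are placeholders; assumes `openssl`):

```shell
# A cert with its own key, plus an unrelated second key
openssl req -x509 -newkey rsa:2048 -keyout key1.pem -out cert1.pem \
  -days 1 -nodes -subj "/CN=a" 2>/dev/null
openssl genrsa -out key2.pem 2048 2>/dev/null

# The md5 digests of the moduli differ when cert and key do not match
m_cert=$(openssl x509 -noout -modulus -in cert1.pem | openssl md5)
m_key=$(openssl rsa -noout -modulus -in key2.pem | openssl md5)
[ "$m_cert" != "$m_key" ] && echo "mismatch detected"
```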
### Certificate Expired

```
TLS handshake error: x509: certificate has expired or is not yet valid
```

**Solution**: Renew certificate or use a valid certificate.
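`openssl x509 -checkend` can flag upcoming expiry before the handshake starts failing. This sketch generates a certificate valid for one day and checks whether it survives the next 30 days (file names are placeholders):

```shell
openssl req -x509 -newkey rsa:2048 -keyout short-key.pem -out short-cert.pem \
  -days 1 -nodes -subj "/CN=short-lived" 2>/dev/null

# -checkend exits non-zero if the cert expires within the given window (seconds)
if ! openssl x509 -checkend $((30 * 24 * 3600)) -noout -in short-cert.pem >/dev/null; then
  echo "renew soon"
fi
```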
### Port Already in Use

```
listen tcp :8443: bind: address already in use
```

**Solution**: Change the port or stop the conflicting service.

## Best Practices

### 1. Use Valid Certificates

- **Production**: Use certificates from trusted CAs (Let's Encrypt, commercial CAs)
- **Development**: Self-signed certificates are acceptable
- **Testing**: Use test certificates with short expiration

### 2. Certificate Renewal

- **Monitor Expiration**: Set up alerts for certificate expiration
- **Auto-Renewal**: Use tools like `certbot` for Let's Encrypt
- **Reload**: Restart the service after certificate renewal so the new certificate is picked up

### 3. Key Security

- **Permissions**: Restrict key file permissions (`chmod 600`)
- **Ownership**: Use a dedicated user for the key file
- **Storage**: Store keys securely; never commit them to version control
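Tightening and verifying key permissions is two commands (the key path is a placeholder; `stat -c` is the GNU coreutils form):

```shell
touch key.pem                 # placeholder for the real key file
chmod 600 key.pem             # owner read/write only
stat -c '%a' key.pem          # prints the octal mode, 600
# chown atlas:atlas key.pem   # dedicated service user (hypothetical name)
```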
### 4. TLS Configuration

- **Minimum Version**: TLS 1.2 or higher
- **Cipher Suites**: Use strong cipher suites only
- **HSTS**: Enable HSTS for production

### 5. Reverse Proxy

- **Terminate TLS**: Terminate TLS at the reverse proxy for better performance
- **Forward Headers**: Forward the `X-Forwarded-Proto` header
- **Health Checks**: Allow HTTP for health checks

## Compliance

### SRS Requirement

Per SRS section 5.3 Security:

- **HTTPS SHALL be enforced for the web UI** ✅

This implementation:

- ✅ Supports TLS/HTTPS
- ✅ Enforces HTTPS when TLS is enabled
- ✅ Provides secure cipher suites
- ✅ Includes HSTS headers
- ✅ Validates certificates

## Future Enhancements

1. **Certificate Auto-Renewal**: Automatic certificate renewal
2. **OCSP Stapling**: Online Certificate Status Protocol stapling
3. **Certificate Rotation**: Seamless certificate rotation
4. **Future TLS Versions**: Support for TLS versions beyond 1.3
5. **Client Certificate Authentication**: Mutual TLS (mTLS)
6. **Certificate Monitoring**: Certificate expiration monitoring
docs/ZFS_OPERATIONS.md (new file, 306 lines)
@@ -0,0 +1,306 @@
# ZFS Operations

## Overview

AtlasOS provides comprehensive ZFS pool management including pool creation, import, export, scrubbing with progress monitoring, and health status reporting.

## Pool Operations

### List Pools

**GET** `/api/v1/pools`

Returns all ZFS pools.

**Response:**
```json
[
  {
    "name": "tank",
    "status": "ONLINE",
    "size": 1099511627776,
    "allocated": 536870912000,
    "free": 562640715776,
    "health": "ONLINE",
    "created_at": "2024-01-15T10:30:00Z"
  }
]
```

### Get Pool

**GET** `/api/v1/pools/{name}`

Returns details for a specific pool.

### Create Pool

**POST** `/api/v1/pools`

Creates a new ZFS pool.

**Request Body:**
```json
{
  "name": "tank",
  "vdevs": ["sda", "sdb"],
  "options": {
    "ashift": "12"
  }
}
```
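For reference, the request body above corresponds roughly to the following `zpool` invocation; this is a sketch of the mapping, not the handler's exact command line:

```shell
# Hypothetical CLI equivalent of the create-pool request above
name="tank"
vdevs="sda sdb"
ashift="12"
cmd="zpool create -o ashift=$ashift $name $vdevs"
echo "$cmd"
```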
### Destroy Pool

**DELETE** `/api/v1/pools/{name}`

Destroys a ZFS pool. **Warning**: This is a destructive operation.

## Pool Import/Export

### List Available Pools

**GET** `/api/v1/pools/available`

Lists pools that can be imported (pools that exist on disk but are not currently imported).

**Response:**
```json
{
  "pools": ["tank", "backup"]
}
```

### Import Pool

**POST** `/api/v1/pools/import`

Imports a ZFS pool.

**Request Body:**
```json
{
  "name": "tank",
  "options": {
    "readonly": "on"
  }
}
```

**Options:**
- `readonly`: Set pool to read-only mode (`on`/`off`)
- Other ZFS pool properties

**Response:**
```json
{
  "message": "pool imported",
  "name": "tank"
}
```

### Export Pool

**POST** `/api/v1/pools/{name}/export`

Exports a ZFS pool (makes it unavailable but preserves data).

**Request Body (optional):**
```json
{
  "force": false
}
```

**Parameters:**
- `force` (boolean): Force export even if the pool is in use

**Response:**
```json
{
  "message": "pool exported",
  "name": "tank"
}
```

## Scrub Operations

### Start Scrub

**POST** `/api/v1/pools/{name}/scrub`

Starts a scrub operation on a pool. A scrub verifies data integrity and repairs errors where redundancy allows.

**Response:**
```json
{
  "message": "scrub started",
  "pool": "tank"
}
```

### Get Scrub Status

**GET** `/api/v1/pools/{name}/scrub`

Returns detailed scrub status with progress information.

**Response:**
```json
{
  "status": "in_progress",
  "progress": 45.2,
  "time_elapsed": "2h15m",
  "time_remain": "30m",
  "speed": "100M/s",
  "errors": 0,
  "repaired": 0,
  "last_scrub": "2024-12-15T10:30:00Z"
}
```

**Status Values:**
- `idle`: No scrub in progress
- `in_progress`: Scrub is currently running
- `completed`: Scrub completed successfully
- `error`: Scrub encountered errors

**Progress Fields:**
- `progress`: Percentage complete (0-100)
- `time_elapsed`: Time since the scrub started
- `time_remain`: Estimated time remaining
- `speed`: Current scrub speed
- `errors`: Number of errors found
- `repaired`: Number of errors repaired
- `last_scrub`: Timestamp of the last completed scrub

## Usage Examples

### Import a Pool

```bash
curl -X POST http://localhost:8080/api/v1/pools/import \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "tank"
  }'
```

### Start Scrub and Monitor Progress

```bash
# Start scrub
curl -X POST http://localhost:8080/api/v1/pools/tank/scrub \
  -H "Authorization: Bearer $TOKEN"

# Check progress
curl http://localhost:8080/api/v1/pools/tank/scrub \
  -H "Authorization: Bearer $TOKEN"
```

### Export Pool

```bash
curl -X POST http://localhost:8080/api/v1/pools/tank/export \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "force": false
  }'
```

## Scrub Best Practices

### When to Scrub

- **Regular Schedule**: Monthly or quarterly
- **After Disk Failures**: After replacing failed disks
- **Before Major Operations**: Before pool upgrades or migrations
- **After Data Corruption**: If data integrity issues are suspected

### Monitoring Scrub Progress

1. **Start Scrub**: Use the POST endpoint to start
2. **Monitor Progress**: Poll the GET endpoint every few minutes
3. **Check Errors**: Monitor the `errors` and `repaired` fields
4. **Wait for Completion**: Wait until `status` is `completed`
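When polling from a script, `jq` is the usual tool; where it is unavailable, a field can be pulled out of the response with `sed`. This is an illustrative sketch against a captured response, not a recommended parser:

```shell
# Captured example response (matches the scrub status schema above)
resp='{"status":"in_progress","progress":45.2,"errors":0,"repaired":0}'

# Extract the "status" field without jq (fragile; fine for flat JSON)
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$status"

# With jq and a live API the equivalent would be:
#   curl -s -H "Authorization: Bearer $TOKEN" \
#     http://localhost:8080/api/v1/pools/tank/scrub | jq -r .status
```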
### Scrub Performance

- **Impact**: Scrub operations can impact pool performance
- **Scheduling**: Schedule scrubs during low-usage periods
- **Duration**: Large pools may take hours or days
- **I/O**: Scrubbing generates significant I/O load

## Pool Import/Export Use Cases

### Import Use Cases

1. **System Reboot**: Pools are automatically imported on boot
2. **Manual Import**: Import pools that were previously exported
3. **Read-Only Import**: Import a pool in read-only mode for inspection
4. **Recovery**: Import pools from backup systems

### Export Use Cases

1. **System Shutdown**: Export pools before shutdown
2. **Maintenance**: Export pools for maintenance operations
3. **Migration**: Export pools before moving them to another system
4. **Backup**: Export pools before creating full backups

## Error Handling

### Pool Not Found

```json
{
  "code": "NOT_FOUND",
  "message": "pool not found"
}
```

### Scrub Already Running

```json
{
  "code": "CONFLICT",
  "message": "scrub already in progress"
}
```

### Pool in Use (Export)

```json
{
  "code": "CONFLICT",
  "message": "pool is in use, cannot export"
}
```

Use `force: true` to force the export (use with caution).

## Compliance with SRS

Per SRS section 4.2 ZFS Management:

- ✅ **List available disks**: Implemented
- ✅ **Create pools**: Implemented
- ✅ **Import pools**: Implemented (Priority 20)
- ✅ **Export pools**: Implemented (Priority 20)
- ✅ **Report pool health**: Implemented
- ✅ **Create and manage datasets**: Implemented
- ✅ **Create ZVOLs**: Implemented
- ✅ **Scrub operations**: Implemented
- ✅ **Progress monitoring**: Implemented (Priority 19)

## Future Enhancements

1. **Scrub Scheduling**: Automatic scheduled scrubs
2. **Scrub Notifications**: Alerts when a scrub completes or finds errors
3. **Pool Health Alerts**: Automatic alerts for pool health issues
4. **Import History**: Track pool import/export history
5. **Pool Properties**: Manage pool properties via the API
6. **VDEV Management**: Add/remove vdevs from pools
7. **Pool Upgrade**: Upgrade pool version
8. **Resilver Operations**: Monitor and manage resilver operations
||||||
@@ -100,35 +100,112 @@ func (a *App) handleGetPool(w http.ResponseWriter, r *http.Request) {
|
|||||||
func (a *App) handleDeletePool(w http.ResponseWriter, r *http.Request) {
|
func (a *App) handleDeletePool(w http.ResponseWriter, r *http.Request) {
|
||||||
name := pathParam(r, "/api/v1/pools/")
|
name := pathParam(r, "/api/v1/pools/")
|
||||||
if name == "" {
|
if name == "" {
|
||||||
writeJSON(w, http.StatusBadRequest, map[string]string{"error": "pool name required"})
|
writeError(w, errors.ErrBadRequest("pool name required"))
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := a.zfs.DestroyPool(name); err != nil {
|
if err := a.zfs.DestroyPool(name); err != nil {
|
||||||
log.Printf("destroy pool error: %v", err)
|
log.Printf("destroy pool error: %v", err)
|
||||||
writeJSON(w, http.StatusInternalServerError, map[string]string{"error": err.Error()})
|
writeError(w, errors.ErrInternal("failed to destroy pool").WithDetails(err.Error()))
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
writeJSON(w, http.StatusOK, map[string]string{"message": "pool destroyed", "name": name})
|
writeJSON(w, http.StatusOK, map[string]string{"message": "pool destroyed", "name": name})
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (a *App) handleImportPool(w http.ResponseWriter, r *http.Request) {
|
||||||
|
var req struct {
|
||||||
|
Name string `json:"name"`
|
||||||
|
Options map[string]string `json:"options,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
|
||||||
|
writeError(w, errors.ErrBadRequest("invalid request body").WithDetails(err.Error()))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if req.Name == "" {
|
||||||
|
writeError(w, errors.ErrBadRequest("pool name required"))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := a.zfs.ImportPool(req.Name, req.Options); err != nil {
|
||||||
|
log.Printf("import pool error: %v", err)
|
||||||
|
writeError(w, errors.ErrInternal("failed to import pool").WithDetails(err.Error()))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
writeJSON(w, http.StatusOK, map[string]string{"message": "pool imported", "name": req.Name})
|
||||||
|
}
|
||||||
|
|
||||||
|
func (a *App) handleExportPool(w http.ResponseWriter, r *http.Request) {
|
||||||
|
name := pathParam(r, "/api/v1/pools/")
|
||||||
|
if name == "" {
|
||||||
|
writeError(w, errors.ErrBadRequest("pool name required"))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
var req struct {
|
||||||
|
Force bool `json:"force,omitempty"`
|
||||||
|
}
|
||||||
|
// Force is optional, decode if body exists
|
||||||
|
_ = json.NewDecoder(r.Body).Decode(&req)
|
||||||
|
|
||||||
|
if err := a.zfs.ExportPool(name, req.Force); err != nil {
|
||||||
|
log.Printf("export pool error: %v", err)
|
||||||
|
writeError(w, errors.ErrInternal("failed to export pool").WithDetails(err.Error()))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
writeJSON(w, http.StatusOK, map[string]string{"message": "pool exported", "name": name})
|
||||||
|
}
|
||||||
|
|
||||||
|
func (a *App) handleListAvailablePools(w http.ResponseWriter, r *http.Request) {
|
||||||
|
pools, err := a.zfs.ListAvailablePools()
|
||||||
|
if err != nil {
|
||||||
|
log.Printf("list available pools error: %v", err)
|
||||||
|
writeError(w, errors.ErrInternal("failed to list available pools").WithDetails(err.Error()))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
writeJSON(w, http.StatusOK, map[string]interface{}{
|
||||||
|
"pools": pools,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
func (a *App) handleScrubPool(w http.ResponseWriter, r *http.Request) {
|
func (a *App) handleScrubPool(w http.ResponseWriter, r *http.Request) {
|
||||||
name := pathParam(r, "/api/v1/pools/")
|
name := pathParam(r, "/api/v1/pools/")
|
||||||
if name == "" {
|
if name == "" {
|
||||||
writeJSON(w, http.StatusBadRequest, map[string]string{"error": "pool name required"})
|
writeError(w, errors.ErrBadRequest("pool name required"))
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := a.zfs.ScrubPool(name); err != nil {
|
if err := a.zfs.ScrubPool(name); err != nil {
|
||||||
log.Printf("scrub pool error: %v", err)
|
log.Printf("scrub pool error: %v", err)
|
||||||
writeJSON(w, http.StatusInternalServerError, map[string]string{"error": err.Error()})
|
writeError(w, errors.ErrInternal("failed to start scrub").WithDetails(err.Error()))
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
writeJSON(w, http.StatusOK, map[string]string{"message": "scrub started", "pool": name})
|
writeJSON(w, http.StatusOK, map[string]string{"message": "scrub started", "pool": name})
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (a *App) handleGetScrubStatus(w http.ResponseWriter, r *http.Request) {
|
||||||
|
name := pathParam(r, "/api/v1/pools/")
|
||||||
|
if name == "" {
|
||||||
|
writeError(w, errors.ErrBadRequest("pool name required"))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
status, err := a.zfs.GetScrubStatus(name)
|
||||||
|
if err != nil {
|
||||||
|
log.Printf("get scrub status error: %v", err)
|
||||||
|
writeError(w, errors.ErrInternal("failed to get scrub status").WithDetails(err.Error()))
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
writeJSON(w, http.StatusOK, status)
|
||||||
|
}
|
||||||
|
|
||||||
// Dataset Handlers
|
// Dataset Handlers
|
||||||
func (a *App) handleListDatasets(w http.ResponseWriter, r *http.Request) {
|
func (a *App) handleListDatasets(w http.ResponseWriter, r *http.Request) {
|
||||||
pool := r.URL.Query().Get("pool")
|
pool := r.URL.Query().Get("pool")
|
||||||
|
|||||||
@@ -18,6 +18,7 @@ import (
|
|||||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/services"
|
"gitea.avt.data-center.id/othman.suseno/atlas/internal/services"
|
||||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/snapshot"
|
"gitea.avt.data-center.id/othman.suseno/atlas/internal/snapshot"
|
||||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/storage"
|
"gitea.avt.data-center.id/othman.suseno/atlas/internal/storage"
|
||||||
|
"gitea.avt.data-center.id/othman.suseno/atlas/internal/tls"
|
||||||
"gitea.avt.data-center.id/othman.suseno/atlas/internal/zfs"
|
"gitea.avt.data-center.id/othman.suseno/atlas/internal/zfs"
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -50,6 +51,7 @@ type App struct {
|
|||||||
startTime time.Time
|
startTime time.Time
|
||||||
backupService *backup.Service
|
backupService *backup.Service
|
||||||
maintenanceService *maintenance.Service
|
maintenanceService *maintenance.Service
|
||||||
|
tlsConfig *tls.Config
|
||||||
}
|
}
|
||||||
|
|
||||||
func New(cfg Config) (*App, error) {
|
func New(cfg Config) (*App, error) {
|
||||||
@@ -112,27 +114,38 @@ func New(cfg Config) (*App, error) {
|
|||||||
return nil, fmt.Errorf("init backup service: %w", err)
|
return nil, fmt.Errorf("init backup service: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Initialize maintenance service
|
||||||
|
maintenanceService := maintenance.NewService()
|
||||||
|
|
||||||
|
// Initialize TLS configuration
|
||||||
|
tlsConfig := tls.LoadConfig()
|
||||||
|
if err := tlsConfig.Validate(); err != nil {
|
||||||
|
return nil, fmt.Errorf("TLS configuration: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
a := &App{
|
a := &App{
|
||||||
cfg: cfg,
|
cfg: cfg,
|
||||||
tmpl: tmpl,
|
tmpl: tmpl,
|
||||||
mux: http.NewServeMux(),
|
mux: http.NewServeMux(),
|
||||||
zfs: zfsService,
|
zfs: zfsService,
|
||||||
snapshotPolicy: policyStore,
|
snapshotPolicy: policyStore,
|
||||||
jobManager: jobMgr,
|
jobManager: jobMgr,
|
||||||
scheduler: scheduler,
|
scheduler: scheduler,
|
||||||
authService: authService,
|
authService: authService,
|
||||||
userStore: userStore,
|
userStore: userStore,
|
||||||
auditStore: auditStore,
|
auditStore: auditStore,
|
||||||
smbStore: smbStore,
|
smbStore: smbStore,
|
||||||
nfsStore: nfsStore,
|
nfsStore: nfsStore,
|
||||||
iscsiStore: iscsiStore,
|
iscsiStore: iscsiStore,
|
||||||
database: database,
|
database: database,
|
||||||
smbService: smbService,
|
smbService: smbService,
|
||||||
nfsService: nfsService,
|
nfsService: nfsService,
|
||||||
iscsiService: iscsiService,
|
iscsiService: iscsiService,
|
||||||
metricsCollector: metricsCollector,
|
metricsCollector: metricsCollector,
|
||||||
startTime: startTime,
|
startTime: startTime,
|
||||||
backupService: backupService,
|
backupService: backupService,
|
||||||
|
maintenanceService: maintenanceService,
|
||||||
|
tlsConfig: tlsConfig,
|
||||||
}
|
}
|
||||||
|
|
||||||
// Start snapshot scheduler (runs every 15 minutes)
|
// Start snapshot scheduler (runs every 15 minutes)
|
||||||
@@ -144,33 +157,36 @@ func New(cfg Config) (*App, error) {
|
|||||||
|
|
||||||
func (a *App) Router() http.Handler {
|
func (a *App) Router() http.Handler {
|
||||||
// Middleware chain order (outer to inner):
|
// Middleware chain order (outer to inner):
|
||||||
// 1. CORS (handles preflight)
|
// 1. HTTPS enforcement (redirect HTTP to HTTPS)
|
||||||
// 2. Compression (gzip)
|
// 2. CORS (handles preflight)
|
||||||
// 3. Security headers
|
// 3. Compression (gzip)
|
||||||
// 4. Request size limit (10MB)
|
// 4. Security headers
|
||||||
// 5. Content-Type validation
|
// 5. Request size limit (10MB)
|
||||||
// 6. Rate limiting
|
// 6. Content-Type validation
|
||||||
// 7. Caching (for GET requests)
|
// 7. Rate limiting
|
||||||
// 8. Error recovery
|
// 8. Caching (for GET requests)
|
||||||
// 9. Request ID
|
// 9. Error recovery
|
||||||
// 10. Logging
|
// 10. Request ID
|
||||||
// 11. Audit
|
// 11. Logging
|
||||||
// 12. Authentication
|
// 12. Audit
|
||||||
// 13. Maintenance mode (blocks operations during maintenance)
|
// 13. Authentication
|
||||||
// 14. Routes
|
// 14. Maintenance mode (blocks operations during maintenance)
|
||||||
return a.corsMiddleware(
|
// 15. Routes
|
||||||
a.compressionMiddleware(
|
return a.httpsEnforcementMiddleware(
|
||||||
a.securityHeadersMiddleware(
|
a.corsMiddleware(
|
||||||
a.requestSizeMiddleware(10 * 1024 * 1024)(
|
a.compressionMiddleware(
|
||||||
a.validateContentTypeMiddleware(
|
a.securityHeadersMiddleware(
|
||||||
a.rateLimitMiddleware(
|
a.requestSizeMiddleware(10 * 1024 * 1024)(
|
||||||
a.cacheMiddleware(
|
a.validateContentTypeMiddleware(
|
||||||
a.errorMiddleware(
|
a.rateLimitMiddleware(
|
||||||
requestID(
|
a.cacheMiddleware(
|
||||||
logging(
|
a.errorMiddleware(
|
||||||
a.auditMiddleware(
|
requestID(
|
||||||
a.maintenanceMiddleware(
|
logging(
|
||||||
a.authMiddleware(a.mux),
|
a.auditMiddleware(
|
||||||
|
a.maintenanceMiddleware(
|
||||||
|
a.authMiddleware(a.mux),
|
||||||
|
),
|
||||||
),
|
),
|
||||||
),
|
),
|
||||||
),
|
),
|
||||||
|
|||||||
76
internal/httpapp/https_middleware.go
Normal file
76
internal/httpapp/https_middleware.go
Normal file
@@ -0,0 +1,76 @@
|
|||||||
package httpapp

import (
	"net"
	"net/http"

	"gitea.avt.data-center.id/othman.suseno/atlas/internal/errors"
)

// httpsEnforcementMiddleware enforces HTTPS connections
func (a *App) httpsEnforcementMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Skip HTTPS enforcement for health checks and localhost
		if a.isPublicEndpoint(r.URL.Path) || isLocalhost(r) {
			next.ServeHTTP(w, r)
			return
		}

		// If TLS is enabled, enforce HTTPS
		if a.tlsConfig != nil && a.tlsConfig.Enabled {
			// Check if request is already over HTTPS
			if r.TLS != nil {
				next.ServeHTTP(w, r)
				return
			}

			// Check X-Forwarded-Proto header (for reverse proxies)
			if r.Header.Get("X-Forwarded-Proto") == "https" {
				next.ServeHTTP(w, r)
				return
			}

			// Redirect HTTP to HTTPS
			httpsURL := "https://" + r.Host + r.URL.RequestURI()
			http.Redirect(w, r, httpsURL, http.StatusMovedPermanently)
			return
		}

		next.ServeHTTP(w, r)
	})
}

// isLocalhost checks if the request is from localhost
func isLocalhost(r *http.Request) bool {
	host := r.Host
	// net.SplitHostPort strips the port and the brackets around IPv6
	// literals, so "[::1]:8080" correctly resolves to "::1"; a plain
	// split on ":" would not.
	if h, _, err := net.SplitHostPort(host); err == nil {
		host = h
	}
	return host == "localhost" || host == "127.0.0.1" || host == "::1"
}

// requireHTTPSMiddleware requires HTTPS for all requests (strict mode)
func (a *App) requireHTTPSMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Skip for health checks
		if a.isPublicEndpoint(r.URL.Path) {
			next.ServeHTTP(w, r)
			return
		}

		// If TLS is enabled, require HTTPS
		if a.tlsConfig != nil && a.tlsConfig.Enabled {
			// Check if request is over HTTPS
			if r.TLS == nil && r.Header.Get("X-Forwarded-Proto") != "https" {
				writeError(w, errors.NewAPIError(
					errors.ErrCodeForbidden,
					"HTTPS required",
					http.StatusForbidden,
				).WithDetails("this endpoint requires HTTPS"))
				return
			}
		}

		next.ServeHTTP(w, r)
	})
}
@@ -60,12 +60,30 @@ func pathParam(r *http.Request, prefix string) string {

// handlePoolOps routes pool operations by method
func (a *App) handlePoolOps(w http.ResponseWriter, r *http.Request) {
	// Extract pool name from path like /api/v1/pools/tank
	name := pathParam(r, "/api/v1/pools/")
	if name == "" {
		writeError(w, errors.ErrBadRequest("pool name required"))
		return
	}

	if strings.HasSuffix(r.URL.Path, "/scrub") {
		if r.Method == http.MethodPost {
			a.handleScrubPool(w, r)
		} else if r.Method == http.MethodGet {
			a.handleGetScrubStatus(w, r)
		} else {
			writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
		}
		return
	}

	if strings.HasSuffix(r.URL.Path, "/export") {
		if r.Method == http.MethodPost {
			a.handleExportPool(w, r)
		} else {
			writeError(w, errors.NewAPIError(errors.ErrCodeBadRequest, "method not allowed", http.StatusMethodNotAllowed))
		}
		return
	}

	http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
}
@@ -80,6 +80,15 @@ func (a *App) routes() {
		func(w http.ResponseWriter, r *http.Request) { a.handleCreatePool(w, r) },
		nil, nil, nil,
	))
	a.mux.HandleFunc("/api/v1/pools/available", methodHandler(
		func(w http.ResponseWriter, r *http.Request) { a.handleListAvailablePools(w, r) },
		nil, nil, nil, nil,
	))
	a.mux.HandleFunc("/api/v1/pools/import", methodHandler(
		nil,
		func(w http.ResponseWriter, r *http.Request) { a.handleImportPool(w, r) },
		nil, nil, nil,
	))
	a.mux.HandleFunc("/api/v1/pools/", a.handlePoolOps)

	a.mux.HandleFunc("/api/v1/datasets", methodHandler(
104	internal/tls/config.go	Normal file
@@ -0,0 +1,104 @@
package tls

import (
	"crypto/tls"
	"fmt"
	"os"
)

// Note: This package is named "tls" but provides configuration for crypto/tls

// Config holds TLS configuration
type Config struct {
	CertFile   string
	KeyFile    string
	MinVersion uint16
	MaxVersion uint16
	Enabled    bool
}

// LoadConfig loads TLS configuration from environment variables
func LoadConfig() *Config {
	cfg := &Config{
		CertFile:   os.Getenv("ATLAS_TLS_CERT"),
		KeyFile:    os.Getenv("ATLAS_TLS_KEY"),
		MinVersion: tls.VersionTLS12,
		MaxVersion: tls.VersionTLS13,
		Enabled:    false,
	}

	// Enable TLS if certificate and key are provided
	if cfg.CertFile != "" && cfg.KeyFile != "" {
		cfg.Enabled = true
	}

	// Check if TLS is explicitly enabled
	if os.Getenv("ATLAS_TLS_ENABLED") == "true" {
		cfg.Enabled = true
	}

	return cfg
}

// BuildTLSConfig builds a crypto/tls.Config from the configuration
func (c *Config) BuildTLSConfig() (*tls.Config, error) {
	if !c.Enabled {
		return nil, nil
	}

	// Verify certificate and key files exist
	if _, err := os.Stat(c.CertFile); os.IsNotExist(err) {
		return nil, fmt.Errorf("TLS certificate file not found: %s", c.CertFile)
	}
	if _, err := os.Stat(c.KeyFile); os.IsNotExist(err) {
		return nil, fmt.Errorf("TLS key file not found: %s", c.KeyFile)
	}

	// Load certificate
	cert, err := tls.LoadX509KeyPair(c.CertFile, c.KeyFile)
	if err != nil {
		return nil, fmt.Errorf("load TLS certificate: %w", err)
	}

	config := &tls.Config{
		Certificates: []tls.Certificate{cert},
		MinVersion:   c.MinVersion,
		MaxVersion:   c.MaxVersion,
		// Security best practices
		PreferServerCipherSuites: true,
		CurvePreferences: []tls.CurveID{
			tls.CurveP256,
			tls.CurveP384,
			tls.CurveP521,
			tls.X25519,
		},
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		},
	}

	return config, nil
}

// Validate validates the TLS configuration
func (c *Config) Validate() error {
	if !c.Enabled {
		return nil
	}

	if c.CertFile == "" {
		return fmt.Errorf("TLS certificate file is required when TLS is enabled")
	}

	if c.KeyFile == "" {
		return fmt.Errorf("TLS key file is required when TLS is enabled")
	}

	return nil
}
@@ -136,26 +136,184 @@ func (s *Service) DestroyPool(name string) error {
	return err
}

// ImportPool imports a ZFS pool
func (s *Service) ImportPool(name string, options map[string]string) error {
	args := []string{"import"}

	// Add options
	for k, v := range options {
		args = append(args, "-o", fmt.Sprintf("%s=%s", k, v))
	}

	args = append(args, name)
	_, err := s.execCommand(s.zpoolPath, args...)
	return err
}

// ExportPool exports a ZFS pool
func (s *Service) ExportPool(name string, force bool) error {
	args := []string{"export"}
	if force {
		args = append(args, "-f")
	}
	args = append(args, name)
	_, err := s.execCommand(s.zpoolPath, args...)
	return err
}

// ListAvailablePools returns pools that can be imported
func (s *Service) ListAvailablePools() ([]string, error) {
	output, err := s.execCommand(s.zpoolPath, "import")
	if err != nil {
		return nil, err
	}

	var pools []string
	lines := strings.Split(output, "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		// Parse pool name from output like "pool: tank"
		if strings.HasPrefix(line, "pool:") {
			parts := strings.Fields(line)
			if len(parts) >= 2 {
				pools = append(pools, parts[1])
			}
		}
	}

	return pools, nil
}

// ScrubPool starts a scrub operation on a pool
func (s *Service) ScrubPool(name string) error {
	_, err := s.execCommand(s.zpoolPath, "scrub", name)
	return err
}

// ScrubStatus represents detailed scrub operation status
type ScrubStatus struct {
	Status      string  `json:"status"`       // idle, in_progress, completed, error
	Progress    float64 `json:"progress"`     // 0-100
	TimeElapsed string  `json:"time_elapsed"` // e.g., "2h 15m"
	TimeRemain  string  `json:"time_remain"`  // e.g., "30m"
	Speed       string  `json:"speed"`        // e.g., "100M/s"
	Errors      int     `json:"errors"`       // number of errors found
	Repaired    int     `json:"repaired"`     // number of errors repaired
	LastScrub   string  `json:"last_scrub"`   // timestamp of last completed scrub
}

// GetScrubStatus returns detailed scrub status with progress
func (s *Service) GetScrubStatus(name string) (*ScrubStatus, error) {
	status := &ScrubStatus{
		Status: "idle",
	}

	// Get pool status
	output, err := s.execCommand(s.zpoolPath, "status", name)
	if err != nil {
		return nil, err
	}

	// Parse scrub information
	lines := strings.Split(output, "\n")
	inScrubSection := false
	for _, line := range lines {
		line = strings.TrimSpace(line)

		// Check if scrub is in progress
		if strings.Contains(line, "scrub in progress") {
			status.Status = "in_progress"
			inScrubSection = true
			continue
		}

		// Check if scrub completed
		if strings.Contains(line, "scrub repaired") || strings.Contains(line, "scrub completed") {
			status.Status = "completed"
			status.Progress = 100.0
			// Extract repair information
			if strings.Contains(line, "repaired") {
				// Try to extract number of repairs
				parts := strings.Fields(line)
				for i, part := range parts {
					if part == "repaired" && i > 0 {
						// Previous part might be the number
						if repaired, err := strconv.Atoi(parts[i-1]); err == nil {
							status.Repaired = repaired
						}
					}
				}
			}
			continue
		}

		// Parse progress percentage
		if strings.Contains(line, "%") && inScrubSection {
			// Extract percentage from line like "scan: 45.2% done"
			parts := strings.Fields(line)
			for _, part := range parts {
				if strings.HasSuffix(part, "%") {
					if pct, err := strconv.ParseFloat(strings.TrimSuffix(part, "%"), 64); err == nil {
						status.Progress = pct
					}
				}
			}
		}

		// Parse time elapsed
		if strings.Contains(line, "elapsed") && inScrubSection {
			// Extract time like "elapsed: 2h15m"
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "elapsed:" && i+1 < len(parts) {
					status.TimeElapsed = parts[i+1]
				}
			}
		}

		// Parse time remaining
		if strings.Contains(line, "remaining") && inScrubSection {
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "remaining:" && i+1 < len(parts) {
					status.TimeRemain = parts[i+1]
				}
			}
		}

		// Parse speed
		if strings.Contains(line, "scan rate") && inScrubSection {
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "rate" && i+1 < len(parts) {
					status.Speed = parts[i+1]
				}
			}
		}

		// Parse errors
		if strings.Contains(line, "errors:") && inScrubSection {
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "errors:" && i+1 < len(parts) {
					if errs, err := strconv.Atoi(parts[i+1]); err == nil {
						status.Errors = errs
					}
				}
			}
		}
	}

	// Get last scrub time from pool properties
	lastScrub, err := s.execCommand(s.zfsPath, "get", "-H", "-o", "value", "lastscrub", name)
	if err == nil && lastScrub != "-" && lastScrub != "" {
		status.LastScrub = strings.TrimSpace(lastScrub)
	}

	return status, nil
}

// ListDatasets returns all datasets in a pool (or all if pool is empty)