Storage Provider Management
A comprehensive guide to configuring and managing cloud storage providers for backup operations on the OEC.SH platform.
Overview
Feature Category: Storage Management
Required Permissions: `org.storage.create`, `org.storage.list`, `org.storage.view`, `org.storage.update`, `org.storage.delete`
API Prefix: `/api/v1/backups/storage-configs`
Supported Providers: 6 (S3, R2, B2, MinIO, FTP, SFTP)
Storage providers are the foundation of OEC.SH's backup system. All environment backups (PostgreSQL database dumps + Odoo filestore) are stored in your configured cloud storage, giving you full control and ownership of your backup data.
Key Benefits
- Multi-Cloud Support: Choose from 6 storage providers based on cost, performance, and compliance needs
- BYOS (Bring Your Own Storage): You own your backup data - no vendor lock-in
- Encryption: Credentials encrypted at-rest in database, data encrypted in-transit (TLS/SSL)
- Default Provider: Set organization-wide default for automated backups
- Connection Testing: Validate credentials before saving configuration
- Usage Tracking: Monitor storage consumption and object counts
Supported Storage Providers
Provider Comparison
| Provider | Best For | Pricing | Egress Fees | Setup Complexity |
|---|---|---|---|---|
| AWS S3 | Enterprise, Multi-region | $0.023/GB/mo | $0.09/GB | Low |
| Cloudflare R2 | Cost optimization, Downloads | $0.015/GB/mo | $0 (Zero) | Low |
| Backblaze B2 | Long-term archival | $0.005/GB/mo | $0.01/GB | Medium |
| MinIO | Self-hosted, GDPR compliance | Infrastructure cost | N/A | Medium |
| FTP/SFTP | Legacy systems, On-premise | Infrastructure cost | N/A | Low |
Provider Selection Guide
Choose AWS S3 if you need:
- Multi-region replication
- Advanced lifecycle policies (Glacier, Deep Archive)
- Enterprise SLAs and support
- Integration with AWS ecosystem
Choose Cloudflare R2 if you need:
- Zero egress fees (ideal for frequent downloads)
- Global CDN distribution
- S3 compatibility without AWS vendor lock-in
- Cost-effective storage ($0.015/GB vs $0.023/GB for S3)
Choose Backblaze B2 if you need:
- Lowest storage cost for archival
- Simple, predictable pricing
- Good for infrequently accessed backups
- Transparent egress fees ($0.01/GB)
Choose MinIO if you need:
- Self-hosted storage (full control)
- GDPR/data residency compliance
- On-premise backup solution
- S3-compatible private cloud
Choose FTP/SFTP if you have:
- Existing FTP infrastructure
- Legacy backup systems
- Simple file-based storage needs
- Direct server-to-server transfers
Provider Configurations
1. AWS S3
Use Case: Enterprise-grade cloud storage with multi-region support and advanced features.
Prerequisites
- AWS Account: Sign up at aws.amazon.com
- S3 Bucket: Create bucket in desired region
- IAM User: Create IAM user with S3 permissions
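If you prefer the command line, the prerequisites can be provisioned roughly as follows. This is a sketch using the AWS CLI; the bucket and user names are examples, and `policy.json` refers to the IAM policy shown in the next section:

```bash
# Create the bucket (us-east-1 needs no LocationConstraint;
# other regions require --create-bucket-configuration)
aws s3api create-bucket --bucket oecsh-backups-prod --region us-east-1

# Create a dedicated IAM user and attach the backup policy
aws iam create-user --user-name oecsh-backup
aws iam put-user-policy --user-name oecsh-backup \
  --policy-name oecsh-backup-s3 \
  --policy-document file://policy.json   # the JSON policy below

# Generate the access key pair used in the OEC.SH configuration
aws iam create-access-key --user-name oecsh-backup
```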
Required IAM Permissions
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
```

Configuration Example
```bash
# API Request
curl -X POST https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID> \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "AWS S3 Production",
"provider": "aws_s3",
"bucket": "oecsh-backups-prod",
"region": "us-east-1",
"access_key": "AKIA...",
"secret_key": "YOUR_SECRET_KEY_HERE",
"path_prefix": "backups/",
"storage_class": "STANDARD",
"is_default": true
}'
```

Configuration Fields
- name: Display name (e.g., "AWS S3 Production")
- provider: `aws_s3`
- bucket: S3 bucket name (must already exist)
- region: AWS region (e.g., `us-east-1`, `eu-west-1`, `ap-southeast-1`)
- access_key: IAM user Access Key ID
- secret_key: IAM user Secret Access Key
- path_prefix: Optional prefix for all keys (e.g., `backups/org-123/`)
- storage_class: S3 storage class (`STANDARD`, `STANDARD_IA`, `GLACIER`, `DEEP_ARCHIVE`)
- is_default: Set as organization default for automated backups
Regional Endpoints
- US East: `us-east-1` (N. Virginia) - Default, lowest latency for US
- US West: `us-west-2` (Oregon)
- Europe: `eu-west-1` (Ireland), `eu-central-1` (Frankfurt)
- Asia Pacific: `ap-southeast-1` (Singapore), `ap-northeast-1` (Tokyo)
Storage Classes Comparison
| Class | Use Case | Pricing | Retrieval Time |
|---|---|---|---|
| STANDARD | Active backups | $0.023/GB/mo | Immediate |
| STANDARD_IA | Infrequent access | $0.0125/GB/mo | Immediate |
| GLACIER | Long-term archive | $0.004/GB/mo | Minutes to hours |
| DEEP_ARCHIVE | Compliance archive | $0.00099/GB/mo | 12-48 hours |
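Lifecycle transitions between these classes are configured on the bucket itself, not in OEC.SH. As a hedged sketch (rule name, prefix, and day counts are illustrative), a rule that moves backups to Glacier after 90 days could look like:

```bash
aws s3api put-bucket-lifecycle-configuration \
  --bucket oecsh-backups-prod \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": "backups/"},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }]
  }'
```

Note that objects in GLACIER or DEEP_ARCHIVE are not immediately retrievable, which directly affects restore times.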
2. Cloudflare R2
Use Case: Zero-egress cloud storage, perfect for frequent backup downloads and cost optimization.
Prerequisites
- Cloudflare Account: Sign up at cloudflare.com
- R2 Subscription: Enable R2 storage (requires payment method)
- R2 Bucket: Create bucket via Cloudflare dashboard
Setup Steps
Step 1: Create R2 Bucket
- Log in to Cloudflare Dashboard → R2
- Click "Create Bucket"
- Enter bucket name: `oecsh-backups`
- (Optional) Choose location hint for performance
- Click "Create Bucket"
Step 2: Generate API Token
- Navigate to R2 → Manage R2 API Tokens
- Click "Create API Token"
- Token name: `OEC.SH Backup Storage`
- Permissions:
  - ✅ Object Read & Write
  - ✅ (Optional) Admin Read & Write for lifecycle management
- Click "Create API Token"
- Save credentials immediately (shown only once):
- Access Key ID: <32-character-key>
- Secret Access Key: <43-character-secret>
Step 3: Find Account ID
- In R2 dashboard, note your Account ID
- Format: 32 hexadecimal characters (e.g., `a1b2c3d4e5f6...`)
Configuration Example
```bash
# API Request
curl -X POST https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID> \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Cloudflare R2 Backups",
"provider": "cloudflare_r2",
"bucket": "oecsh-backups",
"account_id": "a1b2c3d4e5f6...",
"access_key": "YOUR_ACCESS_KEY_ID",
"secret_key": "YOUR_SECRET_ACCESS_KEY",
"path_prefix": "backups/",
"is_default": true
}'
```

Configuration Fields
- name: Display name (e.g., "Cloudflare R2 Production")
- provider: `cloudflare_r2`
- bucket: R2 bucket name
- account_id: Cloudflare account ID (32 characters)
- access_key: R2 API token Access Key ID
- secret_key: R2 API token Secret Access Key
- path_prefix: Optional prefix for organization
- is_default: Set as default provider
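R2's S3-compatible endpoint is derived from your account ID (`https://<ACCOUNT_ID>.r2.cloudflarestorage.com`), so you can sanity-check the credentials with any S3 client before saving the configuration. A sketch using the AWS CLI (angle-bracket values are placeholders):

```bash
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"

# List the bucket through R2's S3-compatible endpoint
aws s3 ls s3://oecsh-backups \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"
```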
Pricing (as of December 2024)
- Storage: $0.015/GB/month
- Class A Operations (writes): $4.50/million requests
- Class B Operations (reads): $0.36/million requests
- Egress: $0 (ZERO) - Unlimited free downloads
Why R2 is Recommended
✅ Zero Egress Fees: Download backups without bandwidth charges
✅ Cheaper Storage: $0.015/GB vs $0.023/GB for S3
✅ S3 Compatible: Drop-in replacement, same API
✅ Global Network: Cloudflare's 275+ edge locations
✅ Built-in DDoS Protection: Cloudflare's security layer
3. Backblaze B2
Use Case: Lowest-cost storage for long-term backup archival with predictable pricing.
Prerequisites
- Backblaze Account: Sign up at backblaze.com
- B2 Bucket: Create bucket via B2 dashboard
- Application Key: Generate with read/write permissions
Setup Steps
Step 1: Create B2 Bucket
- Log in to Backblaze Dashboard → B2 Cloud Storage
- Click "Create a Bucket"
- Bucket name: `oecsh-backups` (must be globally unique)
- Bucket type: Private
- Default encryption: Enabled (recommended)
- Object lock: Disabled (not needed for backups)
- Click "Create Bucket"
Step 2: Generate Application Key
- Navigate to App Keys tab
- Click "Add a New Application Key"
- Name: `OEC.SH Backup Access`
- Access:
  - Allow access to: Select bucket → Choose your bucket
  - Type of Access: Read and Write
- Click "Create New Key"
- Save credentials immediately:
- keyID: <25-character-app-key-id>
- applicationKey: <31-character-secret>
Step 3: Note Region
B2 regions:
- `us-west-001` - US West (California)
- `us-west-002` - US West (Arizona)
- `us-west-004` - US West (Oregon)
- `us-east-005` - US East (Florida)
- `eu-central-003` - EU Central (Amsterdam)
Configuration Example
```bash
# API Request
curl -X POST https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID> \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "Backblaze B2 Archive",
"provider": "backblaze_b2",
"bucket": "oecsh-backups",
"region": "us-west-004",
"access_key": "YOUR_KEY_ID",
"secret_key": "YOUR_APPLICATION_KEY",
"path_prefix": "archives/",
"is_default": false
}'
```

Configuration Fields
- name: Display name (e.g., "B2 Long-term Archive")
- provider: `backblaze_b2`
- bucket: B2 bucket name (globally unique)
- region: B2 region code (e.g., `us-west-004`)
- access_key: Application Key ID (called `keyID` in B2)
- secret_key: Application Key (called `applicationKey` in B2)
- path_prefix: Optional folder structure
- is_default: Set as default provider
Pricing (as of December 2024)
- Storage: $0.005/GB/month (first 10GB free)
- Downloads: First 1GB/day free, then $0.01/GB
- API Calls: 2,500 free/day, then $0.004/10,000 calls
- Uploads: FREE (unlimited)
Cost Example
For 100GB of backups with 5GB downloads/month:
- Storage: 100GB × $0.005/GB = **$0.50/month**
- Downloads: (5GB - 1GB free) × $0.01/GB = **$0.04/month**
- **Total: $0.54/month** (vs $3.55 for S3)
4. MinIO
Use Case: Self-hosted S3-compatible storage for data sovereignty and compliance.
Prerequisites
- MinIO Server: Running MinIO instance (self-hosted or managed)
- MinIO Bucket: Create bucket via MinIO Console or `mc`
- Access Credentials: MinIO access key and secret key
Setup Steps
Step 1: Install MinIO (if self-hosting)
```bash
# Docker installation (recommended)
docker run -d \
--name minio \
-p 9000:9000 \
-p 9001:9001 \
-v /data/minio:/data \
-e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=minioadmin123 \
quay.io/minio/minio server /data --console-address ":9001"
```

Step 2: Create Bucket
```bash
# Using MinIO Client (mc)
mc alias set myminio http://localhost:9000 minioadmin minioadmin123
mc mb myminio/oecsh-backups
```

Or via MinIO Console:
- Navigate to `http://localhost:9001`
- Login with root credentials
- Go to Buckets → Create Bucket
- Bucket name: `oecsh-backups`
Step 3: Generate Access Key
- MinIO Console → Identity → Users
- Create user: `oecsh-backup-service`
- Attach policy: `readwrite` on `oecsh-backups`
- Generate Access Keys
- Save credentials:
  - Access Key: <20-character-key>
  - Secret Key: <40-character-secret>
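The same user and policy can be created with the MinIO client instead of the console. A sketch (the secret is a placeholder, and `mc admin` subcommand syntax varies slightly between mc releases):

```bash
mc admin user add myminio oecsh-backup-service 'STRONG_SECRET_HERE'
mc admin policy attach myminio readwrite --user oecsh-backup-service
```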
Configuration Example
```bash
# API Request
curl -X POST https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID> \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "MinIO Self-Hosted",
"provider": "minio",
"bucket": "oecsh-backups",
"endpoint_url": "http://minio.internal:9000",
"access_key": "YOUR_ACCESS_KEY",
"secret_key": "YOUR_SECRET_KEY",
"path_prefix": "production/",
"is_default": true
}'
```

Configuration Fields
- name: Display name (e.g., "MinIO On-Premise")
- provider: `minio`
- bucket: MinIO bucket name
- endpoint_url: MinIO server URL (e.g., `http://minio:9000`, `https://s3.example.com`)
- access_key: MinIO access key
- secret_key: MinIO secret key
- path_prefix: Optional folder structure
- is_default: Set as default provider
Security Best Practices
✅ Use HTTPS: Always use TLS/SSL for production
✅ Separate Credentials: Don't use root credentials for backups
✅ Bucket Policies: Restrict access to specific prefixes
✅ Network Isolation: Keep MinIO on private network
✅ Encryption: Enable server-side encryption (SSE)
5. FTP (File Transfer Protocol)
Use Case: Legacy systems, simple file-based storage, compatibility with existing infrastructure.
Prerequisites
- FTP Server: Running FTP server (ProFTPD, vsftpd, FileZilla Server)
- FTP Account: Username and password with read/write permissions
- Base Directory: Writable directory for backups
Configuration Example
```bash
# API Request
curl -X POST https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID> \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "FTP Legacy Storage",
"provider": "ftp",
"bucket": "backups",
"ftp_host": "ftp.example.com",
"ftp_port": 21,
"ftp_use_ssl": true,
"ftp_passive_mode": true,
"ftp_base_path": "/backups/oecsh",
"access_key": "ftp_username",
"secret_key": "ftp_password",
"path_prefix": "production/",
"is_default": false
}'
```

Configuration Fields
- name: Display name (e.g., "FTP Legacy Server")
- provider: `ftp`
- bucket: Logical bucket name (used for organization)
- ftp_host: FTP server hostname or IP address
- ftp_port: FTP port (default: 21; implicit FTPS: 990)
- ftp_use_ssl: Enable FTPS (FTP over SSL/TLS) - strongly recommended
- ftp_passive_mode: Use passive mode (recommended for firewalls)
- ftp_base_path: Base directory on FTP server (e.g., `/backups`)
- access_key: FTP username
- secret_key: FTP password
- path_prefix: Subdirectory within base_path
FTP vs FTPS
| Feature | FTP | FTPS (FTP over SSL/TLS) |
|---|---|---|
| Encryption | ❌ None | ✅ TLS/SSL |
| Security | ⚠️ Low | ✅ High |
| Port | 21 | 21 (explicit) or 990 (implicit) |
| Use Case | Internal networks | Production |
Recommendation: Always use FTPS (`ftp_use_ssl: true`) for production environments.
6. SFTP (SSH File Transfer Protocol)
Use Case: Secure file transfers over SSH, common in enterprise environments.
Prerequisites
- SSH Server: Running SSH server with SFTP subsystem
- SSH Account: Username and password (or SSH key)
- Base Directory: Writable directory for backups
Configuration Example
```bash
# API Request
curl -X POST https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID> \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "SFTP Secure Storage",
"provider": "sftp",
"bucket": "backups",
"ftp_host": "sftp.example.com",
"ftp_port": 22,
"ftp_base_path": "/home/backups/oecsh",
"access_key": "backup_user",
"secret_key": "secure_password",
"path_prefix": "production/",
"is_default": true
}'
```

Configuration Fields
- name: Display name (e.g., "SFTP Production")
- provider: `sftp`
- bucket: Logical bucket name
- ftp_host: SFTP server hostname or IP
- ftp_port: SSH port (default: 22)
- ftp_base_path: Base directory (e.g., `/home/backups`)
- access_key: SSH username
- secret_key: SSH password or private key
- path_prefix: Subdirectory structure
SFTP vs FTP
| Feature | SFTP | FTP/FTPS |
|---|---|---|
| Protocol | SSH (port 22) | FTP (port 21) |
| Encryption | ✅ Always encrypted | ⚠️ Optional (FTPS) |
| Firewall | ✅ Single port | ⚠️ Multiple ports (passive) |
| Authentication | Password + Key | Password only |
| Use Case | Modern, secure | Legacy systems |
Recommendation: Use SFTP over FTP/FTPS when possible for better security and firewall compatibility.
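Before saving an SFTP configuration, it can be worth confirming the account can actually write under the base path. A minimal sketch (host, user, and paths are the example values above; assumes key-based auth, since `-b` disables password prompts):

```bash
echo "connectivity test" > /tmp/oecsh-test.txt
sftp -b - -P 22 backup_user@sftp.example.com <<'EOF'
cd /home/backups/oecsh
put /tmp/oecsh-test.txt
rm oecsh-test.txt
EOF
```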
Add Storage Provider
Web UI Method
Step 1: Navigate to Storage Settings
- Go to Dashboard → Settings
- Click "Storage" tab in left sidebar
- Click "Add Storage Provider" button
Step 2: Select Provider Type
Choose from 6 provider options:
- AWS S3
- Cloudflare R2 (recommended)
- Backblaze B2
- MinIO
- FTP
- SFTP
Step 3: Fill Configuration
Enter provider-specific credentials and settings (see provider sections above).
Step 4: Test Connection
Click "Test Connection" button to validate credentials before saving.
Step 5: Save Configuration
After successful test, click "Create Storage Configuration".
API Method
Endpoint: POST /api/v1/backups/storage-configs
Query Parameters:
- `organization_id` (required): Organization UUID
Request Body (example for Cloudflare R2):
```json
{
"name": "Cloudflare R2 Production",
"provider": "cloudflare_r2",
"bucket": "oecsh-backups",
"account_id": "a1b2c3d4e5f6789...",
"access_key": "YOUR_ACCESS_KEY_ID",
"secret_key": "YOUR_SECRET_ACCESS_KEY",
"path_prefix": "backups/",
"is_default": true
}
```

Response (201 Created):
```json
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"organization_id": "123e4567-e89b-12d3-a456-426614174000",
"name": "Cloudflare R2 Production",
"provider": "cloudflare_r2",
"bucket": "oecsh-backups",
"region": "auto",
"endpoint_url": null,
"path_prefix": "backups/",
"storage_class": null,
"is_default": true,
"is_active": true,
"total_size_bytes": 0,
"object_count": 0,
"last_used_at": null,
"created_at": "2024-12-11T10:30:00Z",
"updated_at": "2024-12-11T10:30:00Z"
}
```

Error Responses:
- `400 Bad Request`: Invalid provider configuration or credentials
- `403 Forbidden`: Missing `org.storage.create` permission or BYOS disabled
- `409 Conflict`: Storage configuration name already exists
Test Connection
Test storage provider connectivity before saving configuration to catch credential or network issues early.
Web UI Method
- Fill in storage configuration form
- Click "Test Connection" button
- Wait for validation (typically 2-5 seconds)
- Review test results:
- ✅ Success: Green checkmark, shows latency
- ❌ Failure: Red error message with details
API Method
Endpoint: POST /api/v1/backups/storage-configs/test-connection
Query Parameters:
- `organization_id` (required): Organization UUID
Request Body:
```json
{
"name": "Test Configuration",
"provider": "cloudflare_r2",
"bucket": "oecsh-backups",
"account_id": "YOUR_ACCOUNT_ID",
"access_key": "YOUR_ACCESS_KEY",
"secret_key": "YOUR_SECRET_KEY",
"path_prefix": "backups/"
}
```

Response (200 OK):
```json
{
"success": true,
"message": "Connection successful",
"bucket_exists": true,
"can_write": true,
"can_read": true,
"latency_ms": 234
}
```

Failed Test Example:
```json
{
"success": false,
"message": "Connection failed: Invalid credentials",
"bucket_exists": null,
"can_write": null,
"can_read": null,
"latency_ms": null
}
```

What Tests Validate
✅ Network Connectivity: Can reach storage endpoint
✅ Authentication: Valid credentials
✅ Bucket Exists: Bucket/container accessible
✅ Write Permissions: Can create test object
✅ Read Permissions: Can retrieve test object
✅ Delete Permissions: Can delete test object
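Putting the pieces above together, a full test-connection call might look like this (the domain, `$TOKEN`, and all credentials are placeholders):

```bash
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs/test-connection?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Test Configuration",
    "provider": "cloudflare_r2",
    "bucket": "oecsh-backups",
    "account_id": "YOUR_ACCOUNT_ID",
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "path_prefix": "backups/"
  }'
```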
Default Provider
Set an organization-wide default storage provider for automated backup policies.
Why Set Default?
- Automated Backups: Backup policies use default provider automatically
- Manual Backups: Pre-selected in UI when creating manual backups
- Consistency: All environments use same storage unless overridden
- Convenience: Don't specify storage for every backup operation
Set via Web UI
- Navigate to Dashboard → Settings → Storage
- Find storage configuration in list
- Click "Set as Default" button
- Confirm action
Only one provider can be default per organization.
Set via API
Endpoint: POST /api/v1/backups/storage-configs/{config_id}/set-default
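For example (a sketch; the UUID is a storage configuration ID from the list endpoint, and the `organization_id` query parameter is shown for consistency with the sibling endpoints):

```bash
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs/550e8400-e29b-41d4-a716-446655440000/set-default?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN"
```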
Response (200 OK):
```json
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "Cloudflare R2 Production",
"is_default": true,
...
}
```

Default Provider Behavior
- New backup policies: Automatically use default provider
- Manual backups: Pre-selected in UI (can override)
- API backups without `storage_config_id`: Use default
- Changing default: Previously created backups unaffected (stored in specific storage)
Storage Encryption
OEC.SH implements multi-layer encryption for storage security.
Encryption at Rest
Database Encryption (credentials):
- Storage provider credentials encrypted using AES-256-GCM
- Encryption key derived from `ENCRYPTION_KEY` environment variable
- Credentials decrypted only during backup/restore operations
- Never logged or exposed in API responses
Storage Provider Encryption (backup data):
- AWS S3: Server-side encryption (SSE-S3 or SSE-KMS)
- Cloudflare R2: Automatic encryption at rest
- Backblaze B2: Server-side encryption enabled by default
- MinIO: SSE-S3 compatible encryption
- FTP/SFTP: File-level encryption on server (if configured)
Encryption in Transit
HTTPS/TLS:
- All S3-compatible providers use TLS 1.2+ for data transfer
- Cloudflare R2: TLS 1.3 with Cloudflare's security layer
- FTP: Use FTPS (FTP over SSL/TLS) - `ftp_use_ssl: true`
- SFTP: SSH protocol with AES-256 encryption
SSH Tunneling (for backup operations):
- Database dumps transferred over SSH from VMs
- Filestore archives transferred over SSH
- Only final storage upload uses storage provider protocol
Security Best Practices
✅ Use HTTPS/TLS: Always enable SSL/TLS for production
✅ Rotate Credentials: Periodically rotate access keys (e.g., every 90 days)
✅ Principle of Least Privilege: Grant minimum required permissions
✅ Audit Logs: Monitor storage access logs for anomalies
✅ Network Isolation: Keep storage on private network when possible
✅ Backup Verification: Use checksums to verify data integrity
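For the last point, a simple approach is to record a checksum at backup time and re-check it after download (a sketch; filenames are illustrative):

```bash
sha256sum backup-2024-12-11.dump > backup-2024-12-11.dump.sha256   # record at backup time
sha256sum -c backup-2024-12-11.dump.sha256                         # verify after download
```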
Backup Retention
Configure retention policies using GFS (Grandfather-Father-Son) scheme for intelligent backup lifecycle management.
GFS Retention Tiers
| Tier | Retention Period | Frequency | Use Case |
|---|---|---|---|
| Daily | 7 days (default) | Every backup | Recent changes |
| Weekly | 4 weeks (default) | Sunday backups | Weekly milestones |
| Monthly | 12 months (default) | 1st of month | Monthly archives |
| Yearly | 2 years (default) | Jan 1st backups | Long-term compliance |
| Permanent | Never expires | Manual backups | Critical snapshots |
How Retention Works
- Backup Creation: Backup assigned tier based on date/time
- Expiration Calculation: `expires_at` set based on tier and policy
- Automatic Cleanup: ARQ worker job deletes expired backups daily
- Policy Updates: Updating policy recalculates expiration for existing backups
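As a rough illustration of the expiration step (the real calculation happens server-side in the worker; this sketch assumes GNU date):

```bash
BACKUP_DATE="2024-12-01"   # a backup taken on this date

# As a daily-tier backup with daily_retention=7, it expires one week later:
date -u -d "$BACKUP_DATE + 7 days" +%Y-%m-%dT00:00:00Z    # 2024-12-08T00:00:00Z

# As a monthly-tier backup (1st of month) with monthly_retention=12:
date -u -d "$BACKUP_DATE + 12 months" +%Y-%m-%dT00:00:00Z # 2025-12-01T00:00:00Z
```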
Configure Retention Policy
API Endpoint: POST /api/v1/backups/environments/{environment_id}/policy
Request Body:
```json
{
"is_enabled": true,
"schedule_cron": "0 2 * * *",
"timezone": "UTC",
"daily_retention": 7,
"weekly_retention": 4,
"monthly_retention": 12,
"yearly_retention": 2,
"weekly_backup_day": 6,
"storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
"notify_on_success": false,
"notify_on_failure": true
}
```

Fields:
- `schedule_cron`: Cron expression (e.g., `0 2 * * *` = 2 AM daily)
- `timezone`: IANA timezone (e.g., `America/New_York`, `Europe/London`)
- `daily_retention`: Days to keep daily backups (0-365)
- `weekly_retention`: Weeks to keep weekly backups (0-52)
- `monthly_retention`: Months to keep monthly backups (0-60)
- `yearly_retention`: Years to keep yearly backups (0-10)
- `weekly_backup_day`: Day for weekly backup (0 = Monday, 6 = Sunday)
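A complete call might look like this (a sketch; the environment UUID is a placeholder, and `policy.json` is the request body above saved to a file):

```bash
curl -X POST "https://your-domain.com/api/v1/backups/environments/<ENVIRONMENT_ID>/policy" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d @policy.json
```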
Retention Examples
Aggressive Cleanup (minimize storage cost):
```json
{
"daily_retention": 3,
"weekly_retention": 2,
"monthly_retention": 3,
"yearly_retention": 1
}
```

- Last 3 days of daily backups
- Last 2 Sundays
- Last 3 months (1st of month)
- Last New Year's Day
Balanced Retention (recommended):
```json
{
"daily_retention": 7,
"weekly_retention": 4,
"monthly_retention": 12,
"yearly_retention": 2
}
```

- Last week of daily backups
- Last 4 Sundays
- Last 12 months
- Last 2 years
Compliance Retention (7-year audit requirement):
```json
{
"daily_retention": 30,
"weekly_retention": 12,
"monthly_retention": 24,
"yearly_retention": 7
}
```

- Last month of daily backups
- Last 3 months of weekly backups
- Last 2 years of monthly backups
- Last 7 years of yearly backups
Permissions
Storage management requires specific organization-level permissions from the Permission Matrix system.
Required Permissions
| Permission | Action | Applies To |
|---|---|---|
| `org.storage.create` | Create storage configuration | POST /storage-configs |
| `org.storage.list` | List storage configurations | GET /storage-configs |
| `org.storage.view` | View storage details | GET /storage-configs/{id} |
| `org.storage.update` | Update storage configuration | PATCH /storage-configs/{id} |
| `org.storage.delete` | Delete storage configuration | DELETE /storage-configs/{id} |
Permission Matrix Roles
| Role | Create | List | View | Update | Delete |
|---|---|---|---|---|---|
| portal_admin | ✅ | ✅ | ✅ | ✅ | ✅ |
| org_owner | ✅ | ✅ | ✅ | ✅ | ✅ |
| org_admin | ✅ | ✅ | ✅ | ✅ | ❌ |
| org_member | ❌ | ✅ | ✅ | ❌ | ❌ |
| project_admin | ❌ | ✅ | ✅ | ❌ | ❌ |
| project_member | ❌ | ✅ | ✅ | ❌ | ❌ |
Check Permissions
Frontend (React):
```tsx
import { useAbilities } from '@/hooks/useAbilities';
import { AbilityGate } from '@/components/auth/AbilityGate';

function StorageSettings() {
  const { can } = useAbilities({ organizationId: orgId });
  return (
    <>
      {/* Conditional rendering */}
      {can('org.storage.create') && (
        <Button onClick={handleCreate}>Add Storage Provider</Button>
      )}
      {/* Component-level gate */}
      <AbilityGate permission="org.storage.delete" organizationId={orgId}>
        <Button onClick={handleDelete}>Delete Storage</Button>
      </AbilityGate>
    </>
  );
}
```

Backend (FastAPI):
```python
from core.permissions import check_permission, require_permission

# Route-level decorator
@router.post("/storage-configs")
@require_permission("org.storage.create", org_id_param="organization_id")
async def create_storage_config(
    organization_id: UUID,
    current_user: CurrentUser,
    db: DBSession,
):
    # Route logic
    pass

# Manual check
has_permission = await check_permission(
    db=db,
    user=current_user,
    permission_code="org.storage.update",
    organization_id=organization_id,
)
if not has_permission:
    raise HTTPException(403, "Permission denied")
```

API Reference
List Storage Configurations
Endpoint: GET /api/v1/backups/storage-configs
Query Parameters:
- `organization_id` (required): Organization UUID
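Example request (placeholders as elsewhere in this guide):

```bash
curl "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN"
```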
Response (200 OK):
```json
[
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"organization_id": "123e4567-e89b-12d3-a456-426614174000",
"name": "Cloudflare R2 Production",
"provider": "cloudflare_r2",
"bucket": "oecsh-backups",
"region": "auto",
"endpoint_url": null,
"path_prefix": "backups/",
"storage_class": null,
"is_default": true,
"is_active": true,
"total_size_bytes": 524288000,
"object_count": 42,
"last_used_at": "2024-12-11T08:30:00Z",
"created_at": "2024-12-01T10:00:00Z",
"updated_at": "2024-12-11T08:30:00Z"
}
]
```

Create Storage Configuration
Endpoint: POST /api/v1/backups/storage-configs
Query Parameters:
- `organization_id` (required): Organization UUID
Request Body:
```json
{
"name": "Cloudflare R2 Production",
"provider": "cloudflare_r2",
"bucket": "oecsh-backups",
"account_id": "a1b2c3d4e5f6789...",
"access_key": "YOUR_ACCESS_KEY_ID",
"secret_key": "YOUR_SECRET_ACCESS_KEY",
"path_prefix": "backups/",
"is_default": true
}
```

Response (201 Created): Same as GET response
Get Storage Configuration
Endpoint: GET /api/v1/backups/storage-configs/{config_id}
Response (200 OK): Single storage configuration object
Update Storage Configuration
Endpoint: PATCH /api/v1/backups/storage-configs/{config_id}
Request Body (all fields optional):
```json
{
"name": "Updated Name",
"access_key": "NEW_ACCESS_KEY",
"secret_key": "NEW_SECRET_KEY",
"is_active": true,
"is_default": false
}
```

Response (200 OK): Updated storage configuration
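For example, rotating credentials in place (a sketch; `<CONFIG_ID>` is the configuration UUID):

```bash
curl -X PATCH "https://your-domain.com/api/v1/backups/storage-configs/<CONFIG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"access_key": "NEW_ACCESS_KEY", "secret_key": "NEW_SECRET_KEY"}'
```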
Delete Storage Configuration
Endpoint: DELETE /api/v1/backups/storage-configs/{config_id}
Response (204 No Content)
Error (400 Bad Request):
```json
{
"detail": "Cannot delete storage config with 42 backups. Delete or migrate backups first."
}
```

Test Connection
Endpoint: POST /api/v1/backups/storage-configs/test-connection
Query Parameters:
- `organization_id` (required): Organization UUID
Request Body: Same as create storage configuration
Response (200 OK):
```json
{
"success": true,
"message": "Connection successful",
"bucket_exists": true,
"can_write": true,
"can_read": true,
"latency_ms": 234
}
```

Set Default Storage
Endpoint: POST /api/v1/backups/storage-configs/{config_id}/set-default
Response (200 OK): Updated storage configuration with `is_default: true`
Troubleshooting
Connection Issues
Symptom: "Connection failed: Unable to reach endpoint"
Causes & Solutions:
- Network Connectivity
  - ✅ Check firewall rules allow HTTPS (443) outbound
  - ✅ Verify DNS resolution: `nslookup storage-endpoint.com`
  - ✅ Test connectivity: `curl -I https://storage-endpoint.com`
- Incorrect Endpoint URL
  - ✅ AWS S3: Leave endpoint_url blank (uses default)
  - ✅ Cloudflare R2: Endpoint auto-generated from account_id
  - ✅ MinIO: Verify endpoint format (e.g., `http://minio:9000`)
- SSL Certificate Issues
  - ✅ Use valid SSL certificates for HTTPS endpoints
  - ✅ For self-signed certs, configure trusted CA bundles
  - ✅ MinIO: Use an `https://` endpoint in production
Permission Errors
Symptom: "Access Denied" or "403 Forbidden"
Causes & Solutions:
- Insufficient IAM/Bucket Permissions
  - ✅ AWS S3: Verify IAM policy includes required actions
  - ✅ R2: Check API token has "Object Read & Write"
  - ✅ B2: Verify application key has read/write access to bucket
  - ✅ MinIO: Check user/policy allows bucket operations
- Bucket Policy Conflicts
  - ✅ Ensure bucket policy doesn't deny access
  - ✅ Check bucket ACLs don't override permissions
  - ✅ Verify no organization SCPs blocking access (AWS)
- Missing Permission in OEC.SH
  - ✅ User needs `org.storage.create` permission
  - ✅ Check role assignment: `org_admin` or higher
  - ✅ Review Permission Matrix in Settings
Credential Issues
Symptom: "Invalid credentials" or "Authentication failed"
Causes & Solutions:
- Wrong Credentials
  - ✅ Verify access key and secret key copied correctly
  - ✅ Check for extra whitespace or newlines
  - ✅ Regenerate keys if unsure (old keys may be revoked)
- Expired Credentials
  - ✅ AWS: IAM user credentials don't expire (unless rotated)
  - ✅ R2/B2: API tokens don't expire
  - ✅ MinIO: Check user account status
- Region Mismatch (AWS S3 only)
  - ✅ Bucket in `us-west-2` but config says `us-east-1`
  - ✅ Use correct region for bucket location
Bucket Not Found
Symptom: "NoSuchBucket" or "Bucket does not exist"
Causes & Solutions:
- Bucket Doesn't Exist
  - ✅ Create bucket in storage provider dashboard
  - ✅ Verify bucket name spelling (case-sensitive for some providers)
- Wrong Account/Region
  - ✅ Cloudflare R2: Verify account_id is correct
  - ✅ AWS S3: Check bucket is in the specified region
  - ✅ Backblaze B2: Bucket names are globally unique
- Cross-Region Access (AWS S3)
  - ✅ Use correct region in configuration
  - ✅ Or use the S3 global endpoint (higher latency)
Slow Upload/Download
Symptom: Backups taking longer than expected
Causes & Solutions:
- Network Bandwidth
  - ✅ Check server internet speed: `speedtest-cli`
  - ✅ Monitor network usage during backup
  - ✅ Consider upgrading server network tier
- Geographic Distance
  - ✅ Use the storage provider region closest to your server
  - ✅ AWS: Choose a region near your VMs
  - ✅ Cloudflare R2: Benefits from global network
- Large Filestore
  - ✅ Enable compression (already done by default)
  - ✅ Consider excluding temporary files
  - ✅ Archive old filestore data
- Provider Rate Limits
  - ✅ AWS S3: No rate limits on standard operations
  - ✅ R2: Check Class A/B operation limits
  - ✅ Implement exponential backoff for retries
Storage Quota Exceeded
Symptom: "Storage quota exceeded" or upload failures
Causes & Solutions:
- Provider Storage Limit
  - ✅ Check storage provider dashboard for usage
  - ✅ Upgrade storage tier if needed
  - ✅ Delete old/unnecessary backups
- Cost Budget Exceeded
  - ✅ Review storage costs in billing dashboard
  - ✅ Implement a more aggressive retention policy
  - ✅ Switch to a cheaper provider (e.g., B2)
- Bucket Policy Quota
  - ✅ Some buckets have size quotas configured
  - ✅ Remove or increase the bucket quota
FTP/SFTP Specific Issues
Symptom: FTP/SFTP connection failures
Causes & Solutions:
- Firewall Blocking Ports
  - ✅ FTP: Allow port 21 (control) + passive port range
  - ✅ FTPS: Allow port 990 (implicit) or 21 (explicit)
  - ✅ SFTP: Allow port 22 (SSH)
- Passive Mode Issues
  - ✅ Enable passive mode: `ftp_passive_mode: true`
  - ✅ Configure passive port range on FTP server
  - ✅ Use SFTP instead (single port, simpler)
- Base Path Permissions
  - ✅ Verify `ftp_base_path` directory exists
  - ✅ Check FTP user has write permissions
  - ✅ Test with an FTP client: `ftp ftp.example.com`
- SSL/TLS Certificate Issues (FTPS)
  - ✅ Use a valid SSL certificate on the FTP server
  - ✅ For self-signed: May need to disable cert verification
  - ✅ Check SSL mode: explicit vs implicit
Related Documentation
- Storage Setup Guide - Initial storage configuration walkthrough
- Backup Management - Creating and managing backups
- Restore Operations - Restoring from backups
- Backup Policies - Automated backup scheduling
- Permission Matrix - Role-based access control
- Environment Management - Environment lifecycle
Support
Need Help?
- Documentation: docs.oec.sh
- Community: Discord
- Enterprise Support: support@oec.sh
- Security Issues: security@oec.sh