
Storage Provider Management

Comprehensive guide to configuring and managing cloud storage providers for backup operations on the OEC.SH platform.


Overview

Feature Category: Storage Management
Required Permissions: org.storage.create, org.storage.list, org.storage.update, org.storage.delete
API Prefix: /api/v1/backups/storage-configs
Supported Providers: 6 (S3, R2, B2, MinIO, FTP, SFTP)

Storage providers are the foundation of OEC.SH's backup system. All environment backups (PostgreSQL database dumps + Odoo filestore) are stored in your configured cloud storage, giving you full control and ownership of your backup data.

Key Benefits

  • Multi-Cloud Support: Choose from 6 storage providers based on cost, performance, and compliance needs
  • BYOS (Bring Your Own Storage): You own your backup data - no vendor lock-in
  • Encryption: Credentials encrypted at rest in the database, data encrypted in transit (TLS/SSL)
  • Default Provider: Set organization-wide default for automated backups
  • Connection Testing: Validate credentials before saving configuration
  • Usage Tracking: Monitor storage consumption and object counts

Supported Storage Providers

Provider Comparison

Provider      | Best For                     | Pricing             | Egress Fees | Setup Complexity
--------------|------------------------------|---------------------|-------------|-----------------
AWS S3        | Enterprise, multi-region     | $0.023/GB/mo        | $0.09/GB    | Low
Cloudflare R2 | Cost optimization, downloads | $0.015/GB/mo        | $0 (zero)   | Low
Backblaze B2  | Long-term archival           | $0.005/GB/mo        | $0.01/GB    | Medium
MinIO         | Self-hosted, GDPR compliance | Infrastructure cost | N/A         | Medium
FTP/SFTP      | Legacy systems, on-premise   | Infrastructure cost | N/A         | Low

Provider Selection Guide

Choose AWS S3 if you need:

  • Multi-region replication
  • Advanced lifecycle policies (Glacier, Deep Archive)
  • Enterprise SLAs and support
  • Integration with AWS ecosystem

Choose Cloudflare R2 if you need:

  • Zero egress fees (ideal for frequent downloads)
  • Global CDN distribution
  • S3 compatibility without AWS vendor lock-in
  • Cost-effective storage ($0.015/GB vs S3's $0.023/GB)

Choose Backblaze B2 if you need:

  • Lowest storage cost for archival
  • Simple, predictable pricing
  • Good for infrequently accessed backups
  • Transparent egress fees ($0.01/GB)

Choose MinIO if you need:

  • Self-hosted storage (full control)
  • GDPR/data residency compliance
  • On-premise backup solution
  • S3-compatible private cloud

Choose FTP/SFTP if you have:

  • Existing FTP infrastructure
  • Legacy backup systems
  • Simple file-based storage needs
  • Direct server-to-server transfers

Provider Configurations

1. AWS S3

Use Case: Enterprise-grade cloud storage with multi-region support and advanced features.

Prerequisites

  1. AWS Account: Sign up at aws.amazon.com
  2. S3 Bucket: Create bucket in desired region
  3. IAM User: Create IAM user with S3 permissions

Required IAM Permissions

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}
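
If you prefer to script this setup, here is a minimal, hypothetical boto3 sketch that creates the policy above and attaches it to a dedicated backup user (the policy and user names are placeholders, not OEC.SH conventions):

# Create and attach the backup policy with boto3 (names are placeholders)
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::YOUR_BUCKET_NAME",
            "arn:aws:s3:::YOUR_BUCKET_NAME/*",
        ],
    }],
}

resp = iam.create_policy(
    PolicyName="oecsh-backup-s3-access",   # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName="oecsh-backup-user",          # placeholder IAM user
    PolicyArn=resp["Policy"]["Arn"],
)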

Configuration Example

# API Request
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "AWS S3 Production",
    "provider": "aws_s3",
    "bucket": "oecsh-backups-prod",
    "region": "us-east-1",
    "access_key": "AKIA...",
    "secret_key": "YOUR_SECRET_KEY_HERE",
    "path_prefix": "backups/",
    "storage_class": "STANDARD",
    "is_default": true
  }'

Configuration Fields

  • name: Display name (e.g., "AWS S3 Production")
  • provider: aws_s3
  • bucket: S3 bucket name (must already exist)
  • region: AWS region (e.g., us-east-1, eu-west-1, ap-southeast-1)
  • access_key: IAM user Access Key ID
  • secret_key: IAM user Secret Access Key
  • path_prefix: Optional prefix for all keys (e.g., backups/org-123/)
  • storage_class: S3 storage class (STANDARD, STANDARD_IA, GLACIER, DEEP_ARCHIVE)
  • is_default: Set as organization default for automated backups

Regional Endpoints

  • US East: us-east-1 (N. Virginia) - Default, lowest latency for US
  • US West: us-west-2 (Oregon)
  • Europe: eu-west-1 (Ireland), eu-central-1 (Frankfurt)
  • Asia Pacific: ap-southeast-1 (Singapore), ap-northeast-1 (Tokyo)

Storage Classes Comparison

Class        | Use Case           | Pricing        | Retrieval Time
-------------|--------------------|----------------|-----------------
STANDARD     | Active backups     | $0.023/GB/mo   | Immediate
STANDARD_IA  | Infrequent access  | $0.0125/GB/mo  | Immediate
GLACIER      | Long-term archive  | $0.004/GB/mo   | Minutes to hours
DEEP_ARCHIVE | Compliance archive | $0.00099/GB/mo | 12-48 hours

2. Cloudflare R2

Use Case: Zero-egress cloud storage, perfect for frequent backup downloads and cost optimization.

Prerequisites

  1. Cloudflare Account: Sign up at cloudflare.com
  2. R2 Subscription: Enable R2 storage (requires payment method)
  3. R2 Bucket: Create bucket via Cloudflare dashboard

Setup Steps

Step 1: Create R2 Bucket

  1. Log in to Cloudflare Dashboard → R2
  2. Click "Create Bucket"
  3. Enter bucket name: oecsh-backups
  4. (Optional) Choose location hint for performance
  5. Click "Create Bucket"

Step 2: Generate API Token

  1. Navigate to R2 → Manage R2 API Tokens
  2. Click "Create API Token"
  3. Token name: OEC.SH Backup Storage
  4. Permissions:
    • ✅ Object Read & Write
    • ✅ (Optional) Admin Read & Write for lifecycle management
  5. Click "Create API Token"
  6. Save credentials immediately (shown only once):
    Access Key ID: <32-character-key>
    Secret Access Key: <43-character-secret>

Step 3: Find Account ID

  1. In R2 dashboard, note your Account ID
  2. Format: 32 hexadecimal characters (e.g., a1b2c3d4e5f6...)
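
The account ID also determines the S3 API endpoint that R2 exposes: https://<account_id>.r2.cloudflarestorage.com. As a quick sanity check before configuring OEC.SH, a boto3 sketch like the following (credentials are placeholders) can confirm the bucket is reachable:

# R2 is S3-compatible: derive the endpoint from the account ID
import boto3

account_id = "a1b2c3d4e5f6..."  # 32 hexadecimal characters
endpoint = f"https://{account_id}.r2.cloudflarestorage.com"

s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
    region_name="auto",  # R2 uses "auto" as its region
)
s3.head_bucket(Bucket="oecsh-backups")  # raises if unreachable or unauthorized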

Configuration Example

# API Request
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Cloudflare R2 Backups",
    "provider": "cloudflare_r2",
    "bucket": "oecsh-backups",
    "account_id": "a1b2c3d4e5f6...",
    "access_key": "YOUR_ACCESS_KEY_ID",
    "secret_key": "YOUR_SECRET_ACCESS_KEY",
    "path_prefix": "backups/",
    "is_default": true
  }'

Configuration Fields

  • name: Display name (e.g., "Cloudflare R2 Production")
  • provider: cloudflare_r2
  • bucket: R2 bucket name
  • account_id: Cloudflare account ID (32 characters)
  • access_key: R2 API token Access Key ID
  • secret_key: R2 API token Secret Access Key
  • path_prefix: Optional prefix for organization
  • is_default: Set as default provider

Pricing (as of December 2024)

  • Storage: $0.015/GB/month
  • Class A Operations (writes): $4.50/million requests
  • Class B Operations (reads): $0.36/million requests
  • Egress: $0 (ZERO) - Unlimited free downloads

Why R2 is Recommended

✅ Zero Egress Fees: Download backups without bandwidth charges
✅ Cheaper Storage: $0.015/GB vs S3's $0.023/GB (S3 costs ~50% more)
✅ S3 Compatible: Drop-in replacement, same API
✅ Global Network: Cloudflare's 275+ edge locations
✅ Built-in DDoS Protection: Cloudflare's security layer


3. Backblaze B2

Use Case: Lowest-cost storage for long-term backup archival with predictable pricing.

Prerequisites

  1. Backblaze Account: Sign up at backblaze.com
  2. B2 Bucket: Create bucket via B2 dashboard
  3. Application Key: Generate with read/write permissions

Setup Steps

Step 1: Create B2 Bucket

  1. Log in to Backblaze Dashboard → B2 Cloud Storage
  2. Click "Create a Bucket"
  3. Bucket name: oecsh-backups (must be globally unique)
  4. Bucket type: Private
  5. Default encryption: Enabled (recommended)
  6. Object lock: Disabled (not needed for backups)
  7. Click "Create Bucket"

Step 2: Generate Application Key

  1. Navigate to App Keys tab
  2. Click "Add a New Application Key"
  3. Name: OEC.SH Backup Access
  4. Access:
    • Allow access to: Select bucket → Choose your bucket
    • Type of Access: Read and Write
  5. Click "Create New Key"
  6. Save credentials immediately:
    keyID: <25-character-app-key-id>
    applicationKey: <31-character-secret>

Step 3: Note Region

B2 regions:

  • us-west-001 - US West (California)
  • us-west-002 - US West (Arizona)
  • us-west-004 - US West (Oregon)
  • us-east-005 - US East (Florida)
  • eu-central-003 - EU Central (Amsterdam)

Configuration Example

# API Request
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Backblaze B2 Archive",
    "provider": "backblaze_b2",
    "bucket": "oecsh-backups",
    "region": "us-west-004",
    "access_key": "YOUR_KEY_ID",
    "secret_key": "YOUR_APPLICATION_KEY",
    "path_prefix": "archives/",
    "is_default": false
  }'

Configuration Fields

  • name: Display name (e.g., "B2 Long-term Archive")
  • provider: backblaze_b2
  • bucket: B2 bucket name (globally unique)
  • region: B2 region code (e.g., us-west-004)
  • access_key: Application Key ID (called keyID in B2)
  • secret_key: Application Key (called applicationKey in B2)
  • path_prefix: Optional folder structure
  • is_default: Set as default provider

Pricing (as of December 2024)

  • Storage: $0.005/GB/month (first 10GB free)
  • Downloads: First 1GB/day free, then $0.01/GB
  • API Calls: 2,500 free/day, then $0.004/10,000 calls
  • Uploads: FREE (unlimited)

Cost Example

For 100GB of backups with 5GB downloads/month:

  • Storage: 100GB × $0.005 = $0.50/month
  • Downloads: (5GB - 1GB free) × $0.01 = $0.04/month
  • Total: $0.54/month (vs $3.55 for S3)
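
The same arithmetic, as a small Python sketch (December 2024 list prices; the free-tier handling follows the example above, which deducts 1GB of free downloads and charges the flat storage rate):

def b2_monthly_cost(stored_gb: float, downloaded_gb: float) -> float:
    storage = stored_gb * 0.005                # $/GB/month
    egress = max(downloaded_gb - 1, 0) * 0.01  # first 1GB free, per the example
    return storage + egress

print(b2_monthly_cost(100, 5))  # -> 0.54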

4. MinIO

Use Case: Self-hosted S3-compatible storage for data sovereignty and compliance.

Prerequisites

  1. MinIO Server: Running MinIO instance (self-hosted or managed)
  2. MinIO Bucket: Create bucket via MinIO Console or mc
  3. Access Credentials: MinIO access key and secret key

Setup Steps

Step 1: Install MinIO (if self-hosting)

# Docker installation (recommended)
docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -v /data/minio:/data \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin123 \
  quay.io/minio/minio server /data --console-address ":9001"

Step 2: Create Bucket

# Using MinIO Client (mc)
mc alias set myminio http://localhost:9000 minioadmin minioadmin123
mc mb myminio/oecsh-backups

Or via MinIO Console:

  1. Navigate to http://localhost:9001
  2. Login with root credentials
  3. Go to Buckets → Create Bucket
  4. Bucket name: oecsh-backups

Step 3: Generate Access Key

  1. MinIO Console → Identity → Users
  2. Create user: oecsh-backup-service
  3. Attach policy: readwrite on oecsh-backups
  4. Generate Access Keys
  5. Save credentials:
    Access Key: <20-character-key>
    Secret Key: <40-character-secret>
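
To verify the credentials before configuring OEC.SH, a short sketch with the MinIO Python SDK (pip install minio) works; the host and credentials below are placeholders:

from minio import Minio

client = Minio(
    "minio.internal:9000",        # host:port, without scheme
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY",
    secure=False,                 # set True when the endpoint uses HTTPS
)
assert client.bucket_exists("oecsh-backups"), "bucket missing or inaccessible"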

Configuration Example

# API Request
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "MinIO Self-Hosted",
    "provider": "minio",
    "bucket": "oecsh-backups",
    "endpoint_url": "http://minio.internal:9000",
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "path_prefix": "production/",
    "is_default": true
  }'

Configuration Fields

  • name: Display name (e.g., "MinIO On-Premise")
  • provider: minio
  • bucket: MinIO bucket name
  • endpoint_url: MinIO server URL (e.g., http://minio:9000, https://s3.example.com)
  • access_key: MinIO access key
  • secret_key: MinIO secret key
  • path_prefix: Optional folder structure
  • is_default: Set as default provider

Security Best Practices

✅ Use HTTPS: Always use TLS/SSL for production
✅ Separate Credentials: Don't use root credentials for backups
✅ Bucket Policies: Restrict access to specific prefixes
✅ Network Isolation: Keep MinIO on private network
✅ Encryption: Enable server-side encryption (SSE)


5. FTP (File Transfer Protocol)

Use Case: Legacy systems, simple file-based storage, compatibility with existing infrastructure.

Prerequisites

  1. FTP Server: Running FTP server (ProFTPD, vsftpd, FileZilla Server)
  2. FTP Account: Username and password with read/write permissions
  3. Base Directory: Writable directory for backups

Configuration Example

# API Request
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "FTP Legacy Storage",
    "provider": "ftp",
    "bucket": "backups",
    "ftp_host": "ftp.example.com",
    "ftp_port": 21,
    "ftp_use_ssl": true,
    "ftp_passive_mode": true,
    "ftp_base_path": "/backups/oecsh",
    "access_key": "ftp_username",
    "secret_key": "ftp_password",
    "path_prefix": "production/",
    "is_default": false
  }'

Configuration Fields

  • name: Display name (e.g., "FTP Legacy Server")
  • provider: ftp
  • bucket: Logical bucket name (used for organization)
  • ftp_host: FTP server hostname or IP address
  • ftp_port: FTP port (default: 21, FTPS: 990)
  • ftp_use_ssl: Enable FTPS (FTP over SSL/TLS) - strongly recommended
  • ftp_passive_mode: Use passive mode (recommended for firewalls)
  • ftp_base_path: Base directory on FTP server (e.g., /backups)
  • access_key: FTP username
  • secret_key: FTP password
  • path_prefix: Subdirectory within base_path
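
Before saving, you can check these settings against the server with Python's standard-library ftplib; this is an illustrative sketch with placeholder values:

from ftplib import FTP_TLS
import io

ftps = FTP_TLS()
ftps.connect("ftp.example.com", 21)
ftps.login("ftp_username", "ftp_password")
ftps.prot_p()                # encrypt the data channel, matching ftp_use_ssl
ftps.set_pasv(True)          # passive mode, matching ftp_passive_mode
ftps.cwd("/backups/oecsh")   # must match ftp_base_path
ftps.storbinary("STOR .oecsh-test", io.BytesIO(b"ok"))  # write check
ftps.delete(".oecsh-test")
ftps.quit()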

FTP vs FTPS

Feature    | FTP               | FTPS (FTP over SSL/TLS)
-----------|-------------------|---------------------------------
Encryption | ❌ None           | ✅ TLS/SSL
Security   | ⚠️ Low            | ✅ High
Port       | 21                | 21 (explicit) or 990 (implicit)
Use Case   | Internal networks | Production

Recommendation: Always use FTPS (ftp_use_ssl: true) for production environments.


6. SFTP (SSH File Transfer Protocol)

Use Case: Secure file transfers over SSH, common in enterprise environments.

Prerequisites

  1. SSH Server: Running SSH server with SFTP subsystem
  2. SSH Account: Username and password (or SSH key)
  3. Base Directory: Writable directory for backups

Configuration Example

# API Request
curl -X POST "https://your-domain.com/api/v1/backups/storage-configs?organization_id=<ORG_ID>" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "SFTP Secure Storage",
    "provider": "sftp",
    "bucket": "backups",
    "ftp_host": "sftp.example.com",
    "ftp_port": 22,
    "ftp_base_path": "/home/backups/oecsh",
    "access_key": "backup_user",
    "secret_key": "secure_password",
    "path_prefix": "production/",
    "is_default": true
  }'

Configuration Fields

  • name: Display name (e.g., "SFTP Production")
  • provider: sftp
  • bucket: Logical bucket name
  • ftp_host: SFTP server hostname or IP
  • ftp_port: SSH port (default: 22)
  • ftp_base_path: Base directory (e.g., /home/backups)
  • access_key: SSH username
  • secret_key: SSH password or private key
  • path_prefix: Subdirectory structure
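
An equivalent pre-save check for SFTP can be done with paramiko (pip install paramiko); placeholders throughout:

import io
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="backup_user", password="secure_password")
sftp = paramiko.SFTPClient.from_transport(transport)

sftp.chdir("/home/backups/oecsh")             # must match ftp_base_path
sftp.putfo(io.BytesIO(b"ok"), ".oecsh-test")  # write check
sftp.remove(".oecsh-test")
transport.close()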

SFTP vs FTP

Feature        | SFTP                | FTP/FTPS
---------------|---------------------|-----------------------------
Protocol       | SSH (port 22)       | FTP (port 21)
Encryption     | ✅ Always encrypted | ⚠️ Optional (FTPS)
Firewall       | ✅ Single port      | ⚠️ Multiple ports (passive)
Authentication | Password + key      | Password only
Use Case       | Modern, secure      | Legacy systems

Recommendation: Use SFTP over FTP/FTPS when possible for better security and firewall compatibility.


Add Storage Provider

Web UI Method

Step 1: Navigate to Storage Settings

  1. Go to Dashboard → Settings
  2. Click "Storage" tab in left sidebar
  3. Click "Add Storage Provider" button

Step 2: Select Provider Type

Choose from 6 provider options:

  • AWS S3
  • Cloudflare R2 (recommended)
  • Backblaze B2
  • MinIO
  • FTP
  • SFTP

Step 3: Fill Configuration

Enter provider-specific credentials and settings (see provider sections above).

Step 4: Test Connection

Click "Test Connection" button to validate credentials before saving.

Step 5: Save Configuration

After successful test, click "Create Storage Configuration".


API Method

Endpoint: POST /api/v1/backups/storage-configs

Query Parameters:

  • organization_id (required): Organization UUID

Request Body (example for Cloudflare R2):

{
  "name": "Cloudflare R2 Production",
  "provider": "cloudflare_r2",
  "bucket": "oecsh-backups",
  "account_id": "a1b2c3d4e5f6789...",
  "access_key": "YOUR_ACCESS_KEY_ID",
  "secret_key": "YOUR_SECRET_ACCESS_KEY",
  "path_prefix": "backups/",
  "is_default": true
}

Response (201 Created):

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "organization_id": "123e4567-e89b-12d3-a456-426614174000",
  "name": "Cloudflare R2 Production",
  "provider": "cloudflare_r2",
  "bucket": "oecsh-backups",
  "region": "auto",
  "endpoint_url": null,
  "path_prefix": "backups/",
  "storage_class": null,
  "is_default": true,
  "is_active": true,
  "total_size_bytes": 0,
  "object_count": 0,
  "last_used_at": null,
  "created_at": "2024-12-11T10:30:00Z",
  "updated_at": "2024-12-11T10:30:00Z"
}

Error Responses:

  • 400 Bad Request: Invalid provider configuration or credentials
  • 403 Forbidden: Missing org.storage.create permission or BYOS disabled
  • 409 Conflict: Storage configuration name already exists

Test Connection

Test storage provider connectivity before saving configuration to catch credential or network issues early.

Web UI Method

  1. Fill in storage configuration form
  2. Click "Test Connection" button
  3. Wait for validation (typically 2-5 seconds)
  4. Review test results:
    • Success: Green checkmark, shows latency
    • Failure: Red error message with details

API Method

Endpoint: POST /api/v1/backups/storage-configs/test-connection

Query Parameters:

  • organization_id (required): Organization UUID

Request Body:

{
  "name": "Test Configuration",
  "provider": "cloudflare_r2",
  "bucket": "oecsh-backups",
  "account_id": "YOUR_ACCOUNT_ID",
  "access_key": "YOUR_ACCESS_KEY",
  "secret_key": "YOUR_SECRET_KEY",
  "path_prefix": "backups/"
}

Response (200 OK):

{
  "success": true,
  "message": "Connection successful",
  "bucket_exists": true,
  "can_write": true,
  "can_read": true,
  "latency_ms": 234
}

Failed Test Example:

{
  "success": false,
  "message": "Connection failed: Invalid credentials",
  "bucket_exists": null,
  "can_write": null,
  "can_read": null,
  "latency_ms": null
}

What Tests Validate

✅ Network Connectivity: Can reach storage endpoint
✅ Authentication: Valid credentials
✅ Bucket Exists: Bucket/container accessible
✅ Write Permissions: Can create test object
✅ Read Permissions: Can retrieve test object
✅ Delete Permissions: Can delete test object
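
Conceptually, the test amounts to a small round trip like the sketch below for S3-compatible providers (illustrative only, not the platform's actual implementation):

import time
import uuid
import boto3

def test_connection(endpoint: str, access_key: str, secret_key: str, bucket: str) -> dict:
    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    key = f".oecsh-connection-test-{uuid.uuid4()}"
    start = time.monotonic()
    s3.head_bucket(Bucket=bucket)                      # bucket_exists
    s3.put_object(Bucket=bucket, Key=key, Body=b"ok")  # can_write
    s3.get_object(Bucket=bucket, Key=key)              # can_read
    s3.delete_object(Bucket=bucket, Key=key)           # delete permission
    return {"success": True, "latency_ms": int((time.monotonic() - start) * 1000)}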


Default Provider

Set an organization-wide default storage provider for automated backup policies.

Why Set Default?

  • Automated Backups: Backup policies use default provider automatically
  • Manual Backups: Pre-selected in UI when creating manual backups
  • Consistency: All environments use same storage unless overridden
  • Convenience: No need to specify storage for every backup operation

Set via Web UI

  1. Navigate to Dashboard → Settings → Storage
  2. Find storage configuration in list
  3. Click "Set as Default" button
  4. Confirm action

Only one provider can be default per organization.

Set via API

Endpoint: POST /api/v1/backups/storage-configs/{config_id}/set-default

Response (200 OK):

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "name": "Cloudflare R2 Production",
  "is_default": true,
  ...
}

Default Provider Behavior

  • New backup policies: Automatically use default provider
  • Manual backups: Pre-selected in UI (can override)
  • API backups without storage_config_id: Use default
  • Changing default: Previously created backups unaffected (stored in specific storage)

Storage Encryption

OEC.SH implements multi-layer encryption for storage security.

Encryption at Rest

Database Encryption (credentials):

  • Storage provider credentials encrypted using AES-256-GCM
  • Encryption key derived from ENCRYPTION_KEY environment variable
  • Credentials decrypted only during backup/restore operations
  • Never logged or exposed in API responses
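
For intuition, AES-256-GCM encryption and decryption with the cryptography package look roughly like this; the key derivation shown (SHA-256 of ENCRYPTION_KEY) is an assumption for the sketch, not necessarily OEC.SH's exact scheme:

import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumed derivation: hash the env var down to a 32-byte AES-256 key
key = hashlib.sha256(os.environ["ENCRYPTION_KEY"].encode()).digest()
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, b"storage-secret-key", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # only during backup/restore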

Storage Provider Encryption (backup data):

  • AWS S3: Server-side encryption (SSE-S3 or SSE-KMS)
  • Cloudflare R2: Automatic encryption at rest
  • Backblaze B2: Server-side encryption enabled by default
  • MinIO: SSE-S3 compatible encryption
  • FTP/SFTP: File-level encryption on server (if configured)

Encryption in Transit

HTTPS/TLS:

  • All S3-compatible providers use TLS 1.2+ for data transfer
  • Cloudflare R2: TLS 1.3 with Cloudflare's security layer
  • FTP: Use FTPS (FTP over SSL/TLS) - ftp_use_ssl: true
  • SFTP: SSH protocol with AES-256 encryption

SSH Tunneling (for backup operations):

  • Database dumps transferred over SSH from VMs
  • Filestore archives transferred over SSH
  • Only final storage upload uses storage provider protocol

Security Best Practices

✅ Use HTTPS/TLS: Always enable SSL/TLS for production
✅ Rotate Credentials: Periodically rotate access keys (every 90 days)
✅ Principle of Least Privilege: Grant minimum required permissions
✅ Audit Logs: Monitor storage access logs for anomalies
✅ Network Isolation: Keep storage on private network when possible
✅ Backup Verification: Use checksums to verify data integrity
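
The last point, checksum verification, can be as simple as a streaming SHA-256 comparison; the file names here are hypothetical:

import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "..."  # digest recorded when the backup was uploaded
assert sha256_of("downloaded-backup.dump") == expected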


Backup Retention

Configure retention policies using GFS (Grandfather-Father-Son) scheme for intelligent backup lifecycle management.

GFS Retention Tiers

Tier      | Retention Period    | Frequency       | Use Case
----------|---------------------|-----------------|---------------------
Daily     | 7 days (default)    | Every backup    | Recent changes
Weekly    | 4 weeks (default)   | Sunday backups  | Weekly milestones
Monthly   | 12 months (default) | 1st of month    | Monthly archives
Yearly    | 2 years (default)   | Jan 1st backups | Long-term compliance
Permanent | Never expires       | Manual backups  | Critical snapshots

How Retention Works

  1. Backup Creation: Backup assigned tier based on date/time
  2. Expiration Calculation: expires_at set based on tier and policy
  3. Automatic Cleanup: ARQ worker job deletes expired backups daily
  4. Policy Updates: Updating policy recalculates expiration for existing backups
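
Roughly, tier assignment and expiry calculation work like this sketch (illustrative: months are approximated as 30 days and years as 365; field names match the policy schema below):

from datetime import date, timedelta

def assign_tier(d: date, weekly_backup_day: int = 6) -> str:
    if d.month == 1 and d.day == 1:
        return "yearly"
    if d.day == 1:
        return "monthly"
    if d.weekday() == weekly_backup_day:   # 0=Monday, 6=Sunday
        return "weekly"
    return "daily"

def expires_at(d: date, tier: str, policy: dict) -> date:
    days = {
        "daily": policy["daily_retention"],
        "weekly": policy["weekly_retention"] * 7,
        "monthly": policy["monthly_retention"] * 30,
        "yearly": policy["yearly_retention"] * 365,
    }[tier]
    return d + timedelta(days=days)

policy = {"daily_retention": 7, "weekly_retention": 4,
          "monthly_retention": 12, "yearly_retention": 2}
d = date(2024, 12, 1)  # 1st of the month, so it lands in the monthly tier
print(assign_tier(d), expires_at(d, assign_tier(d), policy))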

Configure Retention Policy

API Endpoint: POST /api/v1/backups/environments/{environment_id}/policy

Request Body:

{
  "is_enabled": true,
  "schedule_cron": "0 2 * * *",
  "timezone": "UTC",
  "daily_retention": 7,
  "weekly_retention": 4,
  "monthly_retention": 12,
  "yearly_retention": 2,
  "weekly_backup_day": 6,
  "storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
  "notify_on_success": false,
  "notify_on_failure": true
}

Fields:

  • schedule_cron: Cron expression (e.g., 0 2 * * * = 2 AM daily)
  • timezone: IANA timezone (e.g., America/New_York, Europe/London)
  • daily_retention: Days to keep daily backups (0-365)
  • weekly_retention: Weeks to keep weekly backups (0-52)
  • monthly_retention: Months to keep monthly backups (0-60)
  • yearly_retention: Years to keep yearly backups (0-10)
  • weekly_backup_day: Day for weekly backup (0=Monday, 6=Sunday)

Retention Examples

Aggressive Cleanup (minimize storage cost):

{
  "daily_retention": 3,
  "weekly_retention": 2,
  "monthly_retention": 3,
  "yearly_retention": 1
}
  • Last 3 days of daily backups
  • Last 2 Sundays
  • Last 3 months (1st of month)
  • Last New Year's Day

Balanced Retention (recommended):

{
  "daily_retention": 7,
  "weekly_retention": 4,
  "monthly_retention": 12,
  "yearly_retention": 2
}
  • Last week of daily backups
  • Last 4 Sundays
  • Last 12 months
  • Last 2 years

Compliance Retention (7-year audit requirement):

{
  "daily_retention": 30,
  "weekly_retention": 12,
  "monthly_retention": 24,
  "yearly_retention": 7
}
  • Last month of daily backups
  • Last 3 months of weekly backups
  • Last 2 years of monthly backups
  • Last 7 years of yearly backups

Permissions

Storage management requires specific organization-level permissions from the Permission Matrix system.

Required Permissions

Permission         | Action                       | Applies To
-------------------|------------------------------|------------------------------
org.storage.create | Create storage configuration | POST /storage-configs
org.storage.list   | List storage configurations  | GET /storage-configs
org.storage.view   | View storage details         | GET /storage-configs/{id}
org.storage.update | Update storage configuration | PATCH /storage-configs/{id}
org.storage.delete | Delete storage configuration | DELETE /storage-configs/{id}

Permission Matrix Roles

Role           | Create | List | View | Update | Delete
---------------|--------|------|------|--------|-------
portal_admin   |        |      |      |        |
org_owner      |        |      |      |        |
org_admin      |        |      |      |        |
org_member     |        |      |      |        |
project_admin  |        |      |      |        |
project_member |        |      |      |        |

Check Permissions

Frontend (React):

import { useAbilities } from '@/hooks/useAbilities';
import { AbilityGate } from '@/components/auth/AbilityGate';
 
function StorageSettings({ orgId }: { orgId: string }) {
  const { can } = useAbilities({ organizationId: orgId });
 
  return (
    <>
      {/* Conditional rendering */}
      {can('org.storage.create') && (
        <Button onClick={handleCreate}>Add Storage Provider</Button>
      )}
 
      {/* Component-level gate */}
      <AbilityGate permission="org.storage.delete" organizationId={orgId}>
        <Button onClick={handleDelete}>Delete Storage</Button>
      </AbilityGate>
    </>
  );
}

Backend (FastAPI):

from fastapi import HTTPException

from core.permissions import check_permission, require_permission
 
# Route-level decorator
@router.post("/storage-configs")
@require_permission("org.storage.create", org_id_param="organization_id")
async def create_storage_config(
    organization_id: UUID,
    current_user: CurrentUser,
    db: DBSession,
):
    # Route logic
    pass
 
# Manual check
has_permission = await check_permission(
    db=db,
    user=current_user,
    permission_code="org.storage.update",
    organization_id=organization_id,
)
if not has_permission:
    raise HTTPException(403, "Permission denied")

API Reference

List Storage Configurations

Endpoint: GET /api/v1/backups/storage-configs

Query Parameters:

  • organization_id (required): Organization UUID

Response (200 OK):

[
  {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "organization_id": "123e4567-e89b-12d3-a456-426614174000",
    "name": "Cloudflare R2 Production",
    "provider": "cloudflare_r2",
    "bucket": "oecsh-backups",
    "region": "auto",
    "endpoint_url": null,
    "path_prefix": "backups/",
    "storage_class": null,
    "is_default": true,
    "is_active": true,
    "total_size_bytes": 524288000,
    "object_count": 42,
    "last_used_at": "2024-12-11T08:30:00Z",
    "created_at": "2024-12-01T10:00:00Z",
    "updated_at": "2024-12-11T08:30:00Z"
  }
]

Create Storage Configuration

Endpoint: POST /api/v1/backups/storage-configs

Query Parameters:

  • organization_id (required): Organization UUID

Request Body:

{
  "name": "Cloudflare R2 Production",
  "provider": "cloudflare_r2",
  "bucket": "oecsh-backups",
  "account_id": "a1b2c3d4e5f6789...",
  "access_key": "YOUR_ACCESS_KEY_ID",
  "secret_key": "YOUR_SECRET_ACCESS_KEY",
  "path_prefix": "backups/",
  "is_default": true
}

Response (201 Created): Same as GET response


Get Storage Configuration

Endpoint: GET /api/v1/backups/storage-configs/{config_id}

Response (200 OK): Single storage configuration object


Update Storage Configuration

Endpoint: PATCH /api/v1/backups/storage-configs/{config_id}

Request Body (all fields optional):

{
  "name": "Updated Name",
  "access_key": "NEW_ACCESS_KEY",
  "secret_key": "NEW_SECRET_KEY",
  "is_active": true,
  "is_default": false
}

Response (200 OK): Updated storage configuration


Delete Storage Configuration

Endpoint: DELETE /api/v1/backups/storage-configs/{config_id}

Response (204 No Content)

Error (400 Bad Request):

{
  "detail": "Cannot delete storage config with 42 backups. Delete or migrate backups first."
}

Test Connection

Endpoint: POST /api/v1/backups/storage-configs/test-connection

Query Parameters:

  • organization_id (required): Organization UUID

Request Body: Same as create storage configuration

Response (200 OK):

{
  "success": true,
  "message": "Connection successful",
  "bucket_exists": true,
  "can_write": true,
  "can_read": true,
  "latency_ms": 234
}

Set Default Storage

Endpoint: POST /api/v1/backups/storage-configs/{config_id}/set-default

Response (200 OK): Updated storage configuration with is_default: true


Troubleshooting

Connection Issues

Symptom: "Connection failed: Unable to reach endpoint"

Causes & Solutions:

  1. Network Connectivity

    • ✅ Check firewall rules allow HTTPS (443) outbound
    • ✅ Verify DNS resolution: nslookup storage-endpoint.com
    • ✅ Test connectivity: curl -I https://storage-endpoint.com
  2. Incorrect Endpoint URL

    • ✅ AWS S3: Leave endpoint_url blank (uses default)
    • ✅ Cloudflare R2: Endpoint auto-generated from account_id
    • ✅ MinIO: Verify endpoint format (e.g., http://minio:9000)
  3. SSL Certificate Issues

    • ✅ Use valid SSL certificates for HTTPS endpoints
    • ✅ For self-signed certs, configure trusted CA bundles
    • ✅ MinIO: Use https:// endpoint in production

Permission Errors

Symptom: "Access Denied" or "403 Forbidden"

Causes & Solutions:

  1. Insufficient IAM/Bucket Permissions

    • ✅ AWS S3: Verify IAM policy includes required actions
    • ✅ R2: Check API token has "Object Read & Write"
    • ✅ B2: Verify application key has read/write access to bucket
    • ✅ MinIO: Check user/policy allows bucket operations
  2. Bucket Policy Conflicts

    • ✅ Ensure bucket policy doesn't deny access
    • ✅ Check bucket ACLs don't override permissions
    • ✅ Verify no organization SCPs blocking access (AWS)
  3. Missing Permission in OEC.SH

    • ✅ User needs org.storage.create permission
    • ✅ Check role assignment: org_admin or higher
    • ✅ Review Permission Matrix in Settings

Credential Issues

Symptom: "Invalid credentials" or "Authentication failed"

Causes & Solutions:

  1. Wrong Credentials

    • ✅ Verify access key and secret key copied correctly
    • ✅ Check for extra whitespace or newlines
    • ✅ Regenerate keys if unsure (old keys may be revoked)
  2. Expired Credentials

    • ✅ AWS: IAM user credentials don't expire (unless rotated)
    • ✅ R2/B2: API tokens don't expire
    • ✅ MinIO: Check user account status
  3. Region Mismatch (AWS S3 only)

    • ✅ Bucket in us-west-2 but config says us-east-1
    • ✅ Use correct region for bucket location

Bucket Not Found

Symptom: "NoSuchBucket" or "Bucket does not exist"

Causes & Solutions:

  1. Bucket Doesn't Exist

    • ✅ Create bucket in storage provider dashboard
    • ✅ Verify bucket name spelling (case-sensitive for some providers)
  2. Wrong Account/Region

    • ✅ Cloudflare R2: Verify account_id is correct
    • ✅ AWS S3: Check bucket in specified region
    • ✅ Backblaze B2: Bucket names are globally unique
  3. Cross-Region Access (AWS S3)

    • ✅ Use correct region in configuration
    • ✅ Or use S3 global endpoint (higher latency)

Slow Upload/Download

Symptom: Backups taking longer than expected

Causes & Solutions:

  1. Network Bandwidth

    • ✅ Check server internet speed: speedtest-cli
    • ✅ Monitor network usage during backup
    • ✅ Consider upgrading server network tier
  2. Geographic Distance

    • ✅ Use storage provider region closest to server
    • ✅ AWS: Choose region near your VMs
    • ✅ Cloudflare R2: Benefits from global network
  3. Large Filestore

    • ✅ Enable compression (already done by default)
    • ✅ Consider excluding temporary files
    • ✅ Archive old filestore data
  4. Provider Rate Limits

    • ✅ AWS S3: No rate limits on standard operations
    • ✅ R2: Check Class A/B operation limits
    • ✅ Implement exponential backoff for retries (see the sketch below)
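
A minimal retry-with-backoff helper, as a hedged sketch (tune the attempts and delays to your provider's limits):

import random
import time

def with_backoff(operation, attempts: int = 5, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                       # give up after the last attempt
            # exponential delay plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))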

Storage Quota Exceeded

Symptom: "Storage quota exceeded" or upload failures

Causes & Solutions:

  1. Provider Storage Limit

    • ✅ Check storage provider dashboard for usage
    • ✅ Upgrade storage tier if needed
    • ✅ Delete old/unnecessary backups
  2. Cost Budget Exceeded

    • ✅ Review storage costs in billing dashboard
    • ✅ Implement aggressive retention policy
    • ✅ Switch to cheaper provider (e.g., B2)
  3. Bucket Policy Quota

    • ✅ Some buckets have size quotas configured
    • ✅ Remove or increase bucket quota

FTP/SFTP Specific Issues

Symptom: FTP/SFTP connection failures

Causes & Solutions:

  1. Firewall Blocking Ports

    • ✅ FTP: Allow port 21 (control) + passive port range
    • ✅ FTPS: Allow port 990 (implicit) or 21 (explicit)
    • ✅ SFTP: Allow port 22 (SSH)
  2. Passive Mode Issues

    • ✅ Enable passive mode: ftp_passive_mode: true
    • ✅ Configure passive port range on FTP server
    • ✅ Use SFTP instead (single port, simpler)
  3. Base Path Permissions

    • ✅ Verify ftp_base_path directory exists
    • ✅ Check FTP user has write permissions
    • ✅ Test with FTP client: ftp ftp.example.com
  4. SSL/TLS Certificate Issues (FTPS)

    • ✅ Use valid SSL certificate on FTP server
    • ✅ For self-signed: May need to disable cert verification
    • ✅ Check SSL mode: explicit vs implicit
