Backup Creation and Scheduling

OEC.SH provides a robust backup system for your Odoo environments with support for manual backups, automated scheduling, and sophisticated retention policies. This guide covers everything you need to know about creating and managing backups.

Overview

The backup system in OEC.SH offers:

  • Manual on-demand backups - Create backups at any time with a single click
  • Automated scheduled backups - Set up recurring backups with cron expressions
  • GFS retention policy - Grandfather-Father-Son rotation for efficient storage management
  • Multi-provider support - Store backups across 6 different cloud providers
  • Integrity verification - SHA-256 checksums ensure backup integrity
  • Atomic backup format - Single ZIP file containing database, filestore, and manifest

Backup Types

OEC.SH categorizes backups based on how they're created:

Manual Backups

User-initiated backups created through the UI or API. Ideal for:

  • Pre-deployment safety snapshots
  • Before making major configuration changes
  • Ad-hoc data protection
  • Testing restore procedures

API Endpoint: POST /api/v1/backups/environments/{id}/backups

Scheduled Backups

Automated backups triggered by backup policies. Configured via cron expressions for:

  • Regular daily backups (e.g., 2 AM daily)
  • Hourly backups for critical environments
  • Weekly backups on specific days
  • Custom schedules matching your operational needs

Cron Job: Runs every hour to check for due backups

Pre-Restore Backups

Automatic safety backups created before restore operations. This provides:

  • Rollback capability if restore fails
  • Protection against data loss
  • Audit trail for restore operations

Type: pre_restore

Pre-Upgrade Backups

Automatic backups created before Odoo version upgrades:

  • Safeguard against upgrade failures
  • Enable rollback to previous version
  • Preserve data before major changes

Type: pre_upgrade

Pre-Destroy Backups

Optional backups created before environment deletion:

  • Final safety net before destruction
  • Compliance requirement for data retention
  • Recovery option for accidental deletions

Type: pre_destroy


Manual Backup Creation

Via Dashboard UI

  1. Navigate to Environment Details page
  2. Click Backups tab in sidebar
  3. Click Create Backup button
  4. Configure backup options:
    • Include Filestore: Toggle to include/exclude files (default: ON)
    • Storage Provider: Select destination (uses default if not specified)
    • Retention Type: Choose GFS tier (auto-determined if not specified)
  5. Click Create Backup
  6. Monitor progress in real-time via SSE updates

The backup will be queued immediately and processed asynchronously by ARQ workers.

Via API

Create a manual backup using the REST API:

curl -X POST https://oec.sh/api/v1/backups/environments/{env-id}/backups \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "backup_type": "manual",
    "include_filestore": true,
    "retention_type": "daily",
    "storage_config_id": "uuid-of-storage-config"
  }'

Request Schema:

interface BackupCreate {
  backup_type?: "manual" | "scheduled" | "pre_restore" | "pre_upgrade" | "pre_destroy";
  storage_config_id?: string; // UUID (uses org default if null)
  include_filestore?: boolean; // Default: true
  retention_type?: "daily" | "weekly" | "monthly" | "yearly" | "permanent"; // Auto-determined if null
}

Response:

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "environment_id": "660e8400-e29b-41d4-a716-446655440000",
  "status": "pending",
  "backup_type": "manual",
  "database_size": 0,
  "filestore_size": 0,
  "total_size": 0,
  "retention_type": "daily",
  "created_at": "2024-12-11T10:30:00Z",
  "task_id": "770e8400-e29b-41d4-a716-446655440000"
}

Rate Limiting

Manual backup creation is rate-limited to prevent abuse:

  • Limit: Configured via BACKUP_RATE_LIMIT environment variable
  • Default: 10 backups per hour per IP address
  • Headers: Rate limit info returned in response headers

Backup Process

Understanding the backup workflow helps troubleshoot issues and optimize performance.

1. Initialization Phase

When you trigger a backup:

  1. Task Creation: A Task record is created with type BACKUP and status PENDING
  2. Backup Record: A Backup record is created in the database
  3. Queue Job: The task is enqueued to ARQ worker with job ID
  4. Status Update: Task status changes to QUEUED

Database Records Created:

  • tasks table: Tracks execution progress
  • backups table: Stores backup metadata

2. Execution Phase

The ARQ worker processes the backup:

┌─────────────────────────────────────────────────────┐
│ ARQ Worker: execute_backup()                        │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ 1. Update status to IN_PROGRESS                     │
│ 2. Connect to VM via SSH                            │
│ 3. Create environment snapshot (metadata JSON)      │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Database Backup (pg_dump)                           │
│ - Container: {env_id}_db                           │
│ - Format: PostgreSQL custom format (-Fc)            │
│ - Compression: Level 9 (-Z9)                        │
│ - Output: /tmp/backup_{uuid}.dump                  │
│ - Checksum: SHA-256                                 │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Filestore Backup (tar.gz)                           │
│ - Container: {env_id}_odoo                          │
│ - Path: /var/lib/odoo/filestore/{db_name}          │
│ - Command: tar -czf                                 │
│ - Output: /tmp/filestore_{uuid}.tar.gz              │
│ - Checksum: SHA-256                                 │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Download Files via SFTP                             │
│ - Remote → Local: PaaSPortal backend                │
│ - Verify checksums match                            │
│ - Save to temp directory                            │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Create Backup ZIP Package                           │
│ - File: {backup_id}.zip                            │
│ - Contents:                                         │
│   • dump.sql (database)                             │
│   • filestore.tar.gz (files)                        │
│   • manifest.json (metadata)                        │
│ - Compression: ZIP_DEFLATED                         │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Upload to Cloud Storage                             │
│ - Status: UPLOADING                                 │
│ - Key: organizations/{org}/projects/{proj}/      │
│        environments/{env}/{backup_id}.zip          │
│ - Provider: S3/R2/B2/MinIO/FTP/SFTP                 │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Finalization                                        │
│ - Status: COMPLETED                                 │
│ - Calculate expiration date                         │
│ - Update storage usage metrics                      │
│ - Cleanup temp files                                │
│ - Send notification (if enabled)                    │
└─────────────────────────────────────────────────────┘

3. Database Backup Details

The PostgreSQL backup uses pg_dump with optimal settings:

# Executed inside postgres container
docker exec {env_id}_db \
  pg_dump -U odoo -Fc -Z9 {database_name} \
  > /tmp/backup_{uuid}.dump

Format Options:

  • -Fc: Custom format (binary, compressed, supports parallel restore)
  • -Z9: Maximum compression level
  • -U odoo: PostgreSQL user
  • Localhost connection (avoids Docker network overhead)

Advantages of Custom Format:

  • Smaller file size than SQL format
  • Faster restoration
  • Supports parallel restore with pg_restore -j
  • Includes all database objects, permissions, and sequences
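
To illustrate the parallel-restore advantage, here is a minimal sketch that copies a custom-format dump into the database container and restores it with four jobs. This is an illustration only (actual restores are handled by the platform, see Backup Restoration); the container name and user follow the conventions above, and the parallel_restore helper is hypothetical.

import subprocess

def parallel_restore(env_id: str, db_name: str, dump_path: str, jobs: int = 4) -> None:
    """Restore a pg_dump custom-format file with parallel workers."""
    container = f"{env_id}_db"
    # pg_restore -j needs a seekable file, so copy the dump into the container first
    subprocess.run(["docker", "cp", dump_path, f"{container}:/tmp/restore.dump"], check=True)
    subprocess.run(
        ["docker", "exec", container,
         "pg_restore", "-U", "odoo", "-d", db_name, "-j", str(jobs),
         "--clean", "--if-exists", "/tmp/restore.dump"],
        check=True,
    )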

4. Filestore Backup Details

Odoo stores uploaded files in the filestore directory:

# Executed inside odoo container
docker exec {env_id}_odoo \
  tar -czf - \
  -C /var/lib/odoo/filestore {db_name} \
  > /tmp/filestore_{uuid}.tar.gz

Directory Structure:

/var/lib/odoo/filestore/
└── {database_name}/
    ├── 00/
    ├── 01/
    ├── ...
    └── ff/

Compression:

  • Uses gzip compression (-z flag)
  • Relative paths (-C flag)
  • Preserves permissions and timestamps

5. Manifest Structure

Each backup includes a JSON manifest with complete metadata:

{
  "version": "1.0",
  "backup_id": "550e8400-e29b-41d4-a716-446655440000",
  "environment_id": "660e8400-e29b-41d4-a716-446655440000",
  "backup_type": "manual",
  "created_at": "2024-12-11T10:30:00Z",
 
  "database": {
    "name": "prod_db_main",
    "key": "organizations/.../550e8400.zip",
    "size": 52428800,
    "checksum": "a1b2c3d4e5f6...",
    "format": "pg_dump_custom"
  },
 
  "filestore": {
    "key": null,
    "size": 104857600,
    "checksum": "f6e5d4c3b2a1...",
    "format": "tar.gz"
  },
 
  "environment_snapshot": {
    "environment": {
      "id": "660e8400-e29b-41d4-a716-446655440000",
      "name": "Production",
      "type": "production",
      "status": "running",
      "container_name": "660e8400_odoo",
      "db_name": "prod_db_main",
      "domain": "prod.example.com"
    },
    "project": {
      "id": "770e8400-e29b-41d4-a716-446655440000",
      "name": "E-commerce Platform",
      "slug": "ecommerce-platform",
      "odoo_version": "18.0"
    },
    "vm": {
      "id": "880e8400-e29b-41d4-a716-446655440000",
      "name": "server-1",
      "ip_address": "165.22.65.97"
    },
    "timestamp": "2024-12-11T10:30:00Z"
  },
 
  "retention": {
    "type": "daily",
    "expires_at": "2024-12-18T10:30:00Z"
  }
}

Manifest Fields Explained:

Field | Description
version | Manifest schema version (currently "1.0")
backup_id | Unique identifier for this backup
environment_id | Source environment UUID
backup_type | How the backup was created (manual, scheduled, etc.)
database.checksum | SHA-256 hash for integrity verification
database.format | PostgreSQL dump format identifier
filestore.size | Total bytes in the filestore archive
environment_snapshot | Complete state of the environment at backup time
retention.expires_at | When the backup will be automatically deleted
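
As an illustration of how the manifest ties the package together, the sketch below opens a downloaded backup ZIP, reads manifest.json, and recomputes the filestore checksum. The file names match the ZIP contents listed above; the verify_backup_zip helper is hypothetical, not part of OEC.SH.

import hashlib
import json
import zipfile

def verify_backup_zip(zip_path: str) -> dict:
    """Read manifest.json from a backup ZIP and recompute the filestore checksum."""
    with zipfile.ZipFile(zip_path) as zf:
        manifest = json.loads(zf.read("manifest.json"))
        sha256 = hashlib.sha256()
        with zf.open("filestore.tar.gz") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                sha256.update(chunk)
    actual = sha256.hexdigest()
    expected = manifest["filestore"]["checksum"]
    return {"match": actual == expected, "expected": expected, "actual": actual}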

Backup Scheduling

Automate backups with flexible scheduling policies.

Creating a Backup Policy

Via API:

curl -X POST https://oec.sh/api/v1/backups/environments/{env-id}/policy \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "is_enabled": true,
    "schedule_cron": "0 2 * * *",
    "timezone": "UTC",
    "daily_retention": 7,
    "weekly_retention": 4,
    "monthly_retention": 12,
    "yearly_retention": 2,
    "weekly_backup_day": 6,
    "storage_config_id": "uuid-of-storage",
    "notify_on_success": false,
    "notify_on_failure": true
  }'

Policy Fields:

interface BackupPolicyCreate {
  is_enabled: boolean;          // Enable/disable automated backups
  schedule_cron: string;         // Cron expression (5 parts)
  timezone: string;              // Timezone for cron (default: "UTC")
 
  // GFS Retention Settings
  daily_retention: number;       // Days to keep daily backups (0-365)
  weekly_retention: number;      // Weeks to keep weekly backups (0-52)
  monthly_retention: number;     // Months to keep monthly backups (0-60)
  yearly_retention: number;      // Years to keep yearly backups (0-10)
  weekly_backup_day: number;     // Day for weekly (0=Mon, 6=Sun)
 
  // Storage
  storage_config_id?: string;    // Default storage (optional)
 
  // Notifications
  notify_on_success: boolean;    // Email on successful backup
  notify_on_failure: boolean;    // Email on failed backup
}

Cron Expression Examples

The schedule_cron field uses standard 5-part cron syntax:

┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6, 0=Sunday)
│ │ │ │ │
* * * * *

Common Schedules:

Schedule | Cron Expression | Description
Daily at 2 AM | 0 2 * * * | Default - runs every day at 2:00 AM
Every 6 hours | 0 */6 * * * | Runs at 00:00, 06:00, 12:00, 18:00
Twice daily | 0 2,14 * * * | Runs at 2:00 AM and 2:00 PM
Hourly | 0 * * * * | Every hour on the hour
Business hours | 0 9-17 * * 1-5 | Hourly on weekdays, 9 AM to 5 PM
Weekly Sunday | 0 3 * * 0 | Every Sunday at 3:00 AM
Monthly | 0 1 1 * * | 1st of month at 1:00 AM
Weekday mornings | 30 8 * * 1-5 | Monday-Friday at 8:30 AM
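
Before saving a policy, a schedule can be sanity-checked with croniter, the same library the scheduler uses (see the cron execution flow below). A quick sketch; the preview_schedule helper is illustrative only.

from datetime import datetime, UTC

from croniter import croniter

def preview_schedule(expression: str, runs: int = 3) -> list[datetime]:
    """Return the next few UTC run times for a 5-part cron expression."""
    if not croniter.is_valid(expression):
        raise ValueError(f"Invalid cron expression: {expression}")
    it = croniter(expression, datetime.now(UTC))
    return [it.get_next(datetime) for _ in range(runs)]

print(preview_schedule("0 2 * * *"))  # the next three daily 2:00 AM runs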

Cron Execution Flow

The scheduled backup system runs as an ARQ cron job:

┌─────────────────────────────────────────────────────┐
│ Cron Job: execute_scheduled_backups()               │
│ Frequency: Every hour (at :00)                      │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Query all enabled BackupPolicy records              │
│ WHERE is_enabled = true                             │
│   AND next_backup_at <= NOW()                       │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ For each policy:                                    │
│ 1. Create scheduled backup                          │
│ 2. Update last_backup_at = NOW()                    │
│ 3. Calculate next_backup_at using croniter          │
│ 4. Update last_backup_status                        │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ Send notifications (if configured)                  │
│ - Success: notify_on_success = true                 │
│ - Failure: notify_on_failure = true                 │
└─────────────────────────────────────────────────────┘

Cron Implementation (from backend/tasks/worker.py):

# Run at the start of every hour
cron(
    execute_scheduled_backups,
    minute=0,
)

Next Backup Calculation:

Uses croniter library to compute the next execution time:

from croniter import croniter
from datetime import datetime, UTC
 
now = datetime.now(UTC)
cron = croniter(policy.schedule_cron, now)
next_backup_at = cron.get_next(datetime)

GFS Retention Policy

OEC.SH implements the Grandfather-Father-Son (GFS) backup rotation scheme for optimal storage efficiency and recovery flexibility.

Retention Tiers

Backups are categorized into retention tiers based on their age and importance:

Tier | Description | Default Retention | Promotion Logic
Daily | Regular daily backups | 7 days | Created daily
Weekly | Weekly snapshots | 4 weeks | Promoted from first daily of week
Monthly | Monthly archives | 12 months | Promoted from first weekly of month
Yearly | Long-term archives | 2 years | Promoted from first monthly of year
Permanent | Never expires | Forever | Manually set

How GFS Works

1. Initial Backup Classification

When a backup is created, its initial retention type is determined by the current date:

# From backend/services/backup_service.py
def _determine_retention_type(self, environment_id: UUID) -> RetentionType:
    now = datetime.now(UTC)
 
    # First day of year → YEARLY
    if now.month == 1 and now.day == 1:
        return RetentionType.YEARLY
 
    # First day of month → MONTHLY
    if now.day == 1:
        return RetentionType.MONTHLY
 
    # Sunday (weekday 6) → WEEKLY
    if now.weekday() == 6:
        return RetentionType.WEEKLY
 
    # All other days → DAILY
    return RetentionType.DAILY

2. Expiration Calculation

Each tier has a configurable retention period:

# From BackupPolicy defaults
daily_retention = 7      # Keep 7 daily backups
weekly_retention = 4     # Keep 4 weekly backups
monthly_retention = 12   # Keep 12 monthly backups
yearly_retention = 2     # Keep 2 yearly backups

Expiration Formula:

Tier | Expires At
Daily | created_at + daily_retention days
Weekly | created_at + weekly_retention weeks
Monthly | created_at + (monthly_retention × 30) days
Yearly | created_at + (yearly_retention × 365) days
Permanent | NULL (never expires)
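
A minimal sketch of the formula above, assuming the retention counts are passed in as a plain dict (the real implementation lives in the retention service):

from datetime import datetime, timedelta, UTC

def compute_expires_at(retention_type: str, created_at: datetime, policy: dict) -> datetime | None:
    """Apply the per-tier expiration formula from the table above."""
    if retention_type == "daily":
        return created_at + timedelta(days=policy["daily_retention"])
    if retention_type == "weekly":
        return created_at + timedelta(weeks=policy["weekly_retention"])
    if retention_type == "monthly":
        return created_at + timedelta(days=30 * policy["monthly_retention"])
    if retention_type == "yearly":
        return created_at + timedelta(days=365 * policy["yearly_retention"])
    return None  # permanent: never expires

defaults = {"daily_retention": 7, "weekly_retention": 4, "monthly_retention": 12, "yearly_retention": 2}
print(compute_expires_at("weekly", datetime.now(UTC), defaults))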

3. Automatic Promotion

The retention service automatically promotes backups to higher tiers:

Cron Job: Runs daily at 3:00 AM UTC

# From backend/tasks/worker.py
cron(
    execute_retention_cleanup,
    hour=3,
    minute=0,
)

Promotion Rules:

  1. Daily → Weekly:

    • Trigger: First completed daily backup of the week
    • Condition: No existing weekly backup for current week
    • Day: Configured via weekly_backup_day (default: Sunday)
    • New expiration: now + weekly_retention weeks
  2. Weekly → Monthly:

    • Trigger: First completed weekly backup of the month
    • Condition: No existing monthly backup for current month
    • Day: 1st of month
    • New expiration: now + (monthly_retention × 30) days
  3. Monthly → Yearly:

    • Trigger: First completed monthly backup of the year
    • Condition: No existing yearly backup for current year
    • Day: January 1st
    • New expiration: now + (yearly_retention × 365) days
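
As a rough illustration of rule 1, a daily-to-weekly promotion check might look like the sketch below. It is simplified: the real retention service also queries for the existing weekly backup and updates expires_at, and the BackupInfo dataclass here is a stand-in for the ORM model.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class BackupInfo:
    retention_type: str
    status: str
    created_at: datetime

def should_promote_to_weekly(backup: BackupInfo, weekly_exists_this_week: bool,
                             weekly_backup_day: int = 6) -> bool:
    # Only completed daily backups are candidates
    if backup.retention_type != "daily" or backup.status != "completed":
        return False
    # Skip if this week already has a weekly backup
    if weekly_exists_this_week:
        return False
    # weekly_backup_day uses 0=Mon .. 6=Sun, matching the policy schema above
    return backup.created_at.weekday() == weekly_backup_day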

Retention Cleanup Process

The daily retention job performs three tasks:

┌─────────────────────────────────────────────────────┐
│ 1. Mark Expired Backups                             │
│    - Query: WHERE expires_at < NOW()                │
│    - Action: status = EXPIRED                       │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ 2. Delete Storage Objects                           │
│    - Find: status IN (EXPIRED, DELETED)             │
│    - Delete from cloud storage                      │
│    - Update storage_config usage metrics            │
│    - Clear storage keys from backup record          │
└─────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────┐
│ 3. Promote Eligible Backups                         │
│    - Check each enabled policy                      │
│    - Promote daily → weekly                         │
│    - Promote weekly → monthly                       │
│    - Promote monthly → yearly                       │
│    - Update retention_type and expires_at           │
└─────────────────────────────────────────────────────┘

Example GFS Timeline

Here's how backups evolve over time with default settings:

Day 1 (Monday):
  CREATE: Backup A (DAILY, expires in 7 days)

Day 7 (Sunday):
  CREATE: Backup B (WEEKLY, expires in 4 weeks)
  EXPIRE: Backup A (past 7 days)

Day 30 (First Sunday of month):
  CREATE: Backup C (WEEKLY, expires in 4 weeks)
  PROMOTE: Backup B → MONTHLY (new expiration: 12 months)

Day 365 (January 1):
  CREATE: Backup D (YEARLY, expires in 2 years)
  PROMOTE: Oldest MONTHLY → YEARLY

Result after 1 year:
  - 7 DAILY backups (last 7 days)
  - 4 WEEKLY backups (last 4 weeks)
  - 12 MONTHLY backups (last 12 months)
  - 1 YEARLY backup (start of year)

Storage Efficiency

GFS rotation significantly reduces storage costs:

Without GFS (keeping all daily backups for 1 year):

  • Backups: 365 daily backups
  • Assuming 1 GB each = 365 GB total

With GFS (default policy):

  • 7 daily (7 GB)
  • 4 weekly (4 GB)
  • 12 monthly (12 GB)
  • 2 yearly (2 GB)
  • Total: 25 GB (93% reduction)
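
The same arithmetic, generalized to any policy settings and average backup size (a quick sketch, not platform code):

def gfs_storage_gb(avg_backup_gb: float, daily: int = 7, weekly: int = 4,
                   monthly: int = 12, yearly: int = 2) -> float:
    """Approximate steady-state storage: retained backup count times average size."""
    return avg_backup_gb * (daily + weekly + monthly + yearly)

print(gfs_storage_gb(1.0))  # 25.0 GB with the default policy, vs. 365 GB for a year of dailies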

Backup Metadata

Each backup record stores comprehensive metadata for tracking and auditing.

Database Schema

-- From backend/models/backup.py
CREATE TABLE backups (
    id UUID PRIMARY KEY,
    environment_id UUID NOT NULL REFERENCES project_environments(id),
    storage_config_id UUID REFERENCES storage_configs(id),
    task_id UUID REFERENCES tasks(id),
 
    -- Status tracking
    status VARCHAR NOT NULL,  -- pending, in_progress, uploading, completed, failed, expired, deleted
    backup_type VARCHAR NOT NULL,  -- manual, scheduled, pre_restore, pre_upgrade, pre_destroy
 
    -- Storage paths
    database_key VARCHAR(500),    -- S3 key for ZIP file
    filestore_key VARCHAR(500),   -- NULL (included in ZIP)
    manifest_key VARCHAR(500),    -- NULL (included in ZIP)
 
    -- Sizes (bytes)
    database_size BIGINT DEFAULT 0,
    filestore_size BIGINT DEFAULT 0,
    total_size BIGINT DEFAULT 0,
    compressed_size BIGINT DEFAULT 0,
 
    -- Timing
    started_at TIMESTAMP WITH TIME ZONE,
    completed_at TIMESTAMP WITH TIME ZONE,
    duration_seconds INTEGER,
 
    -- Retention (GFS)
    retention_type VARCHAR,  -- daily, weekly, monthly, yearly, permanent
    expires_at TIMESTAMP WITH TIME ZONE,
 
    -- Verification
    is_verified BOOLEAN DEFAULT FALSE,
    verified_at TIMESTAMP WITH TIME ZONE,
    verification_error TEXT,
 
    -- Integrity checksums
    database_checksum VARCHAR(64),  -- SHA-256
    filestore_checksum VARCHAR(64),
 
    -- Error tracking
    error_message TEXT,
    error_details JSON,
 
    -- Environment snapshot
    environment_snapshot JSON,
 
    -- Audit fields
    create_date TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    write_date TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    created_by UUID,
    write_by UUID
);
 
-- Indexes for performance
CREATE INDEX idx_backups_environment ON backups(environment_id);
CREATE INDEX idx_backups_status ON backups(status);
CREATE INDEX idx_backups_expires_at ON backups(expires_at);

Size Calculation

Backup sizes are tracked at multiple levels:

Field | Description | Calculation
database_size | Raw PostgreSQL dump size | stat -c%s dump.sql
filestore_size | Tar.gz archive size | stat -c%s filestore.tar.gz
total_size | Sum of components | database_size + filestore_size
compressed_size | Final ZIP size | stat -c%s backup.zip

Storage Usage Tracking:

# From backup_service.py
storage_config.total_size_bytes += backup.compressed_size
storage_config.object_count += 1
storage_config.last_used_at = datetime.now(UTC)

Odoo Version Tracking

The environment snapshot captures the Odoo version at backup time:

{
  "project": {
    "odoo_version": "18.0"
  }
}

This enables:

  • Version compatibility checks before restore
  • Migration planning
  • Audit trail for upgrades

Creator Tracking

All backups track who created them:

created_by UUID REFERENCES users(id)

Captured For:

  • Manual backups: Current user from API request
  • Scheduled backups: NULL (system-generated)
  • Pre-restore backups: User initiating restore
  • Pre-upgrade backups: User initiating upgrade

Storage Providers

Backups are stored in cloud storage. See Storage Configuration for details on supported providers:

  • AWS S3
  • Cloudflare R2
  • Backblaze B2
  • MinIO (self-hosted)
  • FTP
  • SFTP

Storage Key Structure

Backups use a hierarchical key structure for organization:

organizations/{org_id}/
  projects/{project_id}/
    environments/{env_id}/
      {backup_id}.zip

Example:

organizations/550e8400-e29b-41d4-a716-446655440000/
  projects/660e8400-e29b-41d4-a716-446655440000/
    environments/770e8400-e29b-41d4-a716-446655440000/
      880e8400-e29b-41d4-a716-446655440000.zip

Benefits:

  • Easy to list all backups for an organization
  • Supports multi-tenancy
  • Enables per-organization storage quotas
  • Simplifies migration between storage providers
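
For illustration, such a key can be assembled from the IDs as follows (the platform builds this server-side; the helper is hypothetical):

from uuid import UUID

def backup_storage_key(org_id: UUID, project_id: UUID, env_id: UUID, backup_id: UUID) -> str:
    """Build the hierarchical object key used for backup ZIPs."""
    return (
        f"organizations/{org_id}/"
        f"projects/{project_id}/"
        f"environments/{env_id}/"
        f"{backup_id}.zip"
    )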

Backup Verification

Checksum Integrity

All backup components are verified with SHA-256 checksums:

# From backup_service.py
def _calculate_file_checksum(self, file_path: Path) -> str:
    sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return sha256.hexdigest()

Verification Points:

  1. On VM: Calculate checksum after creating dump/tar.gz
  2. After Download: Verify checksum matches after SFTP transfer
  3. After Upload: Store checksum in database for future verification

Double Verification:

# Remote checksum (on VM)
exit_code, stdout, _ = ssh.execute_command(f"sha256sum {remote_temp}")
remote_checksum = stdout.split()[0]
 
# Local checksum (on PaaSPortal backend)
local_checksum = self._calculate_file_checksum(local_path)
 
# Compare
if remote_checksum != local_checksum:
    raise BackupError("Checksum mismatch")

Backup Validation Fields

class Backup:
    is_verified: bool = False
    verified_at: datetime | None
    verification_error: str | None
 
    database_checksum: str  # SHA-256
    filestore_checksum: str  # SHA-256

Test Restore Capability

Recommended: Periodically restore backups into a test environment to verify:

  • Backup files are intact
  • Restore process works correctly
  • Data integrity is maintained
  • Recovery time is acceptable

See Backup Restoration for restore procedures.


Backup Encryption

In-Transit Encryption

All backup data is encrypted during transfer:

Transfer Stage | Encryption Method
VM → PaaSPortal Backend | SFTP (SSH encryption)
Backend → Cloud Storage | HTTPS/TLS 1.2+
Download URLs | Presigned URLs over HTTPS

At-Rest Encryption

Encryption at rest depends on the storage provider:

Provider | Encryption Support
AWS S3 | AES-256 server-side encryption (SSE-S3)
Cloudflare R2 | Automatic encryption at rest
Backblaze B2 | AES-256 encryption
MinIO | Optional SSE-C or SSE-S3
FTP/SFTP | Depends on server configuration

Configuration: Set storage_class when creating storage config to enable encryption options.


Permissions

Backup operations require specific permissions from the RBAC matrix:

Operation | Required Permission | Level
List backups | project.backups.list | Project
Create backup | project.backups.create | Project
View backup details | project.backups.view | Project
Download backup | project.backups.view | Project
Delete backup | project.backups.delete | Project
Create policy | org.backups.create | Organization
Update policy | org.backups.create | Organization
Delete policy | org.backups.delete | Organization

Role Assignments (default):

Role | Backup Permissions
portal_admin | All backup permissions (global)
org_owner | All org-level backup permissions
org_admin | Create, view, list backups
org_member | View, list backups (read-only)
project_admin | Create, view, list, delete backups
project_member | View, list backups (read-only)

Storage Quotas

Backup storage counts toward organization quota limits.

Quota Calculation

SELECT
    SUM(compressed_size) as total_backup_storage
FROM backups
WHERE environment_id IN (
    SELECT id FROM project_environments
    WHERE project_id IN (
        SELECT id FROM projects
        WHERE organization_id = :org_id
    )
)
AND status NOT IN ('EXPIRED', 'DELETED');

Plan Limits

Backup storage limits by billing plan:

Plan | Backup Storage | Additional Storage
Free | 5 GB | Not available
Starter | 50 GB | $0.10/GB/month
Professional | 200 GB | $0.08/GB/month
Business | 500 GB | $0.06/GB/month
Enterprise | Unlimited | Included

Quota Exceeded Behavior:

  • New manual backups are blocked
  • Scheduled backups continue but send warning notifications
  • Existing backups are not deleted automatically
  • UI shows warning banner

Check Current Usage:

GET /api/v1/organizations/{org-id}/usage

ARQ Background Jobs

Backups are processed asynchronously using ARQ (Async Redis Queue) workers.

Job Queue Architecture

┌──────────────────┐
│   API Request    │
│ POST /backups    │
└────────┬─────────┘


┌────────────────────────────────────────┐
│ 1. Create Task (status: PENDING)       │
│ 2. Create Backup (status: PENDING)     │
│ 3. Enqueue to ARQ                      │
│ 4. Return job_id to client             │
└────────┬───────────────────────────────┘


┌────────────────────────────────────────┐
│         Redis Queue                    │
│  queue_name: "paasportal:queue"        │
└────────┬───────────────────────────────┘


┌────────────────────────────────────────┐
│      ARQ Worker Pool                   │
│  max_jobs: 10 (configurable)           │
│  job_timeout: 3600s (1 hour)           │
└────────┬───────────────────────────────┘


┌────────────────────────────────────────┐
│ execute_backup() task function         │
│ - Updates task status via SSE          │
│ - Calls BackupService.execute_backup() │
│ - Sends completion notification        │
└────────────────────────────────────────┘

Task Lifecycle

# From backend/tasks/backup_tasks.py
 
async def execute_backup(
    ctx: dict,
    task_id: str,
    environment_id: str,
    backup_type: str = "manual",
    storage_config_id: str | None = None,
    include_filestore: bool = True,
    user_id: str | None = None,
    backup_id: str | None = None,
) -> dict[str, Any]:
    # 1. Load task and backup records
    task = await db.get(Task, UUID(task_id))
    backup = await db.get(Backup, UUID(backup_id))
 
    # 2. Update task status
    task.status = TaskStatus.RUNNING
    task.started_at = datetime.now(UTC)
    await broadcast_task_status(task, previous_status)
 
    # 3. Execute backup via service
    backup = await service.execute_backup(
        backup=backup,
        include_filestore=include_filestore,
    )
 
    # 4. Mark task as completed
    task.status = TaskStatus.COMPLETED
    task.result = {"backup_id": str(backup.id)}
    task.completed_at = datetime.now(UTC)
 
    # 5. Send notification
    await _send_backup_notification(db, task, env_name, success=True)
 
    return {"success": True, "backup_id": str(backup.id)}

Retry Logic

ARQ automatically retries failed jobs:

# From backend/tasks/worker.py (WorkerSettings)
max_tries = 3  # Retry failed jobs up to 3 times

Retry Backoff:

  • 1st retry: Immediate
  • 2nd retry: After 60 seconds
  • 3rd retry: After 300 seconds (5 minutes)

Permanent Failure:

  • After 3 failed attempts, job is marked as permanently failed
  • Added to Dead Letter Queue (DLQ) for manual inspection
  • User receives failure notification
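
For reference, the backoff above could be expressed inside a task with ARQ's Retry exception. This is a sketch based on the documented intervals, not the actual worker code; ARQ supplies the 1-based job_try counter in the task context.

from arq import Retry

# Delay (seconds) before the Nth retry, per the documented backoff
RETRY_DELAYS = {1: 0, 2: 60, 3: 300}

async def execute_backup_with_backoff(ctx: dict, *args, **kwargs):
    try:
        ...  # run the actual backup logic here
    except Exception:
        delay = RETRY_DELAYS.get(ctx["job_try"])  # job_try is provided by ARQ
        if delay is None:
            raise  # out of retries: let ARQ record the permanent failure
        raise Retry(defer=delay)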

Monitoring Jobs

Check Task Status (via API):

GET /api/v1/tasks/{task-id}

Real-Time Updates (via SSE):

const eventSource = new EventSource('/api/v1/events/stream');
 
eventSource.addEventListener('task.status', (event) => {
  const data = JSON.parse(event.data);
  console.log(`Task ${data.task_id}: ${data.status}`);
});

Scheduled Backup Cron Jobs

execute_scheduled_backups

Schedule: Every hour at :00

cron(
    execute_scheduled_backups,
    minute=0,
)

Function (from backend/tasks/backup_tasks.py):

async def execute_scheduled_backups(ctx: dict) -> dict[str, Any]:
    now = datetime.now(UTC)
 
    # Find policies due for backup
    result = await db.execute(
        select(BackupPolicy).where(
            BackupPolicy.is_enabled == True,
            BackupPolicy.next_backup_at <= now,
        )
    )
    policies = list(result.scalars().all())
 
    for policy in policies:
        # Create scheduled backup
        backup = await service.create_backup(
            environment_id=policy.environment_id,
            backup_type=BackupType.SCHEDULED,
            storage_config_id=policy.storage_config_id,
            include_filestore=True,
        )
 
        # Update policy
        policy.last_backup_at = now
        policy.last_backup_status = backup.status.value
        policy.next_backup_at = _calculate_next_backup(policy)
 
    return {"triggered": triggered_count, "errors": errors}
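
The _calculate_next_backup helper is not shown in the excerpt; a minimal sketch of what it might look like, assuming croniter plus the policy's timezone field resolved via zoneinfo:

from datetime import datetime, UTC
from zoneinfo import ZoneInfo

from croniter import croniter

def _calculate_next_backup(policy) -> datetime:
    """Next run time for the policy's cron expression, evaluated in its timezone."""
    tz = ZoneInfo(policy.timezone or "UTC")
    it = croniter(policy.schedule_cron, datetime.now(tz))
    # Store the result in UTC so it compares cleanly with next_backup_at above
    return it.get_next(datetime).astimezone(UTC)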

execute_retention_cleanup

Schedule: Daily at 3:00 AM UTC

cron(
    execute_retention_cleanup,
    hour=3,
    minute=0,
)

Actions:

  1. Mark expired backups (expires_at < NOW())
  2. Delete storage objects for expired/deleted backups
  3. Promote eligible backups to higher GFS tiers
  4. Update storage usage metrics

Return Value:

{
  "expired": 12,
  "deleted_storage": 11,
  "promoted": 3,
  "errors": []
}

API Reference

Create Backup

POST /api/v1/backups/environments/{environment_id}/backups

Headers:

Authorization: Bearer {token}
Content-Type: application/json

Request Body:

{
  "backup_type": "manual",
  "storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
  "include_filestore": true,
  "retention_type": "daily"
}

Response (201 Created):

{
  "id": "660e8400-e29b-41d4-a716-446655440000",
  "environment_id": "770e8400-e29b-41d4-a716-446655440000",
  "storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
  "task_id": "880e8400-e29b-41d4-a716-446655440000",
  "status": "pending",
  "backup_type": "manual",
  "database_size": 0,
  "filestore_size": 0,
  "total_size": 0,
  "compressed_size": 0,
  "retention_type": "daily",
  "expires_at": null,
  "is_verified": false,
  "created_at": "2024-12-11T10:30:00Z"
}

List Backups

GET /api/v1/backups/environments/{environment_id}/backups?status=completed&page=1&page_size=20

Query Parameters:

  • status (optional): Filter by status (pending, in_progress, completed, failed, expired)
  • backup_type (optional): Filter by type (manual, scheduled, pre_restore, etc.)
  • page (default: 1): Page number
  • page_size (default: 20, max: 100): Items per page

Response (200 OK):

{
  "items": [
    {
      "id": "660e8400-e29b-41d4-a716-446655440000",
      "environment_id": "770e8400-e29b-41d4-a716-446655440000",
      "status": "completed",
      "backup_type": "manual",
      "database_size": 52428800,
      "filestore_size": 104857600,
      "total_size": 157286400,
      "compressed_size": 147286400,
      "started_at": "2024-12-11T10:30:00Z",
      "completed_at": "2024-12-11T10:35:23Z",
      "duration_seconds": 323,
      "retention_type": "daily",
      "expires_at": "2024-12-18T10:30:00Z",
      "is_verified": true,
      "created_at": "2024-12-11T10:30:00Z"
    }
  ],
  "total": 45,
  "page": 1,
  "page_size": 20,
  "pages": 3
}

Get Backup Details

GET /api/v1/backups/{backup_id}

Response (200 OK):

{
  "id": "660e8400-e29b-41d4-a716-446655440000",
  "environment_id": "770e8400-e29b-41d4-a716-446655440000",
  "storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
  "task_id": "880e8400-e29b-41d4-a716-446655440000",
  "status": "completed",
  "backup_type": "manual",
  "database_size": 52428800,
  "filestore_size": 104857600,
  "total_size": 157286400,
  "compressed_size": 147286400,
  "started_at": "2024-12-11T10:30:00Z",
  "completed_at": "2024-12-11T10:35:23Z",
  "duration_seconds": 323,
  "retention_type": "daily",
  "expires_at": "2024-12-18T10:30:00Z",
  "is_verified": true,
  "verified_at": "2024-12-11T10:35:30Z",
  "database_checksum": "a1b2c3d4e5f6...",
  "filestore_checksum": "f6e5d4c3b2a1...",
  "created_at": "2024-12-11T10:30:00Z"
}

Delete Backup

DELETE /api/v1/backups/{backup_id}

Response (204 No Content)

Note: This marks the backup as DELETED and schedules storage cleanup. The record remains for audit purposes.

Get Download URLs

GET /api/v1/backups/{backup_id}/download?expires_in=3600

Query Parameters:

  • expires_in (default: 3600, min: 300, max: 86400): URL validity in seconds

Response (200 OK):

{
  "backup_id": "660e8400-e29b-41d4-a716-446655440000",
  "database_url": "https://s3.amazonaws.com/bucket/path/backup.zip?X-Amz-Signature=...",
  "filestore_url": null,
  "manifest_url": null,
  "expires_in": 3600,
  "expires_at": "2024-12-11T11:30:00Z"
}

Note: For new backups, database_url contains the single ZIP file. filestore_url and manifest_url are null (included in ZIP).
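
A short sketch of fetching the presigned URL and streaming the ZIP to disk with the requests library. The endpoint and response fields are the ones documented above; api_base and token are placeholders.

import requests

def download_backup(api_base: str, token: str, backup_id: str, dest: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(f"{api_base}/api/v1/backups/{backup_id}/download",
                        params={"expires_in": 3600}, headers=headers, timeout=30)
    resp.raise_for_status()
    url = resp.json()["database_url"]  # single ZIP for new backups
    with requests.get(url, stream=True, timeout=300) as dl:
        dl.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in dl.iter_content(chunk_size=1 << 20):
                f.write(chunk)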

Create Backup Policy

POST /api/v1/backups/environments/{environment_id}/policy

Request Body:

{
  "is_enabled": true,
  "schedule_cron": "0 2 * * *",
  "timezone": "UTC",
  "daily_retention": 7,
  "weekly_retention": 4,
  "monthly_retention": 12,
  "yearly_retention": 2,
  "weekly_backup_day": 6,
  "storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
  "notify_on_success": false,
  "notify_on_failure": true
}

Response (201 Created):

{
  "id": "990e8400-e29b-41d4-a716-446655440000",
  "environment_id": "770e8400-e29b-41d4-a716-446655440000",
  "is_enabled": true,
  "schedule_cron": "0 2 * * *",
  "timezone": "UTC",
  "daily_retention": 7,
  "weekly_retention": 4,
  "monthly_retention": 12,
  "yearly_retention": 2,
  "weekly_backup_day": 6,
  "storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
  "notify_on_success": false,
  "notify_on_failure": true,
  "last_backup_at": null,
  "last_backup_status": null,
  "next_backup_at": "2024-12-12T02:00:00Z",
  "created_at": "2024-12-11T10:30:00Z",
  "updated_at": "2024-12-11T10:30:00Z"
}

Update Backup Policy

PATCH /api/v1/backups/environments/{environment_id}/policy

Request Body (partial update):

{
  "is_enabled": false,
  "daily_retention": 14
}

Response (200 OK): Updated policy object

Delete Backup Policy

DELETE /api/v1/backups/environments/{environment_id}/policy

Response (204 No Content)

Note: Deleting a policy stops automated backups but does NOT delete existing backups.

Get Backup Summary

GET /api/v1/backups/environments/{environment_id}/summary

Response (200 OK):

{
  "environment_id": "770e8400-e29b-41d4-a716-446655440000",
  "total_backups": 45,
  "completed_backups": 42,
  "failed_backups": 2,
  "total_size_bytes": 6442450944,
  "oldest_backup_at": "2024-11-01T02:00:00Z",
  "newest_backup_at": "2024-12-11T02:00:00Z",
  "last_successful_backup_at": "2024-12-11T02:00:00Z",
  "policy_enabled": true,
  "next_scheduled_backup": "2024-12-12T02:00:00Z"
}

Best Practices

Backup Frequency

Production Environments:

  • Minimum: Daily backups at 2 AM
  • Recommended: Every 6 hours for active databases
  • High-traffic: Hourly backups during business hours

Staging Environments:

  • Daily backups sufficient
  • Before major deployments

Development Environments:

  • Weekly backups or manual only
  • Before destructive testing

Retention Configuration

Conservative Approach (longer retention):

{
  "daily_retention": 14,
  "weekly_retention": 8,
  "monthly_retention": 24,
  "yearly_retention": 5
}

Aggressive Approach (minimize storage costs):

{
  "daily_retention": 3,
  "weekly_retention": 2,
  "monthly_retention": 6,
  "yearly_retention": 1
}

Compliance-Focused (regulatory requirements):

{
  "daily_retention": 7,
  "weekly_retention": 4,
  "monthly_retention": 12,
  "yearly_retention": 7,
  "retention_type": "permanent"  // For yearly backups
}

Multi-Provider Redundancy

For critical data, configure backups to multiple providers:

  1. Primary Provider: Fastest/cheapest (e.g., Cloudflare R2)
  2. Secondary Provider: Different region (e.g., AWS S3 us-west-2)
  3. Tertiary Provider: Different vendor (e.g., Backblaze B2)

Implementation:

  • Create multiple storage configs
  • Configure policies to use different providers
  • Manually trigger backups to secondary providers for critical snapshots
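
A sketch of the manual fan-out described above, looping over several storage config IDs via the documented create-backup endpoint (api_base, token, and the IDs are placeholders):

import requests

def backup_to_all_providers(api_base: str, token: str, env_id: str,
                            storage_config_ids: list[str]) -> list[str]:
    """Trigger one manual backup per storage config and return the created backup IDs."""
    headers = {"Authorization": f"Bearer {token}"}
    backup_ids = []
    for config_id in storage_config_ids:
        resp = requests.post(
            f"{api_base}/api/v1/backups/environments/{env_id}/backups",
            json={"backup_type": "manual", "storage_config_id": config_id, "include_filestore": True},
            headers=headers, timeout=30,
        )
        resp.raise_for_status()
        backup_ids.append(resp.json()["id"])
    return backup_ids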

Pre-Deployment Backups

Always create a manual backup before:

  • Deploying new code
  • Running database migrations
  • Changing environment configuration
  • Upgrading Odoo version
  • Installing/updating addons

Workflow:

# 1. Create pre-deployment backup
POST /api/v1/backups/environments/{id}/backups
{"backup_type": "manual", "retention_type": "weekly"}

# 2. Wait for completion (monitor via SSE or polling)

# 3. Proceed with deployment
POST /api/v1/environments/{id}/deploy
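
The same workflow as a polling sketch in Python. Endpoints are the ones documented in this guide; the deploy call in step 3 mirrors the block above and may differ in your setup.

import time

import requests

def backup_then_deploy(api_base: str, token: str, env_id: str, timeout_s: int = 1800) -> None:
    headers = {"Authorization": f"Bearer {token}"}

    # 1. Create the pre-deployment backup
    resp = requests.post(f"{api_base}/api/v1/backups/environments/{env_id}/backups",
                         json={"backup_type": "manual", "retention_type": "weekly"},
                         headers=headers, timeout=30)
    resp.raise_for_status()
    backup_id = resp.json()["id"]

    # 2. Poll until the backup completes (SSE is the event-driven alternative)
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{api_base}/api/v1/backups/{backup_id}",
                              headers=headers, timeout=30).json()["status"]
        if status == "completed":
            break
        if status == "failed":
            raise RuntimeError("Pre-deployment backup failed; aborting deploy")
        time.sleep(10)
    else:
        raise TimeoutError("Backup did not finish in time")

    # 3. Proceed with the deployment
    requests.post(f"{api_base}/api/v1/environments/{env_id}/deploy",
                  headers=headers, timeout=30).raise_for_status()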

Testing Restores

Monthly Test Restore:

  • Select a random backup from last month
  • Restore to a test environment
  • Verify data integrity
  • Document recovery time

Disaster Recovery Drill (quarterly):

  • Simulate complete environment loss
  • Restore from backup to new server
  • Measure total recovery time
  • Identify bottlenecks

Storage Lifecycle Management

Archive Old Backups:

  • Move yearly backups to cheaper storage tiers (Glacier, Archive)
  • Use storage_class field in storage config
  • Implement custom retention rules for compliance

Monitor Storage Costs:

GET /api/v1/organizations/{org-id}/usage

Track total_backup_storage and estimate monthly costs.

Notification Strategy

Production:

{
  "notify_on_success": true,   // Confirm backups are working
  "notify_on_failure": true
}

Development:

{
  "notify_on_success": false,  // Reduce noise
  "notify_on_failure": true    // Only alert on failures
}

Troubleshooting

Backup Stuck in "Pending" Status

Symptoms:

  • Backup status remains pending for > 5 minutes
  • No progress updates via SSE

Causes:

  • ARQ worker not running
  • Redis connection issues
  • Worker queue full (max_jobs reached)

Solutions:

  1. Check worker status:
docker ps | grep worker
  2. Inspect ARQ queue:
redis-cli -h redis -p 6379
> LLEN paasportal:queue
  3. Restart worker:
docker compose restart worker

Backup Fails with "pg_dump failed"

Error Message:

pg_dump failed: could not connect to database

Causes:

  • PostgreSQL container not running
  • Database name mismatch
  • Network connectivity issues

Solutions:

  1. Verify PostgreSQL container:
docker ps | grep {env_id}_db
  2. Test database connection:
docker exec {env_id}_db psql -U odoo -d {db_name} -c "SELECT 1"
  3. Check environment status:
GET /api/v1/environments/{id}

Filestore Backup Empty (0 bytes)

Symptoms:

  • filestore_size = 0 in backup record
  • Warning: "Filestore archive is empty"

Causes:

  • No files uploaded to Odoo
  • Filestore directory not found
  • Permissions issue in container

Solutions:

  1. Check if filestore exists:
docker exec {env_id}_odoo ls -la /var/lib/odoo/filestore/{db_name}
  2. Verify filestore has files:
docker exec {env_id}_odoo du -sh /var/lib/odoo/filestore/{db_name}
  3. If empty, this is normal for new environments

Upload to Storage Fails

Error Message:

Failed to upload backup: Connection timeout

Causes:

  • Invalid storage credentials
  • Network firewall blocking connection
  • Storage bucket doesn't exist
  • Insufficient permissions

Solutions:

  1. Test storage connection:
POST /api/v1/backups/storage-configs/{id}/test
  2. Verify credentials:
GET /api/v1/backups/storage-configs/{id}
  3. Check storage provider status (S3, R2, etc.)
  4. Review provider-specific firewall rules

Checksum Mismatch

Error Message:

Checksum mismatch: remote=abc123, local=def456

Causes:

  • File corruption during transfer
  • Network issues during SFTP download
  • Disk write errors

Solutions:

  1. Retry backup (automatic via ARQ)
  2. Verify VM disk health:
ssh root@{vm_ip} "df -h && iostat"
  3. Check PaaSPortal backend disk:
df -h /tmp
  4. Review network stability between VM and backend

Large Database Backup Timeout

Symptoms:

  • Backup fails after 30 minutes
  • Error: "Job timeout exceeded"

Causes:

  • Database > 10 GB
  • Slow disk I/O on VM
  • Default timeout too low

Solutions:

  1. Increase job timeout in worker config:
# backend/tasks/worker.py
job_timeout = 7200  # 2 hours
  2. Optimize PostgreSQL for faster dumps:
-- Set a dedicated work_mem for the dump session
SET work_mem = '256MB';
  3. Use faster compression level:
# Modify pg_dump command to use -Z6 instead of -Z9
  4. Consider splitting large databases

Scheduled Backups Not Running

Symptoms:

  • next_backup_at is in the past
  • last_backup_at is stale
  • No new backups created

Causes:

  • Policy is_enabled = false
  • Cron job not running
  • Worker not processing cron jobs
  • Invalid cron expression

Solutions:

  1. Check policy status:
GET /api/v1/backups/environments/{id}/policy
  2. Verify cron job is registered:
# Check worker logs
docker logs worker 2>&1 | grep execute_scheduled_backups
  3. Validate cron expression:
from croniter import croniter
croniter.is_valid("0 2 * * *")  # Should return True
  4. Manually trigger scheduled backup check:
# Via ARQ worker console
await execute_scheduled_backups({})

Retention Cleanup Not Deleting Old Backups

Symptoms:

  • Expired backups still in COMPLETED status
  • Storage usage not decreasing
  • expires_at in the past but backup still exists

Causes:

  • Retention cleanup job not running
  • Database transaction issues
  • Storage provider errors

Solutions:

  1. Check last retention cleanup run:
# Review worker logs
docker logs worker 2>&1 | grep execute_retention_cleanup
  2. Manually trigger cleanup:
# Via API or worker console
from services.retention_service import RetentionService
service = RetentionService(db)
summary = await service.run_retention_cleanup()
  3. Verify cron job schedule:
# Should run daily at 3 AM UTC
cron(execute_retention_cleanup, hour=3, minute=0)
  4. Check for storage provider errors in logs

Summary

OEC.SH provides enterprise-grade backup capabilities for Odoo environments:

  • Automated Scheduling: Set it and forget it with cron-based policies
  • GFS Retention: Smart storage management with daily/weekly/monthly/yearly tiers
  • Multi-Cloud: Store backups across 6 different providers
  • Integrity Verified: SHA-256 checksums ensure data integrity
  • Fast Restore: Single ZIP format enables quick recovery
  • Audit Trail: Complete metadata and creator tracking
  • Scalable: ARQ workers handle concurrent backup jobs efficiently

For questions or support, contact your organization administrator or submit a ticket via the dashboard.