Backup Creation and Scheduling
OEC.SH provides a robust backup system for your Odoo environments with support for manual backups, automated scheduling, and sophisticated retention policies. This guide covers everything you need to know about creating and managing backups.
Overview
The backup system in OEC.SH offers:
- Manual on-demand backups - Create backups at any time with a single click
- Automated scheduled backups - Set up recurring backups with cron expressions
- GFS retention policy - Grandfather-Father-Son rotation for efficient storage management
- Multi-provider support - Store backups across 6 different cloud providers
- Integrity verification - SHA-256 checksums ensure backup integrity
- Atomic backup format - Single ZIP file containing database, filestore, and manifest
Backup Types
OEC.SH categorizes backups based on how they're created:
Manual Backups
User-initiated backups created through the UI or API. Ideal for:
- Pre-deployment safety snapshots
- Before making major configuration changes
- Ad-hoc data protection
- Testing restore procedures
API Endpoint: POST /api/v1/backups/environments/{id}/backups
Scheduled Backups
Automated backups triggered by backup policies. Configured via cron expressions for:
- Regular daily backups (e.g., 2 AM daily)
- Hourly backups for critical environments
- Weekly backups on specific days
- Custom schedules matching your operational needs
Cron Job: Runs every hour to check for due backups
Pre-Restore Backups
Automatic safety backups created before restore operations. This provides:
- Rollback capability if restore fails
- Protection against data loss
- Audit trail for restore operations
Type: pre_restore
Pre-Upgrade Backups
Automatic backups created before Odoo version upgrades:
- Safeguard against upgrade failures
- Enable rollback to previous version
- Preserve data before major changes
Type: pre_upgrade
Pre-Destroy Backups
Optional backups created before environment deletion:
- Final safety net before destruction
- Compliance requirement for data retention
- Recovery option for accidental deletions
Type: pre_destroy
Manual Backup Creation
Via Dashboard UI
- Navigate to Environment Details page
- Click Backups tab in sidebar
- Click Create Backup button
- Configure backup options:
- Include Filestore: Toggle to include/exclude files (default: ON)
- Storage Provider: Select destination (uses default if not specified)
- Retention Type: Choose GFS tier (auto-determined if not specified)
- Click Create Backup
- Monitor progress in real-time via SSE updates
The backup will be queued immediately and processed asynchronously by ARQ workers.
Via API
Create a manual backup using the REST API:
curl -X POST https://oec.sh/api/v1/backups/environments/{env-id}/backups \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"backup_type": "manual",
"include_filestore": true,
"retention_type": "daily",
"storage_config_id": "uuid-of-storage-config"
}'
Request Schema:
interface BackupCreate {
backup_type?: "manual" | "scheduled" | "pre_restore" | "pre_upgrade" | "pre_destroy";
storage_config_id?: string; // UUID (uses org default if null)
include_filestore?: boolean; // Default: true
retention_type?: "daily" | "weekly" | "monthly" | "yearly" | "permanent"; // Auto-determined if null
}
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"environment_id": "660e8400-e29b-41d4-a716-446655440000",
"status": "pending",
"backup_type": "manual",
"database_size": 0,
"filestore_size": 0,
"total_size": 0,
"retention_type": "daily",
"created_at": "2024-12-11T10:30:00Z",
"task_id": "770e8400-e29b-41d4-a716-446655440000"
}
Rate Limiting
Manual backup creation is rate-limited to prevent abuse:
- Limit: Configured via the BACKUP_RATE_LIMIT environment variable
- Default: 10 backups per hour per IP address
- Headers: Rate limit info returned in response headers
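For scripting, the same endpoint can be driven from Python and combined with the task endpoint documented later on this page. A minimal sketch using the requests library; the OEC_TOKEN variable, the polling interval, and the lowercase task status values are assumptions, not platform guarantees:

```python
import os
import time

import requests

BASE_URL = "https://oec.sh/api/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['OEC_TOKEN']}"}


def create_backup_and_wait(env_id: str, timeout: int = 1800) -> dict:
    """Trigger a manual backup, then poll its task until it finishes."""
    resp = requests.post(
        f"{BASE_URL}/backups/environments/{env_id}/backups",
        headers=HEADERS,
        json={"backup_type": "manual", "include_filestore": True},
        timeout=30,
    )
    if resp.status_code == 429:
        # Hit the manual-backup rate limit (default: 10/hour per IP)
        raise RuntimeError(f"Rate limited: {dict(resp.headers)}")
    resp.raise_for_status()
    backup = resp.json()

    deadline = time.time() + timeout
    while time.time() < deadline:
        task = requests.get(
            f"{BASE_URL}/tasks/{backup['task_id']}", headers=HEADERS, timeout=30
        ).json()
        if task["status"] in ("completed", "failed"):
            return task
        time.sleep(10)  # SSE gives real-time updates; polling is the simple fallback
    raise TimeoutError("Backup task did not finish within the timeout")
```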
Backup Process
Understanding the backup workflow helps troubleshoot issues and optimize performance.
1. Initialization Phase
When you trigger a backup:
- Task Creation: A Task record is created with type BACKUP and status PENDING
- Backup Record: A Backup record is created in the database
- Queue Job: The task is enqueued to the ARQ worker with a job ID
- Status Update: Task status changes to QUEUED
Database Records Created:
- tasks table: Tracks execution progress
- backups table: Stores backup metadata
2. Execution Phase
The ARQ worker processes the backup:
┌─────────────────────────────────────────────────────┐
│ ARQ Worker: execute_backup() │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ 1. Update status to IN_PROGRESS │
│ 2. Connect to VM via SSH │
│ 3. Create environment snapshot (metadata JSON) │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Database Backup (pg_dump) │
│ - Container: {env_id}_db │
│ - Format: PostgreSQL custom format (-Fc) │
│ - Compression: Level 9 (-Z9) │
│ - Output: /tmp/backup_{uuid}.dump │
│ - Checksum: SHA-256 │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Filestore Backup (tar.gz) │
│ - Container: {env_id}_odoo │
│ - Path: /var/lib/odoo/filestore/{db_name} │
│ - Command: tar -czf │
│ - Output: /tmp/filestore_{uuid}.tar.gz │
│ - Checksum: SHA-256 │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Download Files via SFTP │
│ - Remote → Local: PaaSPortal backend │
│ - Verify checksums match │
│ - Save to temp directory │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Create Backup ZIP Package │
│ - File: {backup_id}.zip │
│ - Contents: │
│ • dump.sql (database) │
│ • filestore.tar.gz (files) │
│ • manifest.json (metadata) │
│ - Compression: ZIP_DEFLATED │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Upload to Cloud Storage │
│ - Status: UPLOADING │
│ - Key: organizations/{org}/projects/{proj}/ │
│ environments/{env}/{backup_id}.zip │
│ - Provider: S3/R2/B2/MinIO/FTP/SFTP │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Finalization │
│ - Status: COMPLETED │
│ - Calculate expiration date │
│ - Update storage usage metrics │
│ - Cleanup temp files │
│ - Send notification (if enabled) │
└─────────────────────────────────────────────────────┘
3. Database Backup Details
The PostgreSQL backup uses pg_dump with optimal settings:
# Executed inside postgres container
docker exec {env_id}_db \
pg_dump -U odoo -Fc -Z9 {database_name} \
> /tmp/backup_{uuid}.dump
Format Options:
- -Fc: Custom format (binary, compressed, supports parallel restore)
- -Z9: Maximum compression level
- -U odoo: PostgreSQL user
- Localhost connection (avoids Docker network overhead)
Advantages of Custom Format:
- Smaller file size than SQL format
- Faster restoration
- Supports parallel restore with pg_restore -j
- Includes all database objects, permissions, and sequences
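For orientation, the database step of the worker can be approximated as a short SSH session against the VM. A simplified sketch with paramiko, assuming key-based root access; the run helper and the error handling are illustrative, not the worker's actual code:

```python
import paramiko


def dump_database(vm_ip: str, env_id: str, db_name: str, backup_uuid: str) -> str:
    """Run pg_dump inside the environment's postgres container and return its SHA-256."""
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(vm_ip, username="root")  # assumes key-based access to the VM

    def run(cmd: str) -> str:
        _, stdout, stderr = ssh.exec_command(cmd)
        if stdout.channel.recv_exit_status() != 0:
            raise RuntimeError(stderr.read().decode())
        return stdout.read().decode()

    remote_path = f"/tmp/backup_{backup_uuid}.dump"
    # Custom format (-Fc) with maximum compression (-Z9), written to /tmp on the VM
    run(f"docker exec {env_id}_db pg_dump -U odoo -Fc -Z9 {db_name} > {remote_path}")
    checksum = run(f"sha256sum {remote_path}").split()[0]
    ssh.close()
    return checksum
```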
4. Filestore Backup Details
Odoo stores uploaded files in the filestore directory:
# Executed inside odoo container
docker exec {env_id}_odoo \
tar -czf - \
-C /var/lib/odoo/filestore {db_name} \
> /tmp/filestore_{uuid}.tar.gz
Directory Structure:
/var/lib/odoo/filestore/
└── {database_name}/
├── 00/
├── 01/
├── ...
└── ff/
Compression:
- Uses gzip compression (-z flag)
- Relative paths (-C flag)
- Preserves permissions and timestamps
5. Manifest Structure
Each backup includes a JSON manifest with complete metadata:
{
"version": "1.0",
"backup_id": "550e8400-e29b-41d4-a716-446655440000",
"environment_id": "660e8400-e29b-41d4-a716-446655440000",
"backup_type": "manual",
"created_at": "2024-12-11T10:30:00Z",
"database": {
"name": "prod_db_main",
"key": "organizations/.../550e8400.zip",
"size": 52428800,
"checksum": "a1b2c3d4e5f6...",
"format": "pg_dump_custom"
},
"filestore": {
"key": null,
"size": 104857600,
"checksum": "f6e5d4c3b2a1...",
"format": "tar.gz"
},
"environment_snapshot": {
"environment": {
"id": "660e8400-e29b-41d4-a716-446655440000",
"name": "Production",
"type": "production",
"status": "running",
"container_name": "660e8400_odoo",
"db_name": "prod_db_main",
"domain": "prod.example.com"
},
"project": {
"id": "770e8400-e29b-41d4-a716-446655440000",
"name": "E-commerce Platform",
"slug": "ecommerce-platform",
"odoo_version": "18.0"
},
"vm": {
"id": "880e8400-e29b-41d4-a716-446655440000",
"name": "server-1",
"ip_address": "165.22.65.97"
},
"timestamp": "2024-12-11T10:30:00Z"
},
"retention": {
"type": "daily",
"expires_at": "2024-12-18T10:30:00Z"
}
}
Manifest Fields Explained:
| Field | Description |
|---|---|
| version | Manifest schema version (currently "1.0") |
| backup_id | Unique identifier for this backup |
| environment_id | Source environment UUID |
| backup_type | How backup was created (manual, scheduled, etc.) |
| database.checksum | SHA-256 hash for integrity verification |
| database.format | PostgreSQL dump format identifier |
| filestore.size | Total bytes in filestore archive |
| environment_snapshot | Complete state of environment at backup time |
| retention.expires_at | When backup will be automatically deleted |
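Because each backup is a single ZIP holding dump.sql, filestore.tar.gz, and manifest.json, the manifest can be used to re-verify a downloaded backup offline. A minimal sketch, assuming the ZIP members contain exactly the bytes that were checksummed during the backup:

```python
import hashlib
import json
import zipfile


def verify_backup_zip(zip_path: str) -> dict[str, bool]:
    """Re-compute SHA-256 checksums of the archive members against manifest.json."""
    results: dict[str, bool] = {}
    with zipfile.ZipFile(zip_path) as zf:
        manifest = json.loads(zf.read("manifest.json"))
        members = {
            "dump.sql": manifest["database"]["checksum"],
            "filestore.tar.gz": manifest["filestore"]["checksum"],
        }
        for member, expected in members.items():
            sha256 = hashlib.sha256()
            with zf.open(member) as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    sha256.update(chunk)
            results[member] = sha256.hexdigest() == expected
    return results


print(verify_backup_zip("backup.zip"))  # e.g. {'dump.sql': True, 'filestore.tar.gz': True}
```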
Backup Scheduling
Automate backups with flexible scheduling policies.
Creating a Backup Policy
Via API:
curl -X POST https://oec.sh/api/v1/backups/environments/{env-id}/policy \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"is_enabled": true,
"schedule_cron": "0 2 * * *",
"timezone": "UTC",
"daily_retention": 7,
"weekly_retention": 4,
"monthly_retention": 12,
"yearly_retention": 2,
"weekly_backup_day": 6,
"storage_config_id": "uuid-of-storage",
"notify_on_success": false,
"notify_on_failure": true
}'
Policy Fields:
interface BackupPolicyCreate {
is_enabled: boolean; // Enable/disable automated backups
schedule_cron: string; // Cron expression (5 parts)
timezone: string; // Timezone for cron (default: "UTC")
// GFS Retention Settings
daily_retention: number; // Days to keep daily backups (0-365)
weekly_retention: number; // Weeks to keep weekly backups (0-52)
monthly_retention: number; // Months to keep monthly backups (0-60)
yearly_retention: number; // Years to keep yearly backups (0-10)
weekly_backup_day: number; // Day for weekly (0=Mon, 6=Sun)
// Storage
storage_config_id?: string; // Default storage (optional)
// Notifications
notify_on_success: boolean; // Email on successful backup
notify_on_failure: boolean; // Email on failed backup
}
Cron Expression Examples
The schedule_cron field uses standard 5-part cron syntax:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6, 0=Sunday)
│ │ │ │ │
* * * * *
Common Schedules:
| Schedule | Cron Expression | Description |
|---|---|---|
| Daily at 2 AM | 0 2 * * * | Default - runs every day at 2:00 AM |
| Every 6 hours | 0 */6 * * * | Runs at 00:00, 06:00, 12:00, 18:00 |
| Twice daily | 0 2,14 * * * | Runs at 2:00 AM and 2:00 PM |
| Hourly | 0 * * * * | Every hour on the hour |
| Business hours | 0 9-17 * * 1-5 | Weekdays 9 AM - 5 PM |
| Weekly Sunday | 0 3 * * 0 | Every Sunday at 3:00 AM |
| Monthly | 0 1 1 * * | 1st of month at 1:00 AM |
| Weekday mornings | 30 8 * * 1-5 | Monday-Friday at 8:30 AM |
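Before saving a policy you can sanity-check an expression and preview its next run times with the same croniter library the scheduler uses; the helper below is local convenience code, not a platform API:

```python
from datetime import UTC, datetime

from croniter import croniter


def preview_schedule(expr: str, runs: int = 3) -> list[datetime]:
    """Validate a 5-part cron expression and return its next few run times (UTC)."""
    if not croniter.is_valid(expr):
        raise ValueError(f"Invalid cron expression: {expr}")
    it = croniter(expr, datetime.now(UTC))
    return [it.get_next(datetime) for _ in range(runs)]


print(preview_schedule("0 2 * * *"))  # the next three runs at 02:00 UTC
```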
Cron Execution Flow
The scheduled backup system runs as an ARQ cron job:
┌─────────────────────────────────────────────────────┐
│ Cron Job: execute_scheduled_backups() │
│ Frequency: Every hour (at :00) │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Query all enabled BackupPolicy records │
│ WHERE is_enabled = true │
│ AND next_backup_at <= NOW() │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ For each policy: │
│ 1. Create scheduled backup │
│ 2. Update last_backup_at = NOW() │
│ 3. Calculate next_backup_at using croniter │
│ 4. Update last_backup_status │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ Send notifications (if configured) │
│ - Success: notify_on_success = true │
│ - Failure: notify_on_failure = true │
└─────────────────────────────────────────────────────┘
Cron Implementation (from backend/tasks/worker.py):
# Run at the start of every hour
cron(
execute_scheduled_backups,
minute=0,
)
Next Backup Calculation:
Uses croniter library to compute the next execution time:
from croniter import croniter
from datetime import datetime, UTC
now = datetime.now(UTC)
cron = croniter(policy.schedule_cron, now)
next_backup_at = cron.get_next(datetime)
GFS Retention Policy
OEC.SH implements the Grandfather-Father-Son (GFS) backup rotation scheme for optimal storage efficiency and recovery flexibility.
Retention Tiers
Backups are categorized into retention tiers based on their age and importance:
| Tier | Description | Default Retention | Promotion Logic |
|---|---|---|---|
| Daily | Regular daily backups | 7 days | Created daily |
| Weekly | Weekly snapshots | 4 weeks | Promoted from first daily of week |
| Monthly | Monthly archives | 12 months | Promoted from first weekly of month |
| Yearly | Long-term archives | 2 years | Promoted from first monthly of year |
| Permanent | Never expires | Forever | Manually set |
How GFS Works
1. Initial Backup Classification
When a backup is created, its initial retention type is determined by the current date:
# From backend/services/backup_service.py
def _determine_retention_type(self, environment_id: UUID) -> RetentionType:
now = datetime.now(UTC)
# First day of year → YEARLY
if now.month == 1 and now.day == 1:
return RetentionType.YEARLY
# First day of month → MONTHLY
if now.day == 1:
return RetentionType.MONTHLY
# Sunday (weekday 6) → WEEKLY
if now.weekday() == 6:
return RetentionType.WEEKLY
# All other days → DAILY
return RetentionType.DAILY
2. Expiration Calculation
Each tier has a configurable retention period:
# From BackupPolicy defaults
daily_retention = 7 # Keep 7 daily backups
weekly_retention = 4 # Keep 4 weekly backups
monthly_retention = 12 # Keep 12 monthly backups
yearly_retention = 2 # Keep 2 yearly backups
Expiration Formula:
| Tier | Expires At |
|---|---|
| Daily | created_at + daily_retention days |
| Weekly | created_at + weekly_retention weeks |
| Monthly | created_at + (monthly_retention × 30) days |
| Yearly | created_at + (yearly_retention × 365) days |
| Permanent | NULL (never expires) |
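The formula table translates directly into a date calculation. A minimal sketch of how expires_at could be derived from a backup's retention type and the policy values; the real logic lives in the backend retention and backup services and may differ in detail:

```python
from datetime import UTC, datetime, timedelta

# One "unit" per tier; months and years use the same approximations as the table above
RETENTION_UNIT = {
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
    "monthly": timedelta(days=30),
    "yearly": timedelta(days=365),
}


def calculate_expires_at(retention_type: str, policy: dict, created_at: datetime | None = None):
    """Return the expiration timestamp, or None for permanent backups."""
    if retention_type == "permanent":
        return None
    created_at = created_at or datetime.now(UTC)
    count = policy[f"{retention_type}_retention"]  # e.g. daily_retention = 7
    return created_at + RETENTION_UNIT[retention_type] * count


defaults = {"daily_retention": 7, "weekly_retention": 4, "monthly_retention": 12, "yearly_retention": 2}
print(calculate_expires_at("daily", defaults))  # created_at + 7 days
```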
3. Automatic Promotion
The retention service automatically promotes backups to higher tiers:
Cron Job: Runs daily at 3:00 AM UTC
# From backend/tasks/worker.py
cron(
execute_retention_cleanup,
hour=3,
minute=0,
)
Promotion Rules:
- Daily → Weekly:
  - Trigger: First completed daily backup of the week
  - Condition: No existing weekly backup for current week
  - Day: Configured via weekly_backup_day (default: Sunday)
  - New expiration: now + weekly_retention weeks
- Weekly → Monthly:
  - Trigger: First completed weekly backup of the month
  - Condition: No existing monthly backup for current month
  - Day: 1st of month
  - New expiration: now + (monthly_retention × 30) days
- Monthly → Yearly:
  - Trigger: First completed monthly backup of the year
  - Condition: No existing yearly backup for current year
  - Day: January 1st
  - New expiration: now + (yearly_retention × 365) days
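As a rough illustration of the first rule above, a daily→weekly promotion check might look like the sketch below; it follows the documented conditions but is not the retention service's actual implementation:

```python
from datetime import datetime, timedelta


def should_promote_to_weekly(backup: dict, policy: dict, existing_weekly: list[dict]) -> bool:
    """True if a completed daily backup should become this week's weekly backup."""
    created: datetime = backup["created_at"]
    week_start = created - timedelta(days=created.weekday())  # Monday of that week
    week_already_covered = any(b["created_at"] >= week_start for b in existing_weekly)
    return (
        backup["status"] == "completed"
        and backup["retention_type"] == "daily"
        and created.weekday() == policy["weekly_backup_day"]  # 0=Mon ... 6=Sun (default 6)
        and not week_already_covered
    )
```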
Retention Cleanup Process
The daily retention job performs three tasks:
┌─────────────────────────────────────────────────────┐
│ 1. Mark Expired Backups │
│ - Query: WHERE expires_at < NOW() │
│ - Action: status = EXPIRED │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ 2. Delete Storage Objects │
│ - Find: status IN (EXPIRED, DELETED) │
│ - Delete from cloud storage │
│ - Update storage_config usage metrics │
│ - Clear storage keys from backup record │
└─────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────┐
│ 3. Promote Eligible Backups │
│ - Check each enabled policy │
│ - Promote daily → weekly │
│ - Promote weekly → monthly │
│ - Promote monthly → yearly │
│ - Update retention_type and expires_at │
└─────────────────────────────────────────────────────┘
Example GFS Timeline
Here's how backups evolve over time with default settings:
Day 1 (Monday):
CREATE: Backup A (DAILY, expires in 7 days)
Day 7 (Sunday):
CREATE: Backup B (WEEKLY, expires in 4 weeks)
EXPIRE: Backup A (past 7 days)
Day 30 (First Sunday of month):
CREATE: Backup C (WEEKLY, expires in 4 weeks)
PROMOTE: Backup B → MONTHLY (new expiration: 12 months)
Day 365 (January 1):
CREATE: Backup D (YEARLY, expires in 2 years)
PROMOTE: Oldest MONTHLY → YEARLY
Result after 1 year:
- 7 DAILY backups (last 7 days)
- 4 WEEKLY backups (last 4 weeks)
- 12 MONTHLY backups (last 12 months)
- 1 YEARLY backup (start of year)
Storage Efficiency
GFS rotation significantly reduces storage costs:
Without GFS (keeping all daily backups for 1 year):
- Backups: 365 daily backups
- Assuming 1 GB each = 365 GB total
With GFS (default policy):
- 7 daily (7 GB)
- 4 weekly (4 GB)
- 12 monthly (12 GB)
- 2 yearly (2 GB)
- Total: 25 GB (93% reduction)
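The comparison is easy to redo for your own average backup size and retention counts:

```python
def gfs_backup_count(daily: int = 7, weekly: int = 4, monthly: int = 12, yearly: int = 2) -> int:
    """Number of backups retained at steady state under a GFS policy."""
    return daily + weekly + monthly + yearly


avg_backup_gb = 1.0
kept_gb = gfs_backup_count() * avg_backup_gb     # 25 GB with the defaults
savings = 1 - kept_gb / (365 * avg_backup_gb)    # vs. keeping every daily backup for a year
print(kept_gb, f"{savings:.0%}")                 # 25.0 '93%'
```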
Backup Metadata
Each backup record stores comprehensive metadata for tracking and auditing.
Database Schema
-- From backend/models/backup.py
CREATE TABLE backups (
id UUID PRIMARY KEY,
environment_id UUID NOT NULL REFERENCES project_environments(id),
storage_config_id UUID REFERENCES storage_configs(id),
task_id UUID REFERENCES tasks(id),
-- Status tracking
status VARCHAR NOT NULL, -- pending, in_progress, uploading, completed, failed, expired, deleted
backup_type VARCHAR NOT NULL, -- manual, scheduled, pre_restore, pre_upgrade, pre_destroy
-- Storage paths
database_key VARCHAR(500), -- S3 key for ZIP file
filestore_key VARCHAR(500), -- NULL (included in ZIP)
manifest_key VARCHAR(500), -- NULL (included in ZIP)
-- Sizes (bytes)
database_size BIGINT DEFAULT 0,
filestore_size BIGINT DEFAULT 0,
total_size BIGINT DEFAULT 0,
compressed_size BIGINT DEFAULT 0,
-- Timing
started_at TIMESTAMP WITH TIME ZONE,
completed_at TIMESTAMP WITH TIME ZONE,
duration_seconds INTEGER,
-- Retention (GFS)
retention_type VARCHAR, -- daily, weekly, monthly, yearly, permanent
expires_at TIMESTAMP WITH TIME ZONE,
-- Verification
is_verified BOOLEAN DEFAULT FALSE,
verified_at TIMESTAMP WITH TIME ZONE,
verification_error TEXT,
-- Integrity checksums
database_checksum VARCHAR(64), -- SHA-256
filestore_checksum VARCHAR(64),
-- Error tracking
error_message TEXT,
error_details JSON,
-- Environment snapshot
environment_snapshot JSON,
-- Audit fields
create_date TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
write_date TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
created_by UUID,
write_by UUID
);
-- Indexes for performance
CREATE INDEX idx_backups_environment ON backups(environment_id);
CREATE INDEX idx_backups_status ON backups(status);
CREATE INDEX idx_backups_expires_at ON backups(expires_at);
Size Calculation
Backup sizes are tracked at multiple levels:
| Field | Description | Calculation |
|---|---|---|
| database_size | Raw PostgreSQL dump size | stat -c%s dump.sql |
| filestore_size | Tar.gz archive size | stat -c%s filestore.tar.gz |
| total_size | Sum of components | database_size + filestore_size |
| compressed_size | Final ZIP size | stat -c%s backup.zip |
Storage Usage Tracking:
# From backup_service.py
storage_config.total_size_bytes += backup.compressed_size
storage_config.object_count += 1
storage_config.last_used_at = datetime.now(UTC)
Odoo Version Tracking
The environment snapshot captures the Odoo version at backup time:
{
"project": {
"odoo_version": "18.0"
}
}
This enables:
- Version compatibility checks before restore
- Migration planning
- Audit trail for upgrades
Creator Tracking
All backups track who created them:
created_by UUID REFERENCES users(id)
Captured For:
- Manual backups: Current user from API request
- Scheduled backups: NULL (system-generated)
- Pre-restore backups: User initiating restore
- Pre-upgrade backups: User initiating upgrade
Storage Providers
Backups are stored in cloud storage. See Storage Configuration for details on supported providers:
- AWS S3
- Cloudflare R2
- Backblaze B2
- MinIO (self-hosted)
- FTP
- SFTP
Storage Key Structure
Backups use a hierarchical key structure for organization:
organizations/{org_id}/
projects/{project_id}/
environments/{env_id}/
{backup_id}.zip
Example:
organizations/550e8400-e29b-41d4-a716-446655440000/
projects/660e8400-e29b-41d4-a716-446655440000/
environments/770e8400-e29b-41d4-a716-446655440000/
880e8400-e29b-41d4-a716-446655440000.zip
Benefits:
- Easy to list all backups for an organization
- Supports multi-tenancy
- Enables per-organization storage quotas
- Simplifies migration between storage providers
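The key layout is simple to reproduce when you need to locate objects directly in a bucket; a small helper, assuming IDs are passed as strings:

```python
def backup_storage_key(org_id: str, project_id: str, env_id: str, backup_id: str) -> str:
    """Object key used for a backup ZIP in cloud storage."""
    return (
        f"organizations/{org_id}"
        f"/projects/{project_id}"
        f"/environments/{env_id}"
        f"/{backup_id}.zip"
    )


def environment_prefix(org_id: str, project_id: str, env_id: str) -> str:
    """Prefix to list every backup object belonging to one environment."""
    return f"organizations/{org_id}/projects/{project_id}/environments/{env_id}/"
```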
Backup Verification
Checksum Integrity
All backup components are verified with SHA-256 checksums:
# From backup_service.py
def _calculate_file_checksum(self, file_path: Path) -> str:
sha256 = hashlib.sha256()
with open(file_path, "rb") as f:
for chunk in iter(lambda: f.read(8192), b""):
sha256.update(chunk)
return sha256.hexdigest()
Verification Points:
- On VM: Calculate checksum after creating dump/tar.gz
- After Download: Verify checksum matches after SFTP transfer
- After Upload: Store checksum in database for future verification
Double Verification:
# Remote checksum (on VM)
exit_code, stdout, _ = ssh.execute_command(f"sha256sum {remote_temp}")
remote_checksum = stdout.split()[0]
# Local checksum (on PaaSPortal backend)
local_checksum = self._calculate_file_checksum(local_path)
# Compare
if remote_checksum != local_checksum:
raise BackupError("Checksum mismatch")
Backup Validation Fields
class Backup:
is_verified: bool = False
verified_at: datetime | None
verification_error: str | None
database_checksum: str # SHA-256
filestore_checksum: str # SHA-256
Test Restore Capability
Recommended: Periodically test-restore backups to verify:
- Backup files are intact
- Restore process works correctly
- Data integrity is maintained
- Recovery time is acceptable
See Backup Restoration for restore procedures.
Backup Encryption
In-Transit Encryption
All backup data is encrypted during transfer:
| Transfer Stage | Encryption Method |
|---|---|
| VM → PaaSPortal Backend | SFTP (SSH encryption) |
| Backend → Cloud Storage | HTTPS/TLS 1.2+ |
| Download URLs | Presigned URLs over HTTPS |
At-Rest Encryption
Encryption at rest depends on the storage provider:
| Provider | Encryption Support |
|---|---|
| AWS S3 | AES-256 server-side encryption (SSE-S3) |
| Cloudflare R2 | Automatic encryption at rest |
| Backblaze B2 | AES-256 encryption |
| MinIO | Optional SSE-C or SSE-S3 |
| FTP/SFTP | Depends on server configuration |
Configuration: Set storage_class when creating storage config to enable encryption options.
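For providers where encryption is opt-in, the upload request itself can carry the encryption setting. As a hedged illustration of what SSE-S3 means at the object level (this is boto3 against S3 generally, not the platform's upload code, and the bucket name is hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 for AES-256 server-side encryption (SSE-S3) on a single object upload.
with open("backup.zip", "rb") as body:
    s3.put_object(
        Bucket="example-backup-bucket",  # hypothetical bucket name
        Key="organizations/org/projects/proj/environments/env/backup.zip",
        Body=body,
        ServerSideEncryption="AES256",
    )
```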
Permissions
Backup operations require specific permissions from the RBAC matrix:
| Operation | Required Permission | Level |
|---|---|---|
| List backups | project.backups.list | Project |
| Create backup | project.backups.create | Project |
| View backup details | project.backups.view | Project |
| Download backup | project.backups.view | Project |
| Delete backup | project.backups.delete | Project |
| Create policy | org.backups.create | Organization |
| Update policy | org.backups.create | Organization |
| Delete policy | org.backups.delete | Organization |
Role Assignments (default):
| Role | Backup Permissions |
|---|---|
| portal_admin | All backup permissions (global) |
| org_owner | All org-level backup permissions |
| org_admin | Create, view, list backups |
| org_member | View, list backups (read-only) |
| project_admin | Create, view, list, delete backups |
| project_member | View, list backups (read-only) |
Storage Quotas
Backup storage counts toward organization quota limits.
Quota Calculation
SELECT
SUM(compressed_size) as total_backup_storage
FROM backups
WHERE environment_id IN (
SELECT id FROM project_environments
WHERE project_id IN (
SELECT id FROM projects
WHERE organization_id = :org_id
)
)
AND status NOT IN ('EXPIRED', 'DELETED');
Plan Limits
Backup storage limits by billing plan:
| Plan | Backup Storage | Additional Storage |
|---|---|---|
| Free | 5 GB | Not available |
| Starter | 50 GB | $0.10/GB/month |
| Professional | 200 GB | $0.08/GB/month |
| Business | 500 GB | $0.06/GB/month |
| Enterprise | Unlimited | Included |
Quota Exceeded Behavior:
- New manual backups are blocked
- Scheduled backups continue but send warning notifications
- Existing backups are not deleted automatically
- UI shows warning banner
Check Current Usage:
GET /api/v1/organizations/{org-id}/usage
ARQ Background Jobs
Backups are processed asynchronously using ARQ (Async Redis Queue) workers.
Job Queue Architecture
┌──────────────────┐
│ API Request │
│ POST /backups │
└────────┬─────────┘
│
↓
┌────────────────────────────────────────┐
│ 1. Create Task (status: PENDING) │
│ 2. Create Backup (status: PENDING) │
│ 3. Enqueue to ARQ │
│ 4. Return job_id to client │
└────────┬───────────────────────────────┘
│
↓
┌────────────────────────────────────────┐
│ Redis Queue │
│ queue_name: "paasportal:queue" │
└────────┬───────────────────────────────┘
│
↓
┌────────────────────────────────────────┐
│ ARQ Worker Pool │
│ max_jobs: 10 (configurable) │
│ job_timeout: 3600s (1 hour) │
└────────┬───────────────────────────────┘
│
↓
┌────────────────────────────────────────┐
│ execute_backup() task function │
│ - Updates task status via SSE │
│ - Calls BackupService.execute_backup() │
│ - Sends completion notification │
└────────────────────────────────────────┘
Task Lifecycle
# From backend/tasks/backup_tasks.py
async def execute_backup(
ctx: dict,
task_id: str,
environment_id: str,
backup_type: str = "manual",
storage_config_id: str | None = None,
include_filestore: bool = True,
user_id: str | None = None,
backup_id: str | None = None,
) -> dict[str, Any]:
# 1. Load task and backup records
task = await db.get(Task, UUID(task_id))
backup = await db.get(Backup, UUID(backup_id))
# 2. Update task status
task.status = TaskStatus.RUNNING
task.started_at = datetime.now(UTC)
await broadcast_task_status(task, previous_status)
# 3. Execute backup via service
backup = await service.execute_backup(
backup=backup,
include_filestore=include_filestore,
)
# 4. Mark task as completed
task.status = TaskStatus.COMPLETED
task.result = {"backup_id": str(backup.id)}
task.completed_at = datetime.now(UTC)
# 5. Send notification
await _send_backup_notification(db, task, env_name, success=True)
return {"success": True, "backup_id": str(backup.id)}Retry Logic
ARQ automatically retries failed jobs:
# From backend/tasks/worker.py (WorkerSettings)
max_tries = 3 # Retry failed jobs up to 3 times
Retry Backoff:
- 1st retry: Immediate
- 2nd retry: After 60 seconds
- 3rd retry: After 300 seconds (5 minutes)
Permanent Failure:
- After 3 failed attempts, job is marked as permanently failed
- Added to Dead Letter Queue (DLQ) for manual inspection
- User receives failure notification
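ARQ drives retries through max_tries together with its Retry exception, which lets a job defer its next attempt. A toy sketch of how a staged backoff like the one above could be expressed; the worker's real retry handling may differ, and the simulated failure is only there to keep the example self-contained:

```python
from arq import Retry

# Seconds to wait before the next attempt, keyed by the attempt number that just failed
BACKOFF = {1: 0, 2: 60, 3: 300}


async def flaky_backup_job(ctx: dict, environment_id: str):
    """Toy job showing how a staged backoff can be expressed with arq's Retry."""
    try:
        # ... real work (SSH, pg_dump, upload) would go here ...
        raise ConnectionError("simulated transient failure")
    except ConnectionError as exc:
        attempt = ctx["job_try"]  # 1 on the first run, 2 on the first retry, ...
        if attempt >= 3:
            raise  # out of tries; arq marks the job as permanently failed
        raise Retry(defer=BACKOFF.get(attempt, 300)) from exc
```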
Monitoring Jobs
Check Task Status (via API):
GET /api/v1/tasks/{task-id}
Real-Time Updates (via SSE):
const eventSource = new EventSource('/api/v1/events/stream');
eventSource.addEventListener('task.status', (event) => {
const data = JSON.parse(event.data);
console.log(`Task ${data.task_id}: ${data.status}`);
});
Scheduled Backup Cron Jobs
execute_scheduled_backups
Schedule: Every hour at :00
cron(
execute_scheduled_backups,
minute=0,
)
Function (from backend/tasks/backup_tasks.py):
async def execute_scheduled_backups(ctx: dict) -> dict[str, Any]:
now = datetime.now(UTC)
# Find policies due for backup
result = await db.execute(
select(BackupPolicy).where(
BackupPolicy.is_enabled == True,
BackupPolicy.next_backup_at <= now,
)
)
policies = list(result.scalars().all())
for policy in policies:
# Create scheduled backup
backup = await service.create_backup(
environment_id=policy.environment_id,
backup_type=BackupType.SCHEDULED,
storage_config_id=policy.storage_config_id,
include_filestore=True,
)
# Update policy
policy.last_backup_at = now
policy.last_backup_status = backup.status.value
policy.next_backup_at = _calculate_next_backup(policy)
return {"triggered": triggered_count, "errors": errors}execute_retention_cleanup
Schedule: Daily at 3:00 AM UTC
cron(
execute_retention_cleanup,
hour=3,
minute=0,
)
Actions:
- Mark expired backups (expires_at < NOW())
- Delete storage objects for expired/deleted backups
- Promote eligible backups to higher GFS tiers
- Update storage usage metrics
Return Value:
{
"expired": 12,
"deleted_storage": 11,
"promoted": 3,
"errors": []
}
API Reference
Create Backup
POST /api/v1/backups/environments/{environment_id}/backups
Headers:
Authorization: Bearer {token}
Content-Type: application/json
Request Body:
{
"backup_type": "manual",
"storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
"include_filestore": true,
"retention_type": "daily"
}
Response (201 Created):
{
"id": "660e8400-e29b-41d4-a716-446655440000",
"environment_id": "770e8400-e29b-41d4-a716-446655440000",
"storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
"task_id": "880e8400-e29b-41d4-a716-446655440000",
"status": "pending",
"backup_type": "manual",
"database_size": 0,
"filestore_size": 0,
"total_size": 0,
"compressed_size": 0,
"retention_type": "daily",
"expires_at": null,
"is_verified": false,
"created_at": "2024-12-11T10:30:00Z"
}
List Backups
GET /api/v1/backups/environments/{environment_id}/backups?status=completed&page=1&page_size=20
Query Parameters:
- status (optional): Filter by status (pending, in_progress, completed, failed, expired)
- backup_type (optional): Filter by type (manual, scheduled, pre_restore, etc.)
- page (default: 1): Page number
- page_size (default: 20, max: 100): Items per page
Response (200 OK):
{
"items": [
{
"id": "660e8400-e29b-41d4-a716-446655440000",
"environment_id": "770e8400-e29b-41d4-a716-446655440000",
"status": "completed",
"backup_type": "manual",
"database_size": 52428800,
"filestore_size": 104857600,
"total_size": 157286400,
"compressed_size": 147286400,
"started_at": "2024-12-11T10:30:00Z",
"completed_at": "2024-12-11T10:35:23Z",
"duration_seconds": 323,
"retention_type": "daily",
"expires_at": "2024-12-18T10:30:00Z",
"is_verified": true,
"created_at": "2024-12-11T10:30:00Z"
}
],
"total": 45,
"page": 1,
"page_size": 20,
"pages": 3
}
Get Backup Details
GET /api/v1/backups/{backup_id}
Response (200 OK):
{
"id": "660e8400-e29b-41d4-a716-446655440000",
"environment_id": "770e8400-e29b-41d4-a716-446655440000",
"storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
"task_id": "880e8400-e29b-41d4-a716-446655440000",
"status": "completed",
"backup_type": "manual",
"database_size": 52428800,
"filestore_size": 104857600,
"total_size": 157286400,
"compressed_size": 147286400,
"started_at": "2024-12-11T10:30:00Z",
"completed_at": "2024-12-11T10:35:23Z",
"duration_seconds": 323,
"retention_type": "daily",
"expires_at": "2024-12-18T10:30:00Z",
"is_verified": true,
"verified_at": "2024-12-11T10:35:30Z",
"database_checksum": "a1b2c3d4e5f6...",
"filestore_checksum": "f6e5d4c3b2a1...",
"created_at": "2024-12-11T10:30:00Z"
}
Delete Backup
DELETE /api/v1/backups/{backup_id}
Response (204 No Content)
Note: This marks the backup as DELETED and schedules storage cleanup. The record remains for audit purposes.
Get Download URLs
GET /api/v1/backups/{backup_id}/download?expires_in=3600
Query Parameters:
- expires_in (default: 3600, min: 300, max: 86400): URL validity in seconds
Response (200 OK):
{
"backup_id": "660e8400-e29b-41d4-a716-446655440000",
"database_url": "https://s3.amazonaws.com/bucket/path/backup.zip?X-Amz-Signature=...",
"filestore_url": null,
"manifest_url": null,
"expires_in": 3600,
"expires_at": "2024-12-11T11:30:00Z"
}
Note: For new backups, database_url contains the single ZIP file. filestore_url and manifest_url are null (included in ZIP).
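The presigned database_url can be fetched with any HTTP client before it expires. A short sketch with requests, assuming the token and backup ID are at hand; the chunk size and filename are arbitrary:

```python
import os

import requests

BASE_URL = "https://oec.sh/api/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['OEC_TOKEN']}"}


def download_backup(backup_id: str, dest: str = "backup.zip") -> str:
    """Fetch a presigned URL for the backup ZIP and stream it to disk."""
    urls = requests.get(
        f"{BASE_URL}/backups/{backup_id}/download",
        headers=HEADERS,
        params={"expires_in": 3600},
        timeout=30,
    ).json()
    with requests.get(urls["database_url"], stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1024 * 1024):
                f.write(chunk)
    return dest
```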
Create Backup Policy
POST /api/v1/backups/environments/{environment_id}/policy
Request Body:
{
"is_enabled": true,
"schedule_cron": "0 2 * * *",
"timezone": "UTC",
"daily_retention": 7,
"weekly_retention": 4,
"monthly_retention": 12,
"yearly_retention": 2,
"weekly_backup_day": 6,
"storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
"notify_on_success": false,
"notify_on_failure": true
}
Response (201 Created):
{
"id": "990e8400-e29b-41d4-a716-446655440000",
"environment_id": "770e8400-e29b-41d4-a716-446655440000",
"is_enabled": true,
"schedule_cron": "0 2 * * *",
"timezone": "UTC",
"daily_retention": 7,
"weekly_retention": 4,
"monthly_retention": 12,
"yearly_retention": 2,
"weekly_backup_day": 6,
"storage_config_id": "550e8400-e29b-41d4-a716-446655440000",
"notify_on_success": false,
"notify_on_failure": true,
"last_backup_at": null,
"last_backup_status": null,
"next_backup_at": "2024-12-12T02:00:00Z",
"created_at": "2024-12-11T10:30:00Z",
"updated_at": "2024-12-11T10:30:00Z"
}
Update Backup Policy
PATCH /api/v1/backups/environments/{environment_id}/policy
Request Body (partial update):
{
"is_enabled": false,
"daily_retention": 14
}
Response (200 OK): Updated policy object
Delete Backup Policy
DELETE /api/v1/backups/environments/{environment_id}/policy
Response (204 No Content)
Note: Deleting a policy stops automated backups but does NOT delete existing backups.
Get Backup Summary
GET /api/v1/backups/environments/{environment_id}/summary
Response (200 OK):
{
"environment_id": "770e8400-e29b-41d4-a716-446655440000",
"total_backups": 45,
"completed_backups": 42,
"failed_backups": 2,
"total_size_bytes": 6442450944,
"oldest_backup_at": "2024-11-01T02:00:00Z",
"newest_backup_at": "2024-12-11T02:00:00Z",
"last_successful_backup_at": "2024-12-11T02:00:00Z",
"policy_enabled": true,
"next_scheduled_backup": "2024-12-12T02:00:00Z"
}
Best Practices
Backup Frequency
Production Environments:
- Minimum: Daily backups at 2 AM
- Recommended: Every 6 hours for active databases
- High-traffic: Hourly backups during business hours
Staging Environments:
- Daily backups sufficient
- Before major deployments
Development Environments:
- Weekly backups or manual only
- Before destructive testing
Retention Configuration
Conservative Approach (longer retention):
{
"daily_retention": 14,
"weekly_retention": 8,
"monthly_retention": 24,
"yearly_retention": 5
}
Aggressive Approach (minimize storage costs):
{
"daily_retention": 3,
"weekly_retention": 2,
"monthly_retention": 6,
"yearly_retention": 1
}
Compliance-Focused (regulatory requirements):
{
"daily_retention": 7,
"weekly_retention": 4,
"monthly_retention": 12,
"yearly_retention": 7,
"retention_type": "permanent" // For yearly backups
}
Multi-Provider Redundancy
For critical data, configure backups to multiple providers:
- Primary Provider: Fastest/cheapest (e.g., Cloudflare R2)
- Secondary Provider: Different region (e.g., AWS S3 us-west-2)
- Tertiary Provider: Different vendor (e.g., Backblaze B2)
Implementation:
- Create multiple storage configs
- Configure policies to use different providers
- Manually trigger backups to secondary providers for critical snapshots
Pre-Deployment Backups
Always create a manual backup before:
- Deploying new code
- Running database migrations
- Changing environment configuration
- Upgrading Odoo version
- Installing/updating addons
Workflow:
# 1. Create pre-deployment backup
POST /api/v1/backups/environments/{id}/backups
{"backup_type": "manual", "retention_type": "weekly"}
# 2. Wait for completion (monitor via SSE or polling)
# 3. Proceed with deployment
POST /api/v1/environments/{id}/deploy
Testing Restores
Monthly Test Restore:
- Select a random backup from last month
- Restore to a test environment
- Verify data integrity
- Document recovery time
Disaster Recovery Drill (quarterly):
- Simulate complete environment loss
- Restore from backup to new server
- Measure total recovery time
- Identify bottlenecks
Storage Lifecycle Management
Archive Old Backups:
- Move yearly backups to cheaper storage tiers (Glacier, Archive)
- Use the storage_class field in storage config
- Implement custom retention rules for compliance
Monitor Storage Costs:
GET /api/v1/organizations/{org-id}/usage
Track total_backup_storage and estimate monthly costs.
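A small helper can turn that endpoint into a rough overage estimate against the plan table above. The response field name (total_backup_storage, assumed here to be bytes) and the hard-coded rates are assumptions to adjust for your account:

```python
import os

import requests

# Overage rates from the plan table ($/GB/month); Free has no overage, Enterprise is unlimited
OVERAGE_RATE = {"starter": 0.10, "professional": 0.08, "business": 0.06}
INCLUDED_GB = {"free": 5, "starter": 50, "professional": 200, "business": 500}


def estimate_backup_overage(org_id: str, plan: str) -> float:
    """Rough monthly overage cost for backup storage beyond the plan allowance."""
    usage = requests.get(
        f"https://oec.sh/api/v1/organizations/{org_id}/usage",
        headers={"Authorization": f"Bearer {os.environ['OEC_TOKEN']}"},
        timeout=30,
    ).json()
    used_gb = usage["total_backup_storage"] / (1024 ** 3)  # field assumed to be bytes
    overage_gb = max(0.0, used_gb - INCLUDED_GB[plan])
    return overage_gb * OVERAGE_RATE.get(plan, 0.0)
```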
Notification Strategy
Production:
{
"notify_on_success": true, // Confirm backups are working
"notify_on_failure": true
}
Development:
{
"notify_on_success": false, // Reduce noise
"notify_on_failure": true // Only alert on failures
}
Troubleshooting
Backup Stuck in "Pending" Status
Symptoms:
- Backup status remains pending for > 5 minutes
- No progress updates via SSE
Causes:
- ARQ worker not running
- Redis connection issues
- Worker queue full (max_jobs reached)
Solutions:
- Check worker status:
docker ps | grep worker
- Inspect ARQ queue:
redis-cli -h redis -p 6379
> LLEN paasportal:queue
- Restart worker:
docker compose restart worker
Backup Fails with "pg_dump failed"
Error Message:
pg_dump failed: could not connect to database
Causes:
- PostgreSQL container not running
- Database name mismatch
- Network connectivity issues
Solutions:
- Verify PostgreSQL container:
docker ps | grep {env_id}_db
- Test database connection:
docker exec {env_id}_db psql -U odoo -d {db_name} -c "SELECT 1"
- Check environment status:
GET /api/v1/environments/{id}
Filestore Backup Empty (0 bytes)
Symptoms:
- filestore_size = 0 in backup record
- Warning: "Filestore archive is empty"
Causes:
- No files uploaded to Odoo
- Filestore directory not found
- Permissions issue in container
Solutions:
- Check if filestore exists:
docker exec {env_id}_odoo ls -la /var/lib/odoo/filestore/{db_name}
- Verify filestore has files:
docker exec {env_id}_odoo du -sh /var/lib/odoo/filestore/{db_name}
- If empty, this is normal for new environments
Upload to Storage Fails
Error Message:
Failed to upload backup: Connection timeout
Causes:
- Invalid storage credentials
- Network firewall blocking connection
- Storage bucket doesn't exist
- Insufficient permissions
Solutions:
- Test storage connection:
POST /api/v1/backups/storage-configs/{id}/test
- Verify credentials:
GET /api/v1/backups/storage-configs/{id}
- Check storage provider status (S3, R2, etc.)
- Review provider-specific firewall rules
Checksum Mismatch
Error Message:
Checksum mismatch: remote=abc123, local=def456
Causes:
- File corruption during transfer
- Network issues during SFTP download
- Disk write errors
Solutions:
- Retry backup (automatic via ARQ)
- Verify VM disk health:
ssh root@{vm_ip} "df -h && iostat"
- Check PaaSPortal backend disk:
df -h /tmp
- Review network stability between VM and backend
Large Database Backup Timeout
Symptoms:
- Backup fails after 30 minutes
- Error: "Job timeout exceeded"
Causes:
- Database > 10 GB
- Slow disk I/O on VM
- Default timeout too low
Solutions:
- Increase job timeout in worker config:
# backend/tasks/worker.py
job_timeout = 7200 # 2 hours
- Optimize PostgreSQL for faster dumps:
-- Reduce work_mem during dump
SET work_mem = '256MB';
- Use faster compression level:
# Modify pg_dump command to use -Z6 instead of -Z9
- Consider splitting large databases
Scheduled Backups Not Running
Symptoms:
- next_backup_at is in the past
- last_backup_at is stale
- No new backups created
Causes:
- Policy is_enabled = false
- Cron job not running
- Worker not processing cron jobs
- Invalid cron expression
Solutions:
- Check policy status:
GET /api/v1/backups/environments/{id}/policy
- Verify cron job is registered:
# Check worker logs
docker logs worker 2>&1 | grep execute_scheduled_backups
- Validate cron expression:
from croniter import croniter
croniter.is_valid("0 2 * * *")  # Should return True
- Manually trigger scheduled backup check:
# Via ARQ worker console
await execute_scheduled_backups({})
Retention Cleanup Not Deleting Old Backups
Symptoms:
- Expired backups still in COMPLETED status
- Storage usage not decreasing
- expires_at in the past but backup still exists
Causes:
- Retention cleanup job not running
- Database transaction issues
- Storage provider errors
Solutions:
- Check last retention cleanup run:
# Review worker logs
docker logs worker 2>&1 | grep execute_retention_cleanup
- Manually trigger cleanup:
# Via API or worker console
from services.retention_service import RetentionService
service = RetentionService(db)
summary = await service.run_retention_cleanup()
- Verify cron job schedule:
# Should run daily at 3 AM UTC
cron(execute_retention_cleanup, hour=3, minute=0)
- Check for storage provider errors in logs
Additional Resources
- Backup Storage Configuration - Configure cloud storage providers
- Backup Restoration - Restore backups to environments
- Clone with Sanitization - Clone environments with backup integration
- ARQ Worker Configuration - Background job system details
- Permission Matrix - RBAC permission details
Summary
OEC.SH provides enterprise-grade backup capabilities for Odoo environments:
- Automated Scheduling: Set it and forget it with cron-based policies
- GFS Retention: Smart storage management with daily/weekly/monthly/yearly tiers
- Multi-Cloud: Store backups across 6 different providers
- Integrity Verified: SHA-256 checksums ensure data integrity
- Fast Restore: Single ZIP format enables quick recovery
- Audit Trail: Complete metadata and creator tracking
- Scalable: ARQ workers handle concurrent backup jobs efficiently
For questions or support, contact your organization administrator or submit a ticket via the dashboard.