Create Manual Backup
Feature ID: BAK-001
Category: Backups & Restore
Required Permission: project.backups.create
API Endpoint: POST /api/v1/backups/environments/{environment_id}/backups
Overview
Create an on-demand backup of your Odoo environment including PostgreSQL database and filestore. Backups are encrypted, verified, and stored in your configured cloud storage with automatic retention management using GFS (Grandfather-Father-Son) rotation.
Use this feature when you need to:
- Create a safety backup before making risky changes
- Manually backup critical data outside of automated schedules
- Create a backup before major Odoo upgrades
- Archive environment state for compliance or audit purposes
- Backup before restoring another backup (pre-restore safety)
- Create backups for disaster recovery testing
What's included in backups:
- PostgreSQL Database - Full database dump (SQL format)
- Odoo Filestore (optional) - Attachments, documents, images
- Manifest Metadata - Environment snapshot (Odoo version, modules, resources)
Backup format: Single ZIP archive containing:
- `dump.sql` - PostgreSQL plain-text dump
- `filestore.tar.gz` - Compressed filestore (if included)
- `manifest.json` - Backup metadata and checksums
Prerequisites
Required Setup
- Storage Configuration: At least one storage provider configured (S3, R2, B2, MinIO, FTP, or SFTP)
- Environment Deployed: Target environment must be running with active containers
- Permission: User has the `project.backups.create` permission
Storage Providers Supported
OEC.SH supports six storage providers:
| Provider | Best For | Pros | Cons |
|---|---|---|---|
| AWS S3 | Enterprise, multi-region | Industry standard, highly reliable | Egress fees |
| Cloudflare R2 | Cost optimization | Zero egress fees, S3-compatible | Limited regions |
| Backblaze B2 | Archival storage | Lowest cost, simple pricing | Slower retrieval |
| MinIO | Self-hosted, GDPR | Full control, on-premises | Self-managed |
| SFTP | Legacy systems | Secure, widely supported | Manual management |
| FTP | Legacy integration | Simple, compatible | Less secure |
Recommendation: Use Cloudflare R2 for best cost/performance balance, or AWS S3 for enterprise requirements.
How to Create a Manual Backup
Method 1: Via UI (Recommended)
Step 1: Navigate to Backup Manager
- Go to Dashboard → Environments
- Click environment name
- Click Backups tab in left sidebar
- Click "Create Backup" button (top right)
Step 2: Configure Backup Settings
The CreateBackupDialog opens with four configuration sections:
Storage Location
- Dropdown: Select storage configuration
- Default: Auto-selects organization's default storage
- Provider Type: Shows (S3), (R2), (B2), (MinIO), (FTP), or (SFTP)
Example:
Storage Location
┌────────────────────────────────────────┐
│ Primary Backup Storage (R2) [Default] │
│ Archive Storage (B2) │
│ On-Premises Storage (MinIO) │
└────────────────────────────────────────┘

Backup Type
- Radio buttons with descriptions:
| Option | Description | Use Case |
|---|---|---|
| Manual Backup | One-time user-initiated backup | General purpose |
| Pre-Restore Backup | Safety backup before restore | Automatic safety net |
| Pre-Upgrade Backup | Backup before Odoo upgrade | Major version changes |
Default: Manual Backup
What to Include
- ☑ Database (always included, cannot be disabled)
- ☑ Filestore (optional, checkbox)
Filestore Details:
- Size: Typically 100 MB - 10 GB+
- Contains: Attachments, documents, images, reports
- Location: `/var/lib/odoo/filestore/{db_name}/`
- Recommendation: Always include filestore for complete backups
Retention Period
- Dropdown: Select how long to keep the backup
| Option | Duration | Use Case |
|---|---|---|
| Auto (Policy Default) | Based on backup policy | Follows organization rules |
| Daily | 7 days | Short-term recovery |
| Weekly | 4 weeks | Medium-term snapshots |
| Monthly | 12 months | Long-term archive |
| Yearly | 2 years | Compliance, audit |
| Permanent | Never expires | Critical milestones |
How GFS Retention Works:
- System automatically promotes backups through retention tiers
- Daily → Weekly (if created on Sunday)
- Weekly → Monthly (if created on 1st of month)
- Monthly → Yearly (if created on Jan 1st)
- See Retention Management section for details
Step 3: Confirm and Create
- Review settings:
  - Storage location: Primary Backup Storage (R2)
  - Type: Manual Backup
  - Include: Database + Filestore
  - Retention: Daily (7 days)
- Info banner displays:
  ℹ️ Backups are encrypted at rest and verified after upload. You can download or restore from any completed backup.
- Click "Create Backup" button
- Dialog closes and a confirmation message appears:
  ✓ Backup queued successfully. Backup will appear in the list once started.
Step 4: Monitor Progress
Real-Time Progress Updates (via SSE):
Backup Progress: 45%
Status: Backing up filestore...
Started: 2024-12-11 10:30:00
Elapsed: 47 seconds
Steps:
✓ Initialize (10%)
✓ Backup database (50%)
⟳ Backup filestore (45%)
○ Create manifest
○ Package backup
○ Upload to storage
○ Verify backup

Progress Stages:
| Stage | Progress | Duration (typical) |
|---|---|---|
| Initialize | 10% | 5 seconds |
| Database dump | 10-50% | 30-180 seconds |
| Filestore archive | 50-70% | 30-300 seconds |
| Create manifest | 70% | 2 seconds |
| Package ZIP | 70-80% | 10-60 seconds |
| Upload to storage | 80-95% | 60-600 seconds |
| Verify backup | 95-100% | 5-10 seconds |
Step 5: Verify Completion
Once complete, backup appears in list with:
- ✓ Status: Completed
- ✓ Verified: Yes (checkmark)
- Size: 1.2 GB (compressed)
- Created: 2024-12-11 10:30:00
- Expires: 2024-12-18 (7 days from creation)
- Actions: Download | Restore | Delete
Verification Badge:
✓ Verified
Database: SHA-256 checksum matched
Filestore: SHA-256 checksum matched

Method 2: Via API
Create Backup Request
POST /api/v1/backups/environments/{environment_id}/backups
Content-Type: application/json
Authorization: Bearer YOUR_TOKEN
{
"backup_type": "manual",
"storage_config_id": "storage-uuid",
"include_filestore": true,
"retention_type": "daily"
}

Request Body Fields:
- `backup_type` (string, optional) - Type: `manual` | `pre_restore` | `pre_upgrade` (default: `manual`)
- `storage_config_id` (UUID, optional) - Storage to use (defaults to organization default)
- `include_filestore` (boolean, optional) - Include filestore (default: `true`)
- `retention_type` (string, optional) - Retention: `daily` | `weekly` | `monthly` | `yearly` | `permanent` (default: auto-determined)
Response (201 Created)
{
"id": "backup-uuid",
"environment_id": "env-uuid",
"storage_config_id": "storage-uuid",
"task_id": "task-uuid",
"status": "pending",
"backup_type": "manual",
"backup_key": "backups/org-abc/env-123/backup-xyz.zip",
"database_size": 0,
"filestore_size": 0,
"total_size": 0,
"compressed_size": 0,
"started_at": null,
"completed_at": null,
"duration_seconds": null,
"retention_type": "daily",
"expires_at": "2024-12-18T10:30:00Z",
"is_verified": false,
"verified_at": null,
"database_checksum": null,
"filestore_checksum": null,
"environment_snapshot": {
"odoo_version": "18.0",
"odoo_image": "odoo:18.0",
"cpu_cores": 2,
"ram_mb": 4096,
"disk_gb": 20
},
"error_message": null,
"created_at": "2024-12-11T10:30:00Z",
"created_by": "user-uuid"
}

Key Response Fields:
- `status`: `pending` → `in_progress` → `uploading` → `completed` (or `failed`)
- `task_id`: Background task ID for monitoring progress
- `backup_key`: Storage path (format: `{prefix}/backups/{org_id}/{env_id}/{backup_id}.zip`)
- `expires_at`: Calculated based on `retention_type` and policy
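Putting the request and response together, the endpoint can be called with Python's standard library alone. This is a minimal sketch, assuming a placeholder host (`API_BASE`) and helper names that are illustrative, not part of the documented API:

```python
import json
import urllib.request

API_BASE = "https://oec.example.com"  # placeholder host, not the real API endpoint


def build_backup_request(storage_config_id=None, include_filestore=True,
                         retention_type="daily", backup_type="manual"):
    """Assemble the JSON body; omitted fields fall back to the API defaults."""
    body = {
        "backup_type": backup_type,
        "include_filestore": include_filestore,
        "retention_type": retention_type,
    }
    if storage_config_id:
        body["storage_config_id"] = storage_config_id  # else the org default is used
    return body


def create_backup(token, environment_id, **kwargs):
    """POST /api/v1/backups/environments/{environment_id}/backups."""
    req = urllib.request.Request(
        f"{API_BASE}/api/v1/backups/environments/{environment_id}/backups",
        data=json.dumps(build_backup_request(**kwargs)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # the 201 Created payload shown above
```

The returned payload contains the `task_id` used for progress monitoring below.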
Monitor Backup Progress
GET /api/v1/tasks/{task_id}
Response:
{
"id": "task-uuid",
"task_type": "backup",
"status": "running",
"progress": 45,
"message": "Backing up filestore...",
"started_at": "2024-12-11T10:30:05Z",
"environment_id": "env-uuid"
}

Task Status Values:
- `pending` - Queued, not yet started
- `queued` - In ARQ queue
- `running` - Currently executing
- `completed` - Finished successfully
- `failed` - Error occurred
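For scripts that cannot consume the SSE stream, the task endpoint above can be polled until a terminal state is reached. A sketch, assuming the same placeholder `API_BASE` host (the `fetch` hook exists only to make the loop testable):

```python
import json
import time
import urllib.request

API_BASE = "https://oec.example.com"  # placeholder host
TERMINAL_STATES = {"completed", "failed"}


def wait_for_task(token, task_id, poll_interval=5, timeout=1800, fetch=None):
    """Poll GET /api/v1/tasks/{task_id} until the task reaches a terminal state."""
    def default_fetch():
        req = urllib.request.Request(
            f"{API_BASE}/api/v1/tasks/{task_id}",
            headers={"Authorization": f"Bearer {token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    fetch = fetch or default_fetch
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch()
        print(f"{task['progress']:3d}%  {task['message']}")
        if task["status"] in TERMINAL_STATES:
            return task
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

The SSE stream remains the better option for real-time UIs; polling is a fallback for simple automation.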
Method 3: Automated via Backup Policy
Configure automatic backups on a schedule:
- Navigate to Backups tab
- Click "Backup Policy" sub-tab
- Configure schedule:
  - Enable automatic backups
  - Set time: `02:00` (2 AM daily)
  - Select retention periods:
    - Daily: 7 days
    - Weekly: 4 weeks
    - Monthly: 12 months
    - Yearly: 2 years
  - Weekly backup day: Sunday
  - Monthly backup day: 1st of month
- Save policy
Result: Backups created automatically every day at 2 AM, with retention automatically managed via GFS rotation.
Understanding Backup Process
Step-by-Step Breakdown
Step 1: Initialize (Progress: 10%)
What happens:
- Create temporary directory on backend server
- Verify environment exists and is accessible
- Get container names: `{env_id}_odoo`, `{env_id}_db`
- Create backup record in database with status `PENDING`
- Create ARQ background task
Time: ~5 seconds
Step 2: Database Dump (Progress: 20-50%)
What happens:
- Execute `pg_dump` inside the PostgreSQL container
- Export as plain-text SQL (not binary custom format)
- Write to temporary file: `/tmp/dump-{uuid}.sql`
- Calculate SHA-256 checksum
Command executed:
docker exec {env_id}_db pg_dump -U odoo -d {db_name} > /tmp/dump.sql

Size: Typically 10 MB - 500 MB (uncompressed)
Time: 30-180 seconds (depends on database size)
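The checksum step can be reproduced with Python's `hashlib`, streaming the dump in chunks so large files are never loaded into memory at once. This is a sketch of the technique, not the backend's actual implementation:

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MB chunks and return a prefixed digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return "sha256:" + digest.hexdigest()
```

The same routine applies to the filestore archive in the next step.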
Example Output (dump.sql):
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
...
CREATE TABLE res_users (
id integer NOT NULL,
login character varying(64) NOT NULL,
password character varying,
...
);
INSERT INTO res_users VALUES (1, 'admin', '$pbkdf2-sha512$...');
...

Step 3: Filestore Archive (Progress: 50-70%, if enabled)
What happens:
- Tar+gzip the Odoo filestore directory
- Source path: `/var/lib/odoo/filestore/{db_name}/`
- Write to: `/tmp/filestore-{uuid}.tar.gz`
- Calculate SHA-256 checksum
Command executed:
docker exec {env_id}_odoo tar -czf /tmp/filestore.tar.gz \
  -C /var/lib/odoo/filestore/{db_name}/ .

Size: Typically 100 MB - 10 GB+ (compressed)
Time: 30-300 seconds (depends on filestore size and compression)
Filestore Contents:
filestore/
├── res.users/
│ ├── 1/ (admin user attachments)
│ └── 2/
├── ir.attachment/
│ ├── 123/document.pdf
│ ├── 124/image.png
│ └── 125/report.xlsx
└── mail.message/
    └── (email attachments)

Step 4: Create Manifest (Progress: 70%)
What happens:
- Generate `manifest.json` with backup metadata
- Include environment snapshot (Odoo version, modules, resources)
- Record checksums for verification
Time: ~2 seconds
manifest.json Example:
{
"version": "1.0",
"backup_id": "backup-uuid",
"environment_id": "env-uuid",
"backup_type": "manual",
"created_at": "2024-12-11T10:30:00Z",
"database": {
"name": "odoo-env-123",
"key": "backups/org-abc/backup-xyz.zip",
"size": 52428800,
"checksum": "sha256:abc123...",
"format": "pg_dump_plain"
},
"filestore": {
"key": "backups/org-abc/backup-xyz.zip",
"size": 1073741824,
"checksum": "sha256:def456...",
"format": "tar.gz"
},
"environment_snapshot": {
"odoo_version": "18.0",
"odoo_image": "odoo:18.0",
"modules": ["base", "sale", "crm", "purchase"],
"cpu_cores": 2,
"ram_mb": 4096,
"disk_gb": 20,
"db_name": "odoo-env-123"
},
"retention": {
"type": "daily",
"expires_at": "2024-12-18T10:30:00Z"
}
}

Step 5: Package ZIP (Progress: 70-80%)
What happens:
- Combine 3 files into a single ZIP archive
- Files: `dump.sql`, `filestore.tar.gz`, `manifest.json`
- Compression: ZIP deflate algorithm
- Output: `/tmp/backup-{uuid}.zip`
Time: 10-60 seconds (depends on total size)
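The packaging step maps directly onto Python's `zipfile` module. A sketch of the technique (storing the already-gzipped filestore archive without recompression is an assumption about sensible behavior, not documented platform behavior):

```python
import json
import zipfile


def package_backup(zip_path, dump_path, manifest, filestore_path=None):
    """Bundle dump.sql, optional filestore.tar.gz, and manifest.json into one ZIP."""
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(dump_path, arcname="dump.sql")
        if filestore_path:
            # The tar.gz is already compressed; deflating it again wastes CPU
            zf.write(filestore_path, arcname="filestore.tar.gz",
                     compress_type=zipfile.ZIP_STORED)
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
```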
ZIP Structure:
backup-abc123.zip (1.2 GB compressed)
├── dump.sql (50 MB)
├── filestore.tar.gz (1.1 GB)
└── manifest.json (5 KB)

Step 6: Upload to Storage (Progress: 80-95%)
What happens:
- Connect to configured storage provider
- Upload ZIP to cloud storage
- Storage key format: `{path_prefix}/backups/{org_id}/{env_id}/{backup_id}.zip`
- Track upload progress (if the provider supports it)
- Update backup status to `UPLOADING`
Time: 60-600 seconds (depends on size and network speed)
Example Storage Keys:
- S3: `backups/org-abc/env-123/backup-xyz.zip`
- R2: `production-backups/org-abc/env-123/backup-xyz.zip`
- B2: `b2://my-bucket/backups/org-abc/env-123/backup-xyz.zip`
- MinIO: `http://minio.local:9000/backups/org-abc/env-123/backup-xyz.zip`
- SFTP: `/backups/org-abc/env-123/backup-xyz.zip`
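The keys above all follow the documented `{path_prefix}/backups/{org_id}/{env_id}/{backup_id}.zip` pattern. A small helper illustrating the construction (the function name is hypothetical, not the platform's own code):

```python
def backup_key(path_prefix, org_id, env_id, backup_id):
    """Build the storage object key for a backup ZIP from its documented parts."""
    base = f"backups/{org_id}/{env_id}/{backup_id}.zip"
    prefix = path_prefix.strip("/")  # tolerate leading/trailing slashes in config
    return f"{prefix}/{base}" if prefix else base
```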
Upload Progress Example:
Uploading to storage... (85%)
Uploaded: 1.0 GB / 1.2 GB
Speed: 15 MB/s
ETA: 13 seconds

Step 7: Verify Backup (Progress: 95-100%)
What happens:
- Verify upload succeeded
- Compare local checksum with uploaded file metadata
- Mark backup as verified (`is_verified = true`)
- Update backup status to `COMPLETED`
- Record `verified_at` timestamp
Time: 5-10 seconds
Verification Checks:
- ✓ File exists in storage
- ✓ File size matches expected size
- ✓ SHA-256 checksum matches (if supported by provider)
- ✓ Metadata matches manifest
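The checks above can be sketched as a pure function (names are illustrative; per the list, the checksum comparison is skipped when the provider exposes none):

```python
def verify_backup(local_checksum, remote_size, expected_size, remote_checksum=None):
    """Return (ok, reason) for the post-upload verification checks."""
    if remote_size != expected_size:
        return False, f"size mismatch: {remote_size} != {expected_size}"
    if remote_checksum is not None and remote_checksum != local_checksum:
        # Mirrors the documented error message for a failed verification
        return False, "Verification failed: checksum mismatch"
    return True, "verified"
```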
Verification Failed:
- If any check fails, the backup is marked as `FAILED`
- Error message: "Verification failed: checksum mismatch"
- Upload is retried (up to 3 attempts)
Step 8: Cleanup (Progress: 100%)
What happens:
- Delete temporary files from backend server
- Update backup record with final sizes
- Calculate total duration
- Broadcast SSE event: `task.completed`
Time: ~2 seconds
Final Backup Record:
{
"status": "completed",
"database_size": 52428800,
"filestore_size": 1073741824,
"total_size": 1126170624,
"compressed_size": 1258291200,
"started_at": "2024-12-11T10:30:05Z",
"completed_at": "2024-12-11T10:37:23Z",
"duration_seconds": 438,
"is_verified": true,
"verified_at": "2024-12-11T10:37:23Z",
"database_checksum": "sha256:abc123...",
"filestore_checksum": "sha256:def456..."
}

GFS Retention Management
What is GFS?
GFS (Grandfather-Father-Son) is a backup rotation scheme that maintains:
- Recent backups (daily) for quick recovery
- Mid-term backups (weekly) for medium-term restore
- Long-term backups (monthly/yearly) for compliance and audit
Benefits:
- Automatic cleanup of old backups
- Balanced storage usage
- Compliance-friendly (long-term retention)
- Disaster recovery coverage (multiple restore points)
Retention Tiers
| Tier | Default Retention | Typical Use |
|---|---|---|
| Daily | 7 days | Short-term recovery (yesterday, last week) |
| Weekly | 4 weeks (28 days) | Medium-term snapshots (last month) |
| Monthly | 12 months (1 year) | Long-term archive (last quarter) |
| Yearly | 2 years | Compliance, audit, legal requirements |
| Permanent | Never expires | Critical milestones, regulatory archives |
Customizable: Organization admins can configure retention periods per tier.
Automatic Promotion
Backups are automatically promoted through tiers based on creation time:
Promotion Logic:
def determine_retention_type(date):
    # First day of year → YEARLY
    if date.month == 1 and date.day == 1:
        return "yearly"
    # First day of month → MONTHLY
    if date.day == 1:
        return "monthly"
    # Sunday (configurable) → WEEKLY
    if date.weekday() == 6:  # Sunday
        return "weekly"
    # All other days → DAILY
    return "daily"

Example Timeline:
Dec 25 (Tue) → Daily backup (expires Jan 1)
Dec 29 (Sun) → Weekly backup (expires Jan 26)
Jan 1 (Thu) → Yearly backup (expires Jan 1, 2027)
Jan 2 (Fri) → Daily backup (expires Jan 9)

Expiration Calculation
expires_at = created_at + retention_period
Examples:
- Daily created on Dec 11 → Expires Dec 18 (7 days)
- Weekly created on Dec 8 → Expires Jan 5 (4 weeks)
- Monthly created on Dec 1 → Expires Dec 1, 2026 (12 months)
- Yearly created on Jan 1 → Expires Jan 1, 2027 (2 years)
- Permanent → Never expires (`expires_at` = null)

Retention Cleanup Task
ARQ Cron Job runs daily at 3:00 AM:
What it does:
- Find all backups with `expires_at < now()`
- Update status from `COMPLETED` to `EXPIRED`
- Delete backup file from cloud storage
- Keep database record for audit (marked as `EXPIRED`)
- Log cleanup actions
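The selection step of this job can be sketched as a filter over backup records. This is a simplification of whatever query the real task runs; representing permanent backups as `expires_at = None` matches the expiration table above:

```python
from datetime import datetime, timezone


def find_expired(backups, now=None):
    """Select completed backups whose expires_at has passed.

    Permanent backups (expires_at is None) are never selected.
    """
    now = now or datetime.now(timezone.utc)
    return [
        b for b in backups
        if b["status"] == "completed"
        and b.get("expires_at") is not None
        and b["expires_at"] < now
    ]
```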
Example Cleanup Log:
2024-12-11 03:00:00 - Retention cleanup started
2024-12-11 03:00:05 - Found 15 expired backups
2024-12-11 03:00:10 - Deleted backup-abc (expired 2 days ago)
2024-12-11 03:00:15 - Deleted backup-def (expired 5 days ago)
...
2024-12-11 03:02:30 - Cleanup completed: 15 backups deleted, 2.3 GB freed

Manual Override: Admins can extend expiration or mark backups as permanent.
Storage Providers Configuration
AWS S3
Use Case: Enterprise, multi-region, high reliability
Configuration:
{
"provider": "s3",
"bucket": "my-odoo-backups",
"region": "us-east-1",
"access_key": "AKIAIOSFODNN7EXAMPLE",
"secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"storage_class": "STANDARD_IA",
"path_prefix": "production"
}

Storage Classes:
- `STANDARD` - Frequently accessed (most expensive)
- `STANDARD_IA` - Infrequent access (recommended for backups)
- `GLACIER` - Archive (cheapest, slower retrieval)
Cost: ~$0.023/GB/month + retrieval fees
Pros: Industry standard, 99.999999999% durability, multi-region
Cons: Egress fees ($0.09/GB), complex pricing
Cloudflare R2
Use Case: Cost optimization, frequent restores
Configuration:
{
"provider": "r2",
"bucket": "odoo-backups",
"account_id": "abc123def456",
"access_key": "access_key_id",
"secret_key": "secret_access_key",
"path_prefix": "backups"
}

Cost: ~$0.015/GB/month, with zero egress fees
Pros: S3-compatible API, no egress fees, fast
Cons: Limited to Cloudflare network, fewer regions
Recommendation: Best choice for most users (cost + performance)
Backblaze B2
Use Case: Long-term archival, lowest cost
Configuration:
{
"provider": "b2",
"bucket": "odoo-archive",
"region": "us-west-002",
"application_key_id": "key_id",
"application_key": "key_secret"
}

Cost: ~$0.005/GB/month (cheapest)
Pros: Extremely low cost, simple pricing, S3-compatible
Cons: Slower downloads, limited regions
MinIO (Self-Hosted)
Use Case: GDPR compliance, on-premises, air-gapped
Configuration:
{
"provider": "minio",
"bucket": "odoo-backups",
"endpoint_url": "https://minio.company.local:9000",
"access_key": "admin",
"secret_key": "password"
}

Cost: Infrastructure + management only
Pros: Full control, GDPR-friendly, no vendor lock-in
Cons: Self-managed, requires infrastructure
SFTP
Use Case: Legacy systems, secure file transfer
Configuration:
{
"provider": "sftp",
"ftp_host": "sftp.company.com",
"ftp_port": 22,
"ftp_username": "backup_user",
"ftp_password": "secure_password",
"ftp_base_path": "/backups/odoo"
}

Pros: Secure, widely supported, simple
Cons: Manual management, no versioning
FTP
Use Case: Legacy integration, simple requirements
Configuration:
{
"provider": "ftp",
"ftp_host": "ftp.company.com",
"ftp_port": 21,
"ftp_username": "backup_user",
"ftp_password": "password",
"ftp_use_ssl": true,
"ftp_passive_mode": true,
"ftp_base_path": "/backups"
}

Pros: Simple, compatible, ubiquitous
Cons: Less secure (even with SSL), no advanced features
Troubleshooting
Issue 1: "No Storage Configuration Available"
Symptoms: API returns 400 Bad Request
Cause: No storage provider configured for organization
Solution:
- Navigate to Settings → Storage
- Click "Add Storage Configuration"
- Select provider (S3, R2, B2, etc.)
- Enter credentials and bucket details
- Test connection
- Mark as default (optional)
- Save configuration
- Retry backup creation
Issue 2: Backup Fails with "Connection Timeout"
Symptoms: Backup status = FAILED, error: "Failed to connect to storage"
Causes:
- Invalid credentials
- Bucket doesn't exist
- Network connectivity issues
- Firewall blocking outbound connections
Solution:
- Test storage connection: `POST /api/v1/backups/storage-configs/{config_id}/test`
- Verify credentials are correct
- Check bucket exists and is accessible
- Verify OEC.SH backend can reach storage endpoint
- Check firewall rules allow HTTPS outbound (port 443)
Issue 3: Backup Size Exceeds Quota
Symptoms: Backup fails with "Storage quota exceeded"
Cause: Organization storage quota limit reached
Solution:
- Check current usage: `GET /api/v1/organizations/{org_id}/quota`
- Delete old backups to free space
- Upgrade organization plan for more quota
- Configure backup policy to delete old backups automatically
- Exclude filestore if only database needed
Issue 4: "Container Not Found" Error
Symptoms: Backup fails at database dump step
Cause: Environment not deployed or containers stopped
Solution:
- Check environment status (should be `RUNNING`)
- Verify deployment succeeded
- Check containers are running: `docker ps | grep {env_id}`
- Redeploy environment if needed
- Retry backup after environment is stable
Issue 5: Backup Taking Too Long
Symptoms: Backup stuck at same progress for 10+ minutes
Causes:
- Very large database (>10 GB)
- Very large filestore (>50 GB)
- Slow network connection to storage
- Server under heavy load
Solution:
- Check backup size estimates:
  - Database: `SELECT pg_database_size('db_name')`
  - Filestore: `du -sh /var/lib/odoo/filestore/`
- Schedule backups during off-peak hours
- Consider excluding filestore if unnecessary
- Upgrade server resources (CPU, network)
- Use local storage (MinIO) for faster uploads
Best Practices
1. Configure Multiple Storage Providers
✅ Do: Set up primary + secondary storage (e.g., R2 + B2)
✅ Do: Use different providers for redundancy
❌ Don't: Rely on a single storage provider (single point of failure)
Example:
- Primary: Cloudflare R2 (fast restores)
- Secondary: Backblaze B2 (archival, cost-effective)
2. Test Restore Regularly
✅ Do: Test the restore process monthly in a non-production environment
✅ Do: Verify backup integrity by restoring to staging
✅ Do: Document restore procedures
❌ Don't: Assume backups work without testing
3. Include Filestore in Backups
✅ Do: Always include filestore for complete backups
✅ Do: Understand what data is in filestore (documents, images)
❌ Don't: Skip filestore to save space (incomplete backups)
Exception: Database-only backups acceptable for:
- Testing/development environments
- When filestore is backed up separately
- When filestore is negligible (less than 10 MB)
4. Use Appropriate Retention
✅ Do: Set retention based on data criticality
✅ Do: Use "Permanent" for compliance-critical backups
✅ Do: Configure a backup policy for automatic retention
❌ Don't: Set all backups to "Permanent" (wastes storage)
Recommended Retention:
- Development: Daily (7 days)
- Staging: Weekly (4 weeks)
- Production: Monthly (12 months) or Yearly (2 years)
5. Monitor Backup Status
✅ Do: Set up alerts for failed backups
✅ Do: Review the backup list weekly
✅ Do: Check the "Last Backup" date in the environment list
❌ Don't: Ignore failed backup notifications
Related Documentation
- Restore from Backup - Restore backups to environments
- Configure Storage - Set up storage providers
- Backup Policies - Automated backup schedules
Last Updated: December 11, 2025
Applies to: OEC.SH v2.0+
Related Sprint: Sprint 2E41 - Documentation System