Sandbox Containers
The sandbox container pool is a set of pre-warmed, generic Docker containers for executing untrusted user code. This is the default execution mode for all functions (`shared_pool=false`).
How it works:
- On startup, the pool creates `sandbox_min_size` containers (default: 4) ready to accept work.
- When a function executes, a container is acquired from the idle pool, used, and returned.
- Containers are recycled (destroyed and replaced) after `sandbox_max_executions` uses (default: 100) to prevent state leakage between executions.
- If a container errors during execution, it's marked as tainted and destroyed immediately.
- A background replenishment loop monitors the idle count and creates new containers whenever it drops below `sandbox_min_idle` (default: 2), up to `sandbox_max_size` (default: 20).
- Health checks run every 60 seconds to detect and replace dead containers.
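The acquire/release/recycle cycle described above can be sketched in plain Python. This is an illustrative model, not the actual implementation; the class and method names are assumptions:

```python
from collections import deque
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass(eq=False)  # identity-based hashing so containers can live in a set
class Container:
    id: int = field(default_factory=lambda: next(_ids))
    executions: int = 0

class SandboxPool:
    """Illustrative sketch of the pool logic; names are hypothetical."""

    def __init__(self, min_size=4, max_size=20, min_idle=2, max_executions=100):
        self.max_size = max_size
        self.min_idle = min_idle
        self.max_executions = max_executions
        self.idle = deque(Container() for _ in range(min_size))
        self.busy = set()

    @property
    def total(self):
        return len(self.idle) + len(self.busy)

    def acquire(self):
        container = self.idle.popleft()  # first available idle container
        self.busy.add(container)
        return container

    def release(self, container, errored=False):
        self.busy.discard(container)
        container.executions += 1
        # Tainted or over-used containers are destroyed, not returned.
        if not errored and container.executions < self.max_executions:
            self.idle.append(container)
        self._replenish()  # the real pool runs this in a background loop

    def _replenish(self):
        while len(self.idle) < self.min_idle and self.total < self.max_size:
            self.idle.append(Container())
```

An errored release drops the tainted container entirely; the replenish step then tops the idle pool back up to `min_idle` as long as the `max_size` cap allows.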
Resource limits:

| Constraint | Default |
|---|---|
| Memory | 512 MB (MAX_FUNCTION_MEMORY) |
| CPU | 1.0 cores (MAX_FUNCTION_CPU) |
| Disk | 1 GB (MAX_FUNCTION_STORAGE) |
| Execution time | 300 seconds (FUNCTION_TIMEOUT) |
| Temp storage | 100 MB tmpfs at /tmp |
| Capabilities | All dropped, only CHOWN/SETUID/SETGID added |
| Privilege escalation | Disabled (no-new-privileges) |
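Expressed as Docker API options, the limits above map onto keyword arguments for the docker Python SDK's `containers.run`. The call site is an assumption; the values mirror the table:

```python
# Sketch of the sandbox limits as docker-py `containers.run` kwargs.
# The surrounding call is an assumption; values mirror the table above.
sandbox_opts = {
    "mem_limit": "512m",                       # MAX_FUNCTION_MEMORY
    "nano_cpus": 1_000_000_000,                # MAX_FUNCTION_CPU (1.0 cores)
    "storage_opt": {"size": "1g"},             # MAX_FUNCTION_STORAGE
    "tmpfs": {"/tmp": "size=100m"},            # 100 MB scratch space
    "cap_drop": ["ALL"],                       # drop every capability...
    "cap_add": ["CHOWN", "SETUID", "SETGID"],  # ...then re-add only these
    "security_opt": ["no-new-privileges"],     # block privilege escalation
}
```

The 300-second execution limit (FUNCTION_TIMEOUT) is not a Docker option; it would be enforced by the caller that waits on the execution.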
Shared Containers
Functions marked `shared_pool=true` run in persistent shared containers instead of sandbox containers. This is an admin-only option for trusted code that benefits from longer-lived containers.
Differences from sandbox:
| | Sandbox Containers | Shared Containers |
|---|---|---|
| Trust level | Untrusted user code | Trusted admin code only |
| Isolation | Per-request (recycled after N uses) | Shared (persistent containers) |
| Lifecycle | Created/destroyed automatically | Persist until explicitly scaled down |
| Scaling | Auto-replenishment + manual | Manual via API only |
| Load balancing | First available idle container | Round-robin across workers |
| Best for | User-submitted functions | Admin functions, long-startup libraries |
Use `shared_pool=true` (shared containers) for:
- Functions created and maintained by admins (not user-submitted code)
- Functions that import heavy libraries (pandas, scikit-learn) where container startup cost matters
- Performance-critical functions that benefit from warm containers
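The round-robin dispatch that the comparison table mentions can be sketched with `itertools.cycle`; the worker names here are hypothetical:

```python
from itertools import cycle

# Hypothetical shared-worker names; the real pool tracks live containers.
workers = ["shared-worker-1", "shared-worker-2", "shared-worker-3"]
rotation = cycle(workers)

def pick_worker():
    # Round-robin: each call hands back the next worker in the rotation.
    return next(rotation)

assignments = [pick_worker() for _ in range(5)]
```

This contrasts with the sandbox pool, which simply grabs the first idle container; round-robin keeps load even across workers that all stay alive.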
Queue Workers
All function and agent executions are processed asynchronously through Redis-based queues (arq). Two separate worker types handle different workloads:

| Worker | Docker service | Queue | Concurrency | Retries |
|---|---|---|---|---|
| Function workers | queue-worker | sinas:queue:functions | 10 jobs/worker | Up to 3 |
| Agent workers | queue-agent | sinas:queue:agents | 5 jobs/worker | None (not idempotent) |
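The retry column translates to retry-then-dead-letter semantics for function jobs, while agent jobs get no retries because they are not idempotent. A minimal sketch (class and method names are assumptions, not the real arq worker):

```python
class FunctionQueue:
    """Sketch of retry-then-DLQ semantics; not the real arq worker."""

    def __init__(self, max_retries=3):
        self.max_retries = max_retries
        self.dlq = []  # dead-letter queue for jobs that exhaust retries

    def execute(self, run_job, job):
        for attempt in range(1, self.max_retries + 1):
            try:
                return run_job(job)
            except Exception:
                if attempt == self.max_retries:
                    self.dlq.append(job)  # retries exhausted: dead-letter
                    return None
        return None
```

Agent jobs would skip this loop entirely: a single failure marks the execution failed rather than risking a duplicate side effect on retry.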
Jobs move through the lifecycle queued → running → completed or failed. Stale or orphaned jobs can be cancelled via the admin API: cancellation sets the job status to cancelled and marks the DB execution record as CANCELLED. It also publishes to the done channel so any waiters unblock. This is a soft cancel: it does not kill running containers.
Results are stored in Redis with a 24-hour TTL.
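The soft-cancel behaviour can be sketched with plain dicts standing in for Redis and the database; all names here are illustrative:

```python
RESULT_TTL = 24 * 60 * 60  # seconds: results expire after 24 hours

redis_jobs = {}     # stands in for Redis job state
db_executions = {}  # stands in for the executions table
done_channel = []   # stands in for the pub/sub "done" channel

def cancel(job_id):
    """Soft cancel: flip the status markers and notify waiters.

    Does NOT kill any running container; a running job simply finds
    its status flipped the next time it checks in.
    """
    redis_jobs[job_id] = "cancelled"
    db_executions[job_id] = "CANCELLED"
    done_channel.append(job_id)  # unblock anything awaiting this job

redis_jobs["abc"] = "running"
db_executions["abc"] = "RUNNING"
cancel("abc")
```

The key design point is that both status stores and the done channel are updated together, so API readers, the database, and blocked waiters all agree the job is finished.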
System Endpoints
Admin endpoints for monitoring and managing the Sinas deployment. All require `sinas.system.read:all` or `sinas.system.update:all` permissions.
Health check: the response includes:
- `services`: All Docker Compose containers with status, health, uptime, CPU %, and memory usage. Infrastructure containers (redis, postgres, pgbouncer) are listed first, followed by application containers sorted alphabetically. Sandbox and shared worker containers are included.
- `host`: Host-level CPU, memory, and disk usage (read from `/proc` on Linux).
- `warnings`: Auto-generated alerts at three levels:
  - `critical`: No queue workers running, or infrastructure services (redis, postgres, pgbouncer) down
  - `warning`: Non-infrastructure services down, unhealthy containers, DLQ items, queue backlog >50, disk/memory >90%
  - `info`: Disk/memory >75%
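The three warning levels can be sketched as a pure function over a metrics snapshot; the dict keys and messages are assumptions:

```python
INFRA = {"redis", "postgres", "pgbouncer"}

def build_warnings(m):
    """Derive health warnings from a metrics dict (keys are illustrative)."""
    warnings = []
    # critical: no queue workers, or any infrastructure service down
    if m["queue_workers"] == 0 or INFRA & set(m["services_down"]):
        warnings.append(("critical", "no workers or infrastructure down"))
    # warning: non-infra services down, unhealthy containers, DLQ, backlog
    if (set(m["services_down"]) - INFRA or m["unhealthy"]
            or m["dlq_items"] or m["queue_backlog"] > 50):
        warnings.append(("warning", "degraded application services"))
    # resource pressure: warning above 90%, info above 75%
    if m["disk_pct"] > 90 or m["mem_pct"] > 90:
        warnings.append(("warning", "disk or memory above 90%"))
    elif m["disk_pct"] > 75 or m["mem_pct"] > 75:
        warnings.append(("info", "disk or memory above 75%"))
    return warnings
```

Keeping this logic a pure function of the snapshot makes the thresholds easy to test and keeps the health endpoint itself side-effect free.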
A stale-execution cleanup cancels jobs stuck in the running state for over 2 hours. Useful for recovering from worker crashes or orphaned jobs.
Dependencies (Python Packages)
Functions can only use Python packages that have been approved by an admin. This prevents untrusted code from installing arbitrary dependencies.

Approval flow:
- Admin approves a dependency (optionally pinning a version)
- Package becomes available in newly created containers and workers
- Use `POST /containers/reload` or `POST /workers/reload` to install into existing containers
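A minimal sketch of the allowlist check behind this flow. The approval store and function names are assumptions; only the "approved, optionally pinned" rule comes from the text above:

```python
# Hypothetical approval store: package -> pinned version (None = any version).
approved = {"pandas": "2.2.0", "requests": None}

def pip_spec(package):
    """Return a pip requirement string for an approved package, or raise."""
    if package not in approved:
        raise PermissionError(f"{package} is not an approved dependency")
    version = approved[package]
    # Pinned packages install exactly the approved version.
    return f"{package}=={version}" if version else package
```

The reload endpoints would then run pip inside each live container with the specs produced here, so existing containers catch up with newly approved packages.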
Configuration Reference
Container pool:

| Variable | Default | Description |
|---|---|---|
| POOL_MIN_SIZE | 4 | Containers created on startup |
| POOL_MAX_SIZE | 20 | Maximum total containers |
| POOL_MIN_IDLE | 2 | Replenish when idle count drops below this |
| POOL_MAX_EXECUTIONS | 100 | Recycle container after this many uses |
| POOL_ACQUIRE_TIMEOUT | 30 | Seconds to wait for an available container |
Function limits:

| Variable | Default | Description |
|---|---|---|
| FUNCTION_TIMEOUT | 300 | Max execution time in seconds |
| MAX_FUNCTION_MEMORY | 512 | Memory limit per container (MB) |
| MAX_FUNCTION_CPU | 1.0 | CPU cores per container |
| MAX_FUNCTION_STORAGE | 1g | Disk storage limit |
| FUNCTION_CONTAINER_IDLE_TIMEOUT | 3600 | Idle container cleanup (seconds) |
Workers and queues:

| Variable | Default | Description |
|---|---|---|
| DEFAULT_WORKER_COUNT | 4 | Shared workers created on startup |
| QUEUE_WORKER_REPLICAS | 2 | Function queue worker processes |
| QUEUE_AGENT_REPLICAS | 2 | Agent queue worker processes |
| QUEUE_FUNCTION_CONCURRENCY | 10 | Concurrent jobs per function worker |
| QUEUE_AGENT_CONCURRENCY | 5 | Concurrent jobs per agent worker |
| QUEUE_MAX_RETRIES | 3 | Retry attempts before DLQ |
| QUEUE_RETRY_DELAY | 10 | Seconds between retries |
Dependencies:

| Variable | Default | Description |
|---|---|---|
| ALLOW_PACKAGE_INSTALLATION | true | Enable pip in containers |
| ALLOWED_PACKAGES | (empty) | Comma-separated whitelist (empty = all allowed) |
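All of these variables follow the same pattern: read from the environment with a fallback default. A hedged sketch of how the pool settings might be loaded (the dataclass and function names are illustrative, not the actual config module):

```python
import os
from dataclasses import dataclass

def env_int(name, default):
    # Environment variables arrive as strings; coerce with a fallback.
    return int(os.environ.get(name, default))

@dataclass
class PoolConfig:
    min_size: int
    max_size: int
    min_idle: int
    max_executions: int
    acquire_timeout: int

def load_pool_config():
    # Defaults mirror the container pool table above.
    return PoolConfig(
        min_size=env_int("POOL_MIN_SIZE", 4),
        max_size=env_int("POOL_MAX_SIZE", 20),
        min_idle=env_int("POOL_MIN_IDLE", 2),
        max_executions=env_int("POOL_MAX_EXECUTIONS", 100),
        acquire_timeout=env_int("POOL_ACQUIRE_TIMEOUT", 30),
    )
```

Centralising the defaults in one loader keeps the table above and the running system in agreement: an unset variable always falls back to the documented value.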