Your server stores opaque blobs. Your browser holds the keys. Single Go binary. No Docker, no Postgres, no Redis, no S3. All encryption and decryption happens in the browser, in memory, never on disk.
data/
├── darkreel.db                    # rows of ciphertext
└── f47ac10b-58cc/
    └── a3d9c8e2-7b14/
        ├── 000000.enc [4.00 MB]   # could be anything
        ├── 000001.enc [2.00 MB]   # padded to bucket size
        ├── 000002.enc [1.00 MB]   # random fill hides real size
        └── thumb.enc  [256 KB]    # encrypted thumbnail
Every file timestamp on disk reads 2024-01-01T00:00:00Z. Every chunk is padded
to 1, 2, 4, 8, or 16 MB with random data. Upload dates are coarsened to year only.
An attacker with root on your server sees uniform blobs with no meaningful metadata.
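The padding rule above can be sketched in a few lines of Go. This is illustrative only — `bucketSize` and `padToBucket` are assumed names, not Darkreel's actual code:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// bucketSize returns the smallest padding bucket (1/2/4/8/16 MB)
// that fits n bytes of ciphertext.
func bucketSize(n int) int {
	for _, b := range []int{1 << 20, 2 << 20, 4 << 20, 8 << 20, 16 << 20} {
		if n <= b {
			return b
		}
	}
	return 16 << 20 // chunks are ~1 MB before padding, so this is never hit
}

// padToBucket appends cryptographically random bytes so every chunk
// written to disk (and sent over the wire) is exactly one bucket size.
func padToBucket(chunk []byte) []byte {
	padded := make([]byte, bucketSize(len(chunk)))
	copy(padded, chunk)
	if _, err := rand.Read(padded[len(chunk):]); err != nil {
		panic(err)
	}
	return padded
}

func main() {
	fmt.Println(len(padToBucket(make([]byte, 3<<20)))) // prints 4194304 (3 MB → 4 MB bucket)
}
```

Because the fill comes from `crypto/rand`, padding bytes are indistinguishable from AES-GCM ciphertext to an observer.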
| data | visible to server |
|---|---|
| file content | no - AES-256-GCM, per-file key |
| file names | no - encrypted metadata blob |
| file types / MIME | no - encrypted metadata blob |
| file sizes | no - chunk padding + encrypted metadata |
| dimensions / duration | no - encrypted metadata blob |
| thumbnails | no - separate encrypted key |
| folder structure | no - encrypted blob |
| passwords | never - Argon2id hash only |
| master key | never - encrypted, browser-only. Cleared from server memory immediately after login. |
| usernames | yes |
| file count per user | yes - database row count |
| total storage | approximate - quantized to 256 KB buckets; padding obscures per-file sizes |
| upload time | year only - coarsened |
The master key and private key never leave the browser after login. The browser imports the
private key as a non-extractable CryptoKey — even XSS cannot exfiltrate its
bytes, only use it to derive shared secrets. Both master key and private key are also wrapped
by a 256-bit recovery code generated at account creation and rotated on every password change.
Uploads (browser, CLI, or delegated third-party) share one wire format. Delegated clients hold only the public key — they can seal uploads to the user but cannot open anything. This lets other apps upload to your account without ever seeing a password or an AES key.
All block-level encryption uses Additional Authenticated Data (AAD) to bind ciphertext to its context. An attacker with database access cannot swap encrypted keys or metadata between users or media items - decryption fails if the AAD doesn't match.
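The AAD binding can be demonstrated with Go's stdlib AES-GCM. The `"mediaID:index"` AAD encoding below is an assumption for illustration, not Darkreel's actual wire format:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealChunk encrypts one chunk under the per-file key, binding the
// ciphertext to its media ID and chunk index via AAD.
func sealChunk(key, plaintext []byte, mediaID string, index uint32) ([]byte, error) {
	block, err := aes.NewCipher(key) // 32-byte key → AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	aad := []byte(fmt.Sprintf("%s:%d", mediaID, index)) // illustrative encoding
	return gcm.Seal(nonce, nonce, plaintext, aad), nil
}

// openChunk succeeds only when the same media ID and index are
// presented: a swapped or reordered chunk fails authentication.
func openChunk(key, sealed []byte, mediaID string, index uint32) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	aad := []byte(fmt.Sprintf("%s:%d", mediaID, index))
	return gcm.Open(nil, sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():], aad)
}

func main() {
	key := make([]byte, 32)
	sealed, _ := sealChunk(key, []byte("frame data"), "media-a", 0)
	_, err := openChunk(key, sealed, "media-a", 1) // wrong index
	fmt.Println(err != nil) // prints true: reordering breaks decryption
}
```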
| mechanism | details |
|---|---|
| password hashing | Argon2id - 3 iterations, 64 MB, 4 threads |
| master key derivation | Argon2id - separate salt from auth hash |
| file / thumb / metadata encryption | AES-256-GCM - media ID + chunk index as AAD (prevents reordering and cross-file substitution) |
| per-file key wrapping | X25519 + HKDF-SHA256 + AES-256-GCM - sealed-box to user's public key, 92-byte output per 32-byte key |
| user keypair | X25519 - private key dual-wrapped (master key + recovery code) |
| session key | PBKDF2-SHA256 - 600,000 iterations |
| chunk padding | random fill to 1 / 2 / 4 / 8 / 16 MB buckets (on disk and over the network) |
| metadata padding | ASCII-space fill to power-of-2 from 512 B (JSON-parseable without an unpad step) |
| delegation tokens | HS256 JWT, scope=upload, 1 h TTL — rejected on every non-upload endpoint |
| refresh tokens | 32-byte URL-safe random, stored as sha256("darkreel:delegation-refresh-v1"‖token) |
| hash modification | nonce injection - JPEG COM, PNG tEXt, MP4 free box (appended at end), WebM Void element |
| deletion | 1-pass random overwrite → fsync → unlink (keys deleted first) |
// encrypted chunk
[nonce: 12 bytes] [ciphertext] [GCM tag: 16 bytes]
// chunk index bound as AAD - reorder and decryption fails
// padded chunk on disk
[real length: 4B big-endian] [encrypted data] [random padding → bucket]
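That on-disk framing can be sketched as follows. Names are illustrative, and whether the 4-byte prefix counts toward the bucket boundary is an assumption (here it sits in front of the bucket):

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// frameChunk lays out a padded chunk as shown above:
// [real length: 4B big-endian][encrypted data][random fill → bucket].
func frameChunk(encrypted []byte, bucket int) []byte {
	framed := make([]byte, 4+bucket)
	binary.BigEndian.PutUint32(framed[:4], uint32(len(encrypted)))
	copy(framed[4:], encrypted)
	if _, err := rand.Read(framed[4+len(encrypted):]); err != nil {
		panic(err)
	}
	return framed
}

// unframeChunk recovers exactly the ciphertext, discarding the padding.
func unframeChunk(framed []byte) []byte {
	n := binary.BigEndian.Uint32(framed[:4])
	return framed[4 : 4+n]
}

func main() {
	framed := frameChunk([]byte("ciphertext"), 1<<20)
	fmt.Println(len(framed), string(unframeChunk(framed))) // prints 1048580 ciphertext
}
```

The length prefix is the only plaintext structure in the file, and it reveals nothing beyond what the bucket size already implies.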
Videos are remuxed to fragmented MP4 on upload - no re-encoding. The CLI uses ffmpeg, which handles virtually any container (WEBM, MKV, AVI, etc.); the browser uses mp4box.js (144 KB, no WASM), which handles MP4/MOV.
upload:
container → extract samples → fMP4 segments (~2s)
→ merge into ~1 MB chunks → AES-256-GCM encrypt
→ pad to bucket size → upload
playback:
fetch chunk (prefetch-ahead) → Web Worker decrypt
→ MediaSource Extensions → <video>
→ playback starts after first chunk
download:
fetch all chunks → decrypt → fMP4 → standard MP4
iOS Safari 17.1+ uses ManagedMediaSource. Non-remuxable formats uploaded via browser are stored as-is and played via blob URL.
MP4 MOV WEBM MKV M4V JPG PNG GIF WEBP
MP4 and MOV get streaming playback in the browser. CLI uploads via ffmpeg support all video formats with full streaming.
Any other file type can also be uploaded and stored with full encryption - no preview, just encrypted storage and download.
A 3 MB file becomes 4 MB on disk and over the wire. A 5 MB file becomes 8 MB. Thumbnails are always 256 KB regardless of actual size. This is the cost of preventing size fingerprinting on both storage and network layers.
Upload dates are stored as year only. Precise timestamps reveal usage patterns. That precision is deliberately discarded.
The server can't see your files, so it can't generate thumbnails. The browser encrypts them before upload with a separate per-file key.
Lose your password and your recovery code? Your data is cryptographically gone. No backdoor, no admin recovery, no "forgot password" email. This is correct behavior for a zero-knowledge system.
The overwrite pass works on HDDs. On SSDs, wear leveling may retain old data. Since encryption keys are deleted before shredding, the ciphertext is computationally unrecoverable regardless. LUKS full-disk encryption on the data partition provides an additional layer — see hardening.
Storage quotas are enforced against encrypted byte counts, quantized to 256 KB buckets to prevent exact-size content fingerprinting. Actual disk usage is higher than the quota suggests because every chunk is padded to a bucket boundary (1/2/4/8/16 MB). This is intentional — exposing exact or padded sizes would weaken size-fingerprinting resistance.
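The quota quantization might look like this (illustrative, not Darkreel's actual code):

```go
package main

import "fmt"

// quantizeUsage rounds an encrypted byte count up to the next 256 KB
// bucket, so per-user storage accounting never exposes exact sizes.
func quantizeUsage(encryptedBytes int64) int64 {
	const bucket = 256 << 10 // 262144 bytes
	return (encryptedBytes + bucket - 1) / bucket * bucket
}

func main() {
	fmt.Println(quantizeUsage(1), quantizeUsage(300_000)) // prints 262144 524288
}
```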
With PERSIST_SESSION=true (the default), the master key is cached in
sessionStorage so page refreshes don't require re-login. A tight CSP and SRI on all
scripts make XSS difficult, but a malicious browser extension could read it. Set
PERSIST_SESSION=false to keep the key in memory only.
Each login performs two Argon2id derivations (~600 ms total, 128 MB RAM, 8 threads). On an 8-core machine, only ~2 logins can run at full speed concurrently. This is a deliberate security trade-off.
Darkreel does not support horizontal scaling or clustering. A single machine with adequate disk is sufficient for most self-hosted use cases.
One writer at a time. Rarely a practical bottleneck since chunk I/O doesn't hold the database lock.
The number of chunks per file is sent unencrypted during upload so the server can validate upload completeness. Since chunks are ~1 MB each, this reveals approximate file size. Exact sizes remain hidden by chunk padding and encrypted metadata.
git clone https://github.com/baileywjohnson/darkreel.git && cd darkreel
sudo ./setup.sh
# firewall, fail2ban, SSH hardening, TLS via Caddy,
# systemd service, daily backups - all handled
Designed for a fresh Ubuntu 22.04+ or Debian 12+ VPS. The script asks for your domain (verified against server IP), an admin password, and optionally a personal SSH user. Safe to re-run.
$ git clone https://github.com/baileywjohnson/darkreel.git && cd darkreel
$ bash build.sh
$ DARKREEL_ADMIN_PASSWORD='YourStr0ng!Password' ./darkreel
listening on :8080
# put Caddy or nginx in front for TLS
Darkreel handles 100K+ media items without issue. Chunks stream with zero-copy I/O. Startup is parallelized (integrity checks, cleanup, and migration run concurrently). Streaming uploads use ~64 KB memory per chunk write regardless of file size, so uploading a 10 GB file costs the same RAM as a 10 MB file.
$ ./darkreel -addr :8080 -data ./data
# Environment variables
DARKREEL_ADMIN_USERNAME=admin # first-run only
DARKREEL_ADMIN_PASSWORD=... # required on first run
PERSIST_SESSION=true # master key in sessionStorage (default: on)
ALLOW_REGISTRATION=false # initial state; admin panel toggle persists to DB
TRUST_PROXY=false # enable behind Caddy/nginx for correct rate limiting
TRUST_PROXY_CIDR=127.0.0.1/32 # optional: only honor proxy headers from these CIDRs
MAX_STORAGE_GB=50 # per-user storage quota in GB (default: 1 GB if not set)
All endpoints except /health and /api/config require a JWT.
JWTs contain user ID, session ID, and admin flag.
The database is load-bearing. Every encrypted file key lives in
darkreel.db. Lose the database and every file on disk becomes permanently
undecryptable - even with the correct password.
# hot backup (server stays running, WAL-safe)
$ sqlite3 /var/lib/darkreel/darkreel.db ".backup /path/to/backup.db"
# full backup (stop for consistency)
$ sudo systemctl stop darkreel
$ tar czf darkreel-backup.tar.gz /var/lib/darkreel/
$ sudo systemctl start darkreel
The setup script configures daily encrypted backups (AES-256-CBC with a dedicated key, 30-day retention). Backups are safe to store off-site - media is encrypted on disk, and database backups are additionally encrypted at rest. An attacker with a backup can't decrypt anything without a user's password.
Migrations run automatically on startup.
$ cd /opt/darkreel && git pull && bash build.sh
$ sudo cp darkreel /usr/local/bin/darkreel
$ sudo systemctl restart darkreel
# or use the auto-updater (checks GitHub releases, verifies SHA-256 + Ed25519 signature, refuses unsigned binaries)
$ sudo ./update.sh --install # daily cron at 4 AM
The setup script handles all of this automatically. Checklist for deploying manually:
| measure | detail |
|---|---|
| TLS termination | Caddy or nginx - darkreel doesn't handle TLS |
| UFW firewall | SSH, HTTP, HTTPS only |
| fail2ban | auto-ban after failed SSH attempts |
| SSH hardening | root login disabled, key-only auth |
| systemd sandbox | 15+ directives: capabilities, syscall filter, namespaces, and more |
| dedicated user | darkreel user with minimal permissions |
| SRI hashes | all JS/CSS integrity-verified, including dynamic loads |
| rate limiting | 5 auth/min/IP + 10/15min/username (distributed brute-force protection) |
| security headers | CSP, HSTS, Permissions-Policy, Cache-Control: no-store |
| COOP/COEP | defense-in-depth for SharedArrayBuffer |
| graceful shutdown | drains in-flight requests before closing database |
| session expiry | 24 h max, periodic cleanup of stale sessions |
| password change | all other sessions invalidated immediately |
| admin re-check | admin status verified from DB on every admin request |
| timing mitigation | login and recovery endpoints do dummy work for unknown users |
| BREACH mitigation | compression disabled on auth endpoints that return secrets |
| last-admin guard | atomic transaction prevents TOCTOU race on admin deletion |
| storage quotas | mandatory per-user limits in GB (byte-accurate), raise-only, validated against disk (2 GB reserve) |
| secure delete (DB) | PRAGMA secure_delete zeroes deleted pages - prevents forensic recovery from SQLite/WAL |
| hashed identifiers | IPs and usernames keyed-hashed (SipHash, per-process random seed) - no plaintext in dumps, no collision attacks |
| privacy-safe logs | no usernames, user IDs, media IDs, or IP addresses in server logs |
| path validation | UUID validation at storage layer - defense-in-depth against path traversal |
| proxy-aware IPs | X-Forwarded-For trust off by default - opt-in via TRUST_PROXY, optionally scoped to TRUST_PROXY_CIDR allowlist |
| startup integrity | incomplete uploads cleaned up on restart, crashed size records backfilled |
| encrypted backups | AES-256-CBC with dedicated key, 30-day retention |
| upload concurrency | max 3 concurrent uploads per user - prevents disk exhaustion |
| network padding | chunks/thumbnails sent padded over the wire, not just on disk |
| master key clearing | cleared from server memory immediately after login response |
| recovery rotation | recovery code rotated on every password change |
| signed updates | auto-updater requires valid Ed25519 signature, hard failure on missing key |
| access log control | setup offers to disable Caddy access logs for privacy |
| thumbnail validation | oversized thumbnails rejected with clear error, not silently truncated |
| folder tree padding | encrypted folder blobs padded with random bytes, not zeros - prevents DB-level size inference |
| thread-safe PRNG | per-goroutine ChaCha8 instances for padding and shredding - no shared mutable state |
| shredder shutdown | rejects new work after shutdown begins - prevents panics during graceful shutdown |
| metadata update limits | PATCH endpoint enforces same size limits as upload (64 KB metadata, 64 B nonces) |
| dynamic cache-bust | dynamically loaded scripts include content-hash params - prevents stale cache after upgrades |
| generic reg errors | public registration returns generic error on failure - prevents username enumeration |
| admin storage coarsening | per-user used_bytes coarsened to nearest GB - reduces per-upload activity monitoring |
The secure deletion overwrite is defense-in-depth — encryption keys are deleted first, making the ciphertext computationally unrecoverable. The overwrite is unreliable on SSDs due to wear leveling. If your threat model includes physical disk seizure, encrypt the data partition:
# set up LUKS before installing Darkreel
$ sudo cryptsetup luksFormat /dev/sdX
$ sudo cryptsetup open /dev/sdX darkreel-data
$ sudo mkfs.ext4 /dev/mapper/darkreel-data
$ sudo mount /dev/mapper/darkreel-data /var/lib/darkreel
With LUKS, all data at rest is encrypted at the block level. The combination of Darkreel’s application-layer encryption (keys deleted before shredding) and LUKS block-layer encryption provides two independent layers of protection against physical recovery. Recommended for production VPS deployments.
PERSIST_SESSION (default: true) controls whether the master
encryption key is cached in sessionStorage between page refreshes.
| setting | trade-off |
|---|---|
| PERSIST_SESSION=true (default) | master key survives refresh; XSS or a malicious extension could read it (CSP + SRI make XSS difficult, not impossible) |
| PERSIST_SESSION=false | key only in JS memory, cleared on refresh; users re-enter their password on every page load; more secure, less convenient |
To disable: set PERSIST_SESSION=false in /etc/darkreel/env
and restart the service.