
🍿darkreel

encrypted media storage & streaming

Your server stores opaque blobs. Your browser holds the keys. Single Go binary. No Docker, no Postgres, no Redis, no S3. All encryption and decryption happens in the browser, in memory, never on disk.


what the server sees

data/
  darkreel.db                # rows of ciphertext
  f47ac10b-58cc/
    a3d9c8e2-7b14/
      000000.enc  [4.00 MB]  # could be anything
      000001.enc  [2.00 MB]  # padded to bucket size
      000002.enc  [1.00 MB]  # random fill hides real size
      thumb.enc   [256 KB]   # encrypted thumbnail

Every file timestamp on disk reads 2024-01-01T00:00:00Z. Every chunk is padded to 1, 2, 4, 8, or 16 MB with random data. Upload dates are coarsened to year only. An attacker with root on your server sees uniform blobs with no meaningful metadata.

data                   visible to server
file content           no - AES-256-GCM, per-file key
file names             no - encrypted metadata blob
file types / MIME      no - encrypted metadata blob
file sizes             no - chunk padding + encrypted metadata
dimensions / duration  no - encrypted metadata blob
thumbnails             no - separate encrypted key
folder structure       no - encrypted blob
passwords              never - Argon2id hash only
master key             never - encrypted, browser-only; cleared from server memory immediately after login
usernames              yes
file count per user    yes - database row count
total storage          approximate - quantized to 256 KB buckets; padding obscures per-file sizes
upload time            year only - coarsened

what you get

end-to-end encryption
AES-256-GCM everything. Per-file keys wrapped with your master key. The server is a dumb blob store.
zero-knowledge metadata
File names, types, sizes, dimensions, durations - encrypted into a single blob per file. The server can't read any of it.
encrypted streaming
Videos stream via MSE with chunk-level decryption in a Web Worker. No server-side decryption. Starts after one chunk.
size fingerprinting resistance
Every chunk padded to bucketed sizes with random data, both on disk and over the network. Storage sizes quantized to 256 KB in the database. Original file sizes are unrecoverable.
secure deletion
Random overwrite, fsync, then unlink. Keys deleted first — ciphertext is unrecoverable regardless. Best-effort on SSDs due to wear leveling.
multi-user
Isolated encrypted libraries. Each user gets their own master key. Admin panel for user management.
hash modification
Random nonces injected into file headers (JPEG COM, PNG tEXt, MP4 free box appended at end, WebM Void) before encryption. Identical files produce different ciphertexts, defeating duplicate detection.
chunk integrity verification
Chunk counts stored inside encrypted metadata. On download/playback, the client verifies the count matches - detects truncation attacks where chunks are deleted from the server.
generic file storage
Not just media. Upload any file type - PDFs, documents, archives, code. Same zero-knowledge encryption.
encrypted folders
Organize your files into folders. The structure is encrypted - only you can see it. Drag-and-drop to reorganize on desktop and mobile.
folder download
Download an entire folder (including subfolders) as a ZIP file, decrypted client-side. No server involvement.
image rotation
Rotate images at the pixel level. The original is securely deleted and replaced with a freshly encrypted copy using new keys.
text editor
Plain-text files (txt, md, log, csv, json, yaml, xml, ini, conf) open in an in-browser editor. Edit and save — a fresh copy is encrypted and the old one is securely deleted. Create new text documents from the Upload modal.
6 color themes
Classic, cool, forest, neon, ocean, and warm. Stored in localStorage.
recovery codes
256-bit code generated at account creation and rotated on every password change. If you lose your password, this is the only way back in. Lose both and your data is gone.
delegated uploads
Grant other apps (e.g., PPVDA) upload-only access via a copy-paste consent flow. They hold your X25519 public key and a refresh token — they can seal uploads to you but can't read, list, or delete anything. Revoke anytime from Settings → Connected Apps.
single binary
Go + embedded SQLite + embedded web UI. No runtime dependencies. No containers. No external services.
storage quotas
Per-user quotas in GB (default: 1 GB), tracked in bytes for accuracy. Per-user quotas can only be raised. Validated against available disk capacity with a 2 GB reserve.
self-hosted
Runs on your hardware. A $6/month VPS is enough. Your data never touches a third-party service.
async secure deletion
File shredding runs in a background worker pool so API responses return immediately while data is securely erased.
static asset caching
Content-hash cache busting for static assets. Aggressive caching without serving stale files after upgrades.
upload progress tracking
Real-time progress bar on gallery tiles during upload. Encryption progress (0-50%) and network transfer progress (50-100%) via XHR upload events.
streaming chunk uploads
Chunks are written to disk as they stream in. ~64 KB memory per chunk write, regardless of chunk size.
rate-limited sensitive endpoints
Password change and account deletion endpoints are rate-limited per-IP (5/min) and per-account (10/15min), preventing brute-force of the old password via stolen sessions.

key hierarchy

Password
├─ Argon2id(password, authSalt) → password hash                   (login verification)
└─ Argon2id(password, kdfSalt)  → KDF key
   └─ AES-256-GCM decrypt (AAD: userID) → master key              (browser memory only)
      ├─ AES-256-GCM unwrap (AAD: userID) → X25519 private key    (browser memory only)
      └─ encrypts folder structure (AAD: userID)

X25519 keypair    (per user, generated at registration)
├─ public key     stored plaintext, served to browsers and delegated clients
└─ private key    dual-wrapped (master key + recovery code)

Per-file keys     (one trio generated per upload)
├─ file key       AES-256-GCM encrypts chunks     (AAD: mediaID || chunkIndex)
├─ thumb key      AES-256-GCM encrypts thumbnail  (AAD: mediaID || 0)
└─ metadata key   AES-256-GCM encrypts metadata   (AAD: mediaID)

All three per-file keys are sealed to the user's public key via X25519 + HKDF-SHA256 + AES-256-GCM (info: "darkreel-seal-v1").

The master key and private key never leave the browser after login. The browser imports the private key as a non-extractable CryptoKey — even XSS cannot exfiltrate its bytes, only use it to derive shared secrets. Both master key and private key are also wrapped by a 256-bit recovery code generated at account creation and rotated on every password change.

Uploads (browser, CLI, or delegated third-party) share one wire format. Delegated clients hold only the public key — they can seal uploads to the user but cannot open anything. This lets other apps upload to your account without ever seeing a password or an AES key.

All block-level encryption uses Additional Authenticated Data (AAD) to bind ciphertext to its context. An attacker with database access cannot swap encrypted keys or metadata between users or media items - decryption fails if the AAD doesn't match.

algorithms

password hashing                    Argon2id - 3 iterations, 64 MB, 4 threads
master key derivation               Argon2id - separate salt from auth hash
file / thumb / metadata encryption  AES-256-GCM - media ID + chunk index as AAD (prevents reordering and cross-file substitution)
per-file key wrapping               X25519 + HKDF-SHA256 + AES-256-GCM - sealed-box to user's public key, 92-byte output per 32-byte key
user keypair                        X25519 - private key dual-wrapped (master key + recovery code)
session key                         PBKDF2-SHA256 - 600,000 iterations
chunk padding                       random fill to 1 / 2 / 4 / 8 / 16 MB buckets (on disk and over the network)
metadata padding                    ASCII-space fill to power-of-2 from 512 B (JSON-parseable without an unpad step)
delegation tokens                   HS256 JWT, scope=upload, 1 h TTL - rejected on every non-upload endpoint
refresh tokens                      32-byte URL-safe random, stored as sha256("darkreel:delegation-refresh-v1" ‖ token)
hash modification                   nonce injection - JPEG COM, PNG tEXt, MP4 free box (appended at end), WebM Void element
deletion                            1-pass random overwrite → fsync → unlink (keys deleted first)

on-disk format

// encrypted chunk
[nonce: 12 bytes] [ciphertext] [GCM tag: 16 bytes]
// chunk index bound as AAD - reorder and decryption fails

// padded chunk on disk
[real length: 4B big-endian] [encrypted data] [random padding → bucket]

encrypted video playback

Videos are remuxed to fragmented MP4 on upload - no re-encoding. The CLI uses ffmpeg (handles everything: WEBM, MKV, AVI, etc.). The browser uses mp4box.js (144 KB, no WASM, handles MP4/MOV).

upload:   container → extract samples → fMP4 segments (~2 s) → merge into ~1 MB chunks
          → AES-256-GCM encrypt → pad to bucket size → upload

playback: fetch chunk (prefetch-ahead) → Web Worker decrypt → MediaSource Extensions
          → <video> → playback starts after first chunk

download: fetch all chunks → decrypt → fMP4 → standard MP4

iOS Safari 17.1+ uses ManagedMediaSource. Non-remuxable formats uploaded via browser are stored as-is and played via blob URL.

media formats

MP4 MOV WEBM MKV M4V JPG PNG GIF WEBP

MP4 and MOV get streaming playback in the browser. CLI uploads via ffmpeg support all video formats with full streaming.

Any other file type can also be uploaded and stored with full encryption - no preview, just encrypted storage and download.

choices

chunk padding wastes disk space and bandwidth

A 3 MB file becomes 4 MB on disk and over the wire. A 5 MB file becomes 8 MB. Thumbnails are always 256 KB regardless of actual size. This is the cost of preventing size fingerprinting on both storage and network layers.

timestamps are coarsened

Upload dates are stored as year only. Precise timestamps reveal usage patterns. That precision is deliberately discarded.

no server-side thumbnails

The server can't see your files, so it can't generate thumbnails. The browser encrypts them before upload with a separate per-file key.

no recovery without codes

Lose your password and your recovery code? Your data is cryptographically gone. No backdoor, no admin recovery, no "forgot password" email. This is correct behavior for a zero-knowledge system.

SSD deletion is best-effort

The overwrite pass works on HDDs. On SSDs, wear leveling may retain old data. Since encryption keys are deleted before shredding, the ciphertext is computationally unrecoverable regardless. LUKS full-disk encryption on the data partition provides an additional layer — see hardening.

quotas are quantized, not exact

Storage quotas are enforced against encrypted byte counts, quantized to 256 KB buckets to prevent exact-size content fingerprinting. Actual disk usage is higher than the quota suggests because every chunk is padded to a bucket boundary (1/2/4/8/16 MB). This is intentional — exposing exact or padded sizes would weaken size-fingerprinting resistance.

session persistence is a trade-off

With PERSIST_SESSION=true (the default), the master key is cached in sessionStorage so page refreshes don't require re-login. A tight CSP and SRI on all scripts make XSS difficult, but a malicious browser extension could read it. Set PERSIST_SESSION=false to keep the key in memory only.

deliberate constraints

concurrent login throughput

Each login performs two Argon2id derivations (~600 ms total, 128 MB RAM, 8 threads). On an 8-core machine, only ~2 logins can run at full speed concurrently. This is a deliberate security trade-off.

single-machine architecture

Darkreel does not support horizontal scaling or clustering. A single machine with adequate disk is sufficient for most self-hosted use cases.

SQLite write contention

One writer at a time. Rarely a practical bottleneck since chunk I/O doesn't hold the database lock.

chunk count sent in plaintext

The number of chunks per file is sent unencrypted during upload so the server can validate upload completeness. Since chunks are ~1 MB each, this reveals approximate file size. Exact sizes remain hidden by chunk padding and encrypted metadata.

one command on a fresh VPS

full setup
git clone https://github.com/baileywjohnson/darkreel.git && cd darkreel
sudo ./setup.sh
# firewall, fail2ban, SSH hardening, TLS via Caddy,
# systemd service, daily backups - all handled

Designed for a fresh Ubuntu 22.04+ or Debian 12+ VPS. The script asks for your domain (verified against server IP), an admin password, and optionally a personal SSH user. Safe to re-run.

manual

$ git clone https://github.com/baileywjohnson/darkreel.git && cd darkreel
$ bash build.sh
$ DARKREEL_ADMIN_PASSWORD='YourStr0ng!Password' ./darkreel
listening on :8080
# put Caddy or nginx in front for TLS
~14 MB   RAM usage
1        binary
0        dependencies
$6/mo    min. VPS cost

scalability

Darkreel handles 100K+ media items without issue. Chunks stream with zero-copy I/O. Startup is parallelized (integrity checks, cleanup, and migration run concurrently). Streaming uploads use ~64 KB memory per chunk write regardless of file size, so uploading a 10 GB file costs the same RAM as a 10 MB file.

configuration

$ ./darkreel -addr :8080 -data ./data

# Environment variables
DARKREEL_ADMIN_USERNAME=admin    # first-run only
DARKREEL_ADMIN_PASSWORD=...      # required on first run
PERSIST_SESSION=true             # master key in sessionStorage (default: on)
ALLOW_REGISTRATION=false         # initial state; admin panel toggle persists to DB
TRUST_PROXY=false                # enable behind Caddy/nginx for correct rate limiting
TRUST_PROXY_CIDR=127.0.0.1/32    # optional: only honor proxy headers from these CIDRs
MAX_STORAGE_GB=50                # per-user storage quota in GB (default: 1 GB if not set)

endpoints

All endpoints except /health and /api/config require a JWT. JWTs contain user ID, session ID, and admin flag.

auth
POST    /api/auth/register          register, returns recovery code
POST    /api/auth/login             returns JWT + encrypted master key
POST    /api/auth/logout            immediate session invalidation
POST    /api/auth/recover           reset password with recovery code
POST    /api/auth/change-password   re-encrypts master key, rotates recovery code, invalidates all sessions
DELETE  /api/auth/account           delete account and all media
GET     /api/config                 server config (no auth)

media
GET     /api/media                  list media (paginated)
GET     /api/media/quota            check quota (effective quota + current usage)
GET     /api/media/:id              get metadata
POST    /api/media/upload           multipart: metadata + thumbnail + chunks
PATCH   /api/media/:id              update metadata
DELETE  /api/media/:id              secure delete (1-pass shred)
GET     /api/media/:id/chunk/:idx   download encrypted chunk
GET     /api/media/:id/thumbnail    download encrypted thumbnail

folders
GET     /api/folders                get encrypted folder tree
PUT     /api/folders                save encrypted folder tree

admin
GET     /api/admin/users            list users with storage usage
POST    /api/admin/users            create user, returns recovery code
DELETE  /api/admin/users/:id        delete user + all media
PATCH   /api/admin/users/:id/quota  raise per-user storage quota (can only increase)
GET     /api/admin/storage          storage stats (used bytes, allocated quota, disk usage)
PUT     /api/admin/storage/quota    set default storage quota for new users
POST    /api/admin/registration     toggle registration

health
GET     /health                     → {"status":"ok"}

backups

The database is load-bearing. Every encrypted file key lives in darkreel.db. Lose the database and every file on disk becomes permanently undecryptable - even with the correct password.

# hot backup (server stays running, WAL-safe)
$ sqlite3 /var/lib/darkreel/darkreel.db ".backup /path/to/backup.db"

# full backup (stop for consistency)
$ sudo systemctl stop darkreel
$ tar czf darkreel-backup.tar.gz /var/lib/darkreel/
$ sudo systemctl start darkreel

The setup script configures daily encrypted backups (AES-256-CBC with a dedicated key, 30-day retention). Backups are safe to store off-site - media is encrypted on disk, and database backups are additionally encrypted at rest. An attacker with a backup can't decrypt anything without a user's password.

upgrading

Migrations run automatically on startup.

$ cd /opt/darkreel && git pull && bash build.sh
$ sudo cp darkreel /usr/local/bin/darkreel
$ sudo systemctl restart darkreel

# or use the auto-updater (checks GitHub releases, verifies SHA-256 +
# Ed25519 signature, refuses unsigned binaries)
$ sudo ./update.sh --install   # daily cron at 4 AM

system requirements

1 vCPU min
512 MB RAM min
10 GB disk min
amd64 or arm64

security hardening

The setup script handles all of this automatically. Checklist for deploying manually:

TLS termination: Caddy or nginx - darkreel doesn't handle TLS
UFW firewall: SSH, HTTP, HTTPS only
fail2ban: auto-ban after failed SSH attempts
SSH hardening: root login disabled, key-only auth
systemd sandbox: 15+ directives: capabilities, syscall filter, namespaces, and more
dedicated user: darkreel user with minimal permissions
SRI hashes: all JS/CSS integrity-verified, including dynamic loads
rate limiting: 5 auth/min/IP + 10/15min/username (distributed brute-force protection)
security headers: CSP, HSTS, Permissions-Policy, Cache-Control: no-store
COOP/COEP: defense-in-depth for SharedArrayBuffer
graceful shutdown: drains in-flight requests before closing database
session expiry: 24h max, periodic cleanup of stale sessions
password change: all other sessions invalidated immediately
admin re-check: admin status verified from DB on every admin request
timing mitigation: login and recovery endpoints do dummy work for unknown users
BREACH mitigation: compression disabled on auth endpoints that return secrets
last-admin guard: atomic transaction prevents TOCTOU race on admin deletion
storage quotas: mandatory per-user limits in GB (byte-accurate), raise-only, validated against disk (2 GB reserve)
secure delete (DB): PRAGMA secure_delete zeroes deleted pages - prevents forensic recovery from SQLite/WAL
hashed identifiers: IPs and usernames keyed-hashed (SipHash, per-process random seed) - no plaintext in dumps, no collision attacks
privacy-safe logs: no usernames, user IDs, media IDs, or IP addresses in server logs
path validation: UUID validation at storage layer - defense-in-depth against path traversal
proxy-aware IPs: X-Forwarded-For trust off by default - opt-in via TRUST_PROXY, optionally scoped to TRUST_PROXY_CIDR allowlist
startup integrity: incomplete uploads cleaned up on restart, crashed size records backfilled
encrypted backups: AES-256-CBC with dedicated key, 30-day retention
upload concurrency: max 3 concurrent uploads per user - prevents disk exhaustion
network padding: chunks/thumbnails sent padded over the wire, not just on disk
master key clearing: cleared from server memory immediately after login response
recovery rotation: recovery code rotated on every password change
signed updates: auto-updater requires valid Ed25519 signature, hard failure on missing key
access log control: setup offers to disable Caddy access logs for privacy
thumbnail validation: oversized thumbnails rejected with clear error, not silently truncated
folder tree padding: encrypted folder blobs padded with random bytes, not zeros - prevents DB-level size inference
thread-safe PRNG: per-goroutine ChaCha8 instances for padding and shredding - no shared mutable state
shredder shutdown: rejects new work after shutdown begins - prevents panics during graceful shutdown
metadata update limits: PATCH endpoint enforces same size limits as upload (64 KB metadata, 64 B nonces)
dynamic cache-bust: dynamically loaded scripts include content-hash params - prevents stale cache after upgrades
generic reg errors: public registration returns generic error on failure - prevents username enumeration
admin storage coarse: per-user used_bytes coarsened to nearest GB - reduces per-upload activity monitoring

disk encryption (LUKS)

The secure deletion overwrite is defense-in-depth — encryption keys are deleted first, making the ciphertext computationally unrecoverable. The overwrite is unreliable on SSDs due to wear leveling. If your threat model includes physical disk seizure, encrypt the data partition:

# set up LUKS before installing Darkreel
$ sudo cryptsetup luksFormat /dev/sdX
$ sudo cryptsetup open /dev/sdX darkreel-data
$ sudo mkfs.ext4 /dev/mapper/darkreel-data
$ sudo mount /dev/mapper/darkreel-data /var/lib/darkreel

With LUKS, all data at rest is encrypted at the block level. The combination of Darkreel’s application-layer encryption (keys deleted before shredding) and LUKS block-layer encryption provides two independent layers of protection against physical recovery. Recommended for production VPS deployments.

session persistence

PERSIST_SESSION (default: true) controls whether the master encryption key is cached in sessionStorage between page refreshes.

PERSIST_SESSION=true (default)
- master key survives refresh
- XSS or a malicious extension could read it
- CSP + SRI make XSS difficult, not impossible

PERSIST_SESSION=false
- key only in JS memory, cleared on refresh
- users re-enter password on every page load
- more secure, less convenient

To disable: set PERSIST_SESSION=false in /etc/darkreel/env and restart the service.