feat(docker): deprecate docker-based services

Commit 33d0df77bf, parent bd155efe9d
Authored 2026-03-12 20:14:20 +02:00
23 changed files with 0 additions and 1281 deletions


@@ -1,27 +0,0 @@
# Docker Stacks
Individual service stacks with comprehensive documentation. See the [main README](../../README.md) for architecture overview and deployment process.
## Available Stacks
| Stack | Description | Port(s) | Mobile/Remote Access |
|-------|-------------|---------|---------------------|
| [**Immich**](./immich/) | Photo and video management with AI | 2283 | iOS/Android apps |
| [**Paperless-ngx**](./paperless/) | Document management with OCR | Web UI | Email integration |
| [**Media**](./media/) | *arr suite for media automation | 8989, 7878, 9696, 8114 | nzb360 mobile app |
| [**Pi-hole**](./pihole/) | Network-wide ad blocker | 53, 80 | Web dashboard |
| [**Arch Mirror**](./archmirror/) | Local Arch Linux package mirror | 8080 | pacman client |
## Quick Start
1. Choose a stack from the table above
2. Read the stack's README for setup instructions
3. Copy environment template: `cp stack.env stack.env.real`
4. Configure variables in `stack.env.real`
5. Deploy via Portainer using the docker-compose.yaml
Each stack directory contains:
- `docker-compose.yaml` - Service definitions
- `stack.env` - Environment template (tracked in git)
- `stack.env.real` - Actual values with secrets (gitignored)
- `README.md` - Detailed documentation
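The template-to-real-config step above can be sketched as follows (the directory and variable names here are illustrative; the actual deploy happens in the Portainer UI):

```shell
# Prepare a working copy of the environment template for one stack
cd "$(mktemp -d)"                                 # stand-in for a stack directory
printf 'TZ=UTC\nHTTP_PORT=8080\n' > stack.env     # tracked template with empty/default values
cp stack.env stack.env.real                       # gitignored copy that holds real values
sed -i 's|^TZ=.*|TZ=Europe/Kyiv|' stack.env.real  # fill in a real value
grep '^TZ=' stack.env.real                        # → TZ=Europe/Kyiv
```

Keeping the edits in `stack.env.real` means `stack.env` stays a clean, secret-free template in git.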


@@ -1,106 +0,0 @@
# Arch Linux Mirror Stack
A self-hosted Arch Linux package mirror that serves packages on the local network, reducing external bandwidth usage and speeding up downloads for local Arch systems.
## Services Overview
- **rsync-mirror**: Automated synchronization service that mirrors Arch Linux packages from upstream
- **nginx-server**: HTTP server that serves the mirrored packages to local clients
## Key Features
- **Automated Syncing**: Scheduled rsync synchronization with upstream Arch Linux mirrors
- **Local Package Serving**: Fast HTTP access to packages for local Arch Linux installations
- **Bandwidth Optimization**: Reduces external bandwidth usage for multiple Arch Linux systems
- **Health Monitoring**: Built-in health checks for both sync and web services
- **Customizable Sync**: Configurable sync schedules and rsync options
## Architecture
### Sync Process
1. **rsync-mirror** container runs scheduled sync jobs using supercronic
2. Downloads packages from configured upstream mirror
3. Stores packages in shared volume
4. **nginx-server** serves packages via HTTP
### Storage
- Shared volume between containers for package storage
- Read-only access for nginx service ensures data integrity
- Configurable storage path for flexible deployment
## Links & Documentation
### Arch Linux
- **Website**: https://archlinux.org/
- **Package Database**: https://archlinux.org/packages/
- **Mirror Status**: https://archlinux.org/mirrors/status/
- **Mirror Setup Guide**: https://wiki.archlinux.org/title/DeveloperWiki:NewMirrors
### Container Technologies
- **Docker Compose**: https://docs.docker.com/compose/
- **Nginx**: https://nginx.org/en/docs/
- **Supercronic**: https://github.com/aptible/supercronic
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `MIRROR_URL`: Upstream Arch Linux mirror URL for rsync
- `SYNC_SCHEDULE`: Cron schedule for sync operations (e.g., "0 */4 * * *" for every 4 hours)
- `TZ`: Timezone for scheduling
- `RSYNC_EXTRA_OPTIONS`: Additional rsync options for fine-tuning
- `ARCHLINUX_VOLUME_PATH`: Local path for package storage
- `HTTP_PORT`: HTTP port for package access (default: 8080)
- `NGINX_WORKERS`: Number of nginx worker processes
### Network Access
- **HTTP Server**: Accessible on configured port (default: 8080)
- **Health Checks**: Both services include health monitoring
## Usage
### Client Configuration
Configure Arch Linux clients to use the local mirror by editing `/etc/pacman.d/mirrorlist`:
```
## Local mirror
Server = http://your-server-ip:8080/archlinux/$repo/os/$arch
## Fallback mirrors
# ... other mirrors
```
### Sync Monitoring
- Monitor sync container logs for sync status and errors
- Health checks ensure services are running properly
- Nginx access logs show package download activity
## Storage Requirements
- **Full Mirror**: ~60-80GB for complete Arch Linux repository
- **Growth**: Expect ~1-2GB growth per month
- **I/O**: SSD storage recommended for better performance during sync operations
## Sync Strategy
### Recommended Schedule
- **Frequent Updates**: Every 4-6 hours for active development
- **Conservative**: Daily syncs for stable environments
- **Bandwidth Considerations**: Schedule during low-usage periods
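Expressed as `SYNC_SCHEDULE` values for `stack.env.real`, the recommendations above look like this (pick one; these are suggestions, not shipped defaults):

```
# Frequent: every 4 hours, for tracking active development
SYNC_SCHEDULE=0 */4 * * *
# Conservative: daily at 03:00, a typical low-usage window
SYNC_SCHEDULE=0 3 * * *
```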
### Upstream Mirror Selection
Choose geographically close, reliable mirrors from the [official mirror list](https://archlinux.org/mirrorlist/).
## Custom Builds
The stack uses custom Dockerfiles for both rsync and nginx services, allowing for:
- Smaller, purpose-built container images
- Specific configuration needs
- Custom sync scripts and monitoring
## Dependencies
- Docker and Docker Compose
- Sufficient storage for package mirror
- Network access to upstream Arch Linux mirrors


@@ -1,44 +0,0 @@
services:
rsync-mirror:
build:
context: ./rsync
dockerfile: Dockerfile
environment:
- MIRROR_URL=${MIRROR_URL}
- SYNC_SCHEDULE=${SYNC_SCHEDULE}
- TZ=${TZ}
- RSYNC_EXTRA_OPTIONS=${RSYNC_EXTRA_OPTIONS}
volumes:
- ${ARCHLINUX_VOLUME_PATH:-./archlinux}:/archlinux
restart: unless-stopped
healthcheck:
      test: ["CMD-SHELL", "ps aux | grep -q '[s]upercronic'"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
nginx-server:
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- "${HTTP_PORT:-8080}:80"
volumes:
- ${ARCHLINUX_VOLUME_PATH:-./archlinux}:/usr/share/nginx/html/archlinux:ro
environment:
- NGINX_WORKERS=${NGINX_WORKERS}
depends_on:
rsync-mirror:
condition: service_healthy
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 15s
networks:
default:
name: archlinux-mirror-net


@@ -1,3 +0,0 @@
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf


@@ -1,30 +0,0 @@
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
sendfile on;
gzip on;
server {
listen 80;
root /usr/share/nginx/html;
location /archlinux/ {
alias /usr/share/nginx/html/archlinux/;
autoindex on;
}
location = / {
return 301 /archlinux/;
}
location /health {
return 200 "OK\n";
add_header Content-Type text/plain;
}
}
}


@@ -1,24 +0,0 @@
FROM alpine:latest AS builder
RUN apk add --no-cache curl \
&& curl -fsSLO https://github.com/aptible/supercronic/releases/download/v0.2.29/supercronic-linux-amd64 \
&& echo "cd48d45c4b10f3f0bfdd3a57d054cd05ac96812b supercronic-linux-amd64" | sha1sum -c - \
&& chmod +x supercronic-linux-amd64
FROM alpine:latest
RUN apk add --no-cache rsync flock
COPY --from=builder /supercronic-linux-amd64 /usr/local/bin/supercronic
RUN mkdir -p /archlinux /var/log /var/lock
COPY sync.sh /usr/local/bin/sync.sh
COPY crontab /etc/crontab
RUN chmod +x /usr/local/bin/sync.sh
WORKDIR /archlinux
CMD ["supercronic", "/etc/crontab"]


@@ -1,2 +0,0 @@
# Full mirror sync every 6 hours; flock -n skips the run if a sync is already in progress
0 */6 * * * /usr/bin/flock -n /var/lock/archlinux-sync.lock /usr/local/bin/sync.sh
# Initial sync: retries every minute until the first sync succeeds, then marks itself done
* * * * * [ ! -f /var/lock/initial-sync-done ] && /usr/bin/flock -n /var/lock/archlinux-sync.lock /usr/local/bin/sync.sh && touch /var/lock/initial-sync-done


@@ -1,32 +0,0 @@
#!/bin/sh
set -e
TARGET_DIR="/archlinux"
MAX_RETRIES=3
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >&2
}
if [ -z "$MIRROR_URL" ]; then
log "ERROR: MIRROR_URL not set"
exit 1
fi
mkdir -p "$TARGET_DIR"
for i in $(seq 1 $MAX_RETRIES); do
log "Sync attempt $i/$MAX_RETRIES from $MIRROR_URL"
if rsync --timeout=7200 \
-rlptH --safe-links --delete-delay --delay-updates \
${RSYNC_EXTRA_OPTIONS:-} \
"$MIRROR_URL/" "$TARGET_DIR/"; then
log "Sync completed successfully"
exit 0
fi
[ $i -lt $MAX_RETRIES ] && sleep $((i * 300))
done
log "All sync attempts failed"
exit 1
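The retry-with-backoff loop in `sync.sh` can be exercised in isolation by substituting any command for rsync; here a stand-in function fails twice before succeeding (all names are illustrative):

```shell
#!/bin/sh
# Generic retry loop mirroring sync.sh: N attempts with a growing pause between them
MAX_RETRIES=3
attempt_log=$(mktemp)
flaky() {
    # Succeeds only on the third call, standing in for a transient rsync failure
    echo x >> "$attempt_log"
    [ "$(wc -l < "$attempt_log")" -ge 3 ]
}
for i in $(seq 1 $MAX_RETRIES); do
    if flaky; then
        echo "succeeded on attempt $i"   # → succeeded on attempt 3
        break
    fi
    [ "$i" -lt "$MAX_RETRIES" ] && sleep "$i"   # sync.sh waits $((i * 300)) seconds
done
```

The linear backoff (5, 10 minutes in the real script) gives an overloaded upstream mirror time to recover before the next attempt.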


@@ -1,14 +0,0 @@
# Mirror Configuration
MIRROR_URL=
SYNC_SCHEDULE=0 */6 * * *
RSYNC_EXTRA_OPTIONS=--info=progress2
# HTTP Configuration
HTTP_PORT=8080
NGINX_WORKERS=auto
# Path Configuration
ARCHLINUX_VOLUME_PATH=
# Basic Configuration
TZ=UTC


@@ -1,61 +0,0 @@
# Immich Stack
A high-performance photo and video management solution that makes organizing and sharing your media collection effortless.
## Services Overview
- **immich-server**: Main application server providing web interface and API
- **immich-machine-learning**: AI-powered features for face recognition, object detection, and smart search
- **redis**: In-memory data store for caching and session management
- **database**: PostgreSQL database with vector extensions for ML features
- **backup-files**: Automated file backups using resticprofile with AWS S3
- **backup-database**: Automated PostgreSQL database dumps
## Key Features
- **Smart Photo Management**: AI-powered face recognition, object detection, and duplicate detection
- **Mobile Apps**: Native iOS and Android apps with automatic photo backup
- **Video Support**: Hardware-accelerated video transcoding and streaming
- **Sharing**: Secure photo and album sharing with customizable permissions
- **Search**: Powerful search capabilities using AI and metadata
- **Multi-user**: Support for multiple users with individual libraries
- **Backup**: Automated backups to AWS S3 for both files and database
## Links & Documentation
- **Official Website**: https://immich.app/
- **GitHub Repository**: https://github.com/immich-app/immich
- **Documentation**: https://immich.app/docs/overview/introduction
- **Docker Hub**: https://hub.docker.com/r/immich-app/immich-server
- **Mobile Apps**:
- [iOS App Store](https://apps.apple.com/us/app/immich/id1613945652)
- [Android Play Store](https://play.google.com/store/apps/details?id=app.alextran.immich)
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `IMMICH_VERSION`: Docker image version (default: release)
- `UPLOAD_LOCATION`: Path for photo/video storage
- `DB_*`: PostgreSQL database credentials
- `TZ`: Timezone
- `TRAEFIK_DOMAIN`: Domain for web access
- `AWS_*`: AWS S3 credentials for backups
- `SERVICE_DATA_ROOT_PATH`: Base path for service data
### Network Access
- **Web Interface**: Accessible via Traefik at configured domain
- **Port**: 2283 (internal Docker port 3001)
- **Mobile Apps**: Connect using the configured domain
## Backup Strategy
**Database**: Hourly PostgreSQL dumps with 2-hour retention
**Files**: Automated S3 backups of uploaded photos/videos using resticprofile
## Dependencies
- External Traefik reverse proxy network
- AWS S3 bucket for backups


@@ -1,112 +0,0 @@
services:
immich-server:
image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
volumes:
- ${UPLOAD_LOCATION}:/usr/src/app/upload
- /etc/localtime:/etc/localtime:ro
environment:
UPLOAD_LOCATION: ${UPLOAD_LOCATION}
DB_PASSWORD: ${DB_PASSWORD}
DB_USERNAME: ${DB_USERNAME}
DB_DATABASE_NAME: ${DB_DATABASE_NAME}
REDIS_HOSTNAME: redis
TZ: ${TZ}
ports:
- 2283:3001
depends_on:
- redis
- database
restart: always
healthcheck:
disable: false
labels:
- "traefik.enable=true"
- "traefik.http.routers.immich.rule=Host(`${TRAEFIK_DOMAIN}`)"
- "traefik.http.routers.immich.entrypoints=websecure"
- "traefik.http.routers.immich.tls.certresolver=myresolver"
- "traefik.docker.network=traefik"
networks:
- immich
- traefik
immich-machine-learning:
container_name: immich_machine_learning
image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
volumes:
- model-cache:/cache
restart: always
healthcheck:
disable: false
networks:
- immich
redis:
image: registry.hub.docker.com/library/redis:6.2-alpine@sha256:84882e87b54734154586e5f8abd4dce69fe7311315e2fc6d67c29614c8de2672
healthcheck:
test: redis-cli ping || exit 1
restart: always
networks:
- immich
database:
image: ghcr.io/immich-app/postgres:14-vectorchord0.3.0-pgvectors0.2.0
environment:
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_DB: ${DB_DATABASE_NAME}
POSTGRES_INITDB_ARGS: '--data-checksums'
DB_STORAGE_TYPE: 'HDD'
volumes:
- ${SERVICE_DATA_ROOT_PATH}/database:/var/lib/postgresql/data
restart: always
networks:
- immich
backup-files:
image: creativeprojects/resticprofile:${RP_VERSION:-latest}
entrypoint: '/bin/sh'
hostname: immich-resticprofile
restart: always
command:
- '-c'
- 'resticprofile schedule --all && crond -f'
volumes:
- ${SERVICE_DATA_ROOT_PATH}/restic/resticprofile.yaml:/etc/resticprofile/profiles.yaml:ro
- ${SERVICE_DATA_ROOT_PATH}/restic/restic.key:/etc/resticprofile/key:ro
- ${SERVICE_DATA_ROOT_PATH}/db_dumps:/db_dumps:ro
- ${UPLOAD_LOCATION}:/photos:ro
environment:
TZ: ${TZ}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
backup-database:
container_name: immich_db_dumper
image: prodrigestivill/postgres-backup-local:14
restart: always
environment:
POSTGRES_HOST: database
POSTGRES_CLUSTER: 'TRUE'
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: ${DB_DATABASE_NAME}
BACKUP_KEEP_MINS: "120"
SCHEDULE: "50 * * * *"
POSTGRES_EXTRA_OPTS: '--clean --if-exists'
BACKUP_DIR: /db_dumps
volumes:
- ${SERVICE_DATA_ROOT_PATH}/db_dumps:/db_dumps
depends_on:
- database
networks:
- immich
networks:
immich:
name: immich
traefik:
name: traefik
external: true
volumes:
model-cache:


@@ -1,44 +0,0 @@
global:
scheduler: crond
default:
password-file: key
repository: s3:s3.eu-central-003.backblazeb2.com/BUCKET-NAME
initialize: true
force-inactive-lock: true
backup:
source: /photos
exclude-caches: true
one-file-system: true
schedule: "*:00,15,30,45"
schedule-permission: system
check-before: false
group-by: "paths"
forget:
schedule: "daily"
keep-hourly: 24
keep-daily: 7
keep-weekly: 4
    keep-monthly: 12
prune: true
database:
password-file: key
repository: s3:s3.eu-central-003.backblazeb2.com/BUCKET-NAME
initialize: true
force-inactive-lock: true
backup:
source: /db_dumps
exclude-caches: true
one-file-system: true
schedule: "hourly"
schedule-permission: system
check-before: false
group-by: "paths"
forget:
schedule: "daily"
keep-hourly: 24
keep-daily: 7
keep-weekly: 4
    keep-monthly: 12
prune: true


@@ -1,11 +0,0 @@
UPLOAD_LOCATION=
SERVICE_DATA_ROOT_PATH=
TRAEFIK_DOMAIN=
IMMICH_VERSION=release
DB_PASSWORD=
DB_USERNAME=immich
DB_DATABASE_NAME=immich
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
TZ=


@@ -1,123 +0,0 @@
# Media Stack (*arr Suite)
A complete media automation solution that automatically downloads, organizes, and manages your TV shows and movies. This stack combines the popular *arr applications with a torrent client for a fully automated media center.
## Services Overview
- **sonarr**: TV series management and automation with PostgreSQL backend
- **radarr**: Movie management and automation with PostgreSQL backend
- **prowlarr**: Indexer manager for torrent and usenet sources
- **qbittorrent**: BitTorrent client for downloading media
- **sonarr-db**: Dedicated PostgreSQL database for Sonarr
- **radarr-db**: Dedicated PostgreSQL database for Radarr
## Key Features
- **Automated Downloads**: Monitor RSS feeds and automatically download new episodes/movies
- **Quality Management**: Configurable quality profiles and upgrade automation
- **Release Profiles**: Advanced filtering and scoring of releases
- **Calendar Integration**: Track upcoming releases and air dates
- **Metadata Management**: Automatic metadata and artwork fetching
- **Notifications**: Webhooks and notifications for downloads and imports
- **API Integration**: Full REST APIs for external integrations
- **Multi-Profile**: Support for different quality and language profiles
## Application Details
### Sonarr
- **TV Series Management**: Monitors TV show RSS feeds and manages series libraries
- **Season Management**: Handles season packs and individual episodes
- **Episode Renaming**: Automatic file renaming with customizable patterns
### Radarr
- **Movie Management**: Monitors movie releases and manages movie libraries
- **Collection Support**: Handle movie collections and franchises
- **Release Monitoring**: Track theatrical, digital, and physical releases
### Prowlarr
- **Indexer Management**: Central management for all torrent/usenet indexers
- **Sync to Apps**: Automatically syncs indexers to Sonarr and Radarr
- **Statistics**: Download and indexer performance statistics
### qBittorrent
- **Torrent Client**: Handles all BitTorrent downloads for the media stack
- **Category Support**: Automatic categorization for different media types
- **API Access**: API protected by HTTP basic authentication for *arr application integration
## Links & Documentation
### Sonarr
- **Website**: https://sonarr.tv/
- **GitHub**: https://github.com/Sonarr/Sonarr
- **Documentation**: https://wiki.servarr.com/sonarr
- **Docker**: https://hub.docker.com/r/linuxserver/sonarr
### Radarr
- **Website**: https://radarr.video/
- **GitHub**: https://github.com/Radarr/Radarr
- **Documentation**: https://wiki.servarr.com/radarr
- **Docker**: https://hub.docker.com/r/linuxserver/radarr
### Prowlarr
- **Website**: https://prowlarr.com/
- **GitHub**: https://github.com/Prowlarr/Prowlarr
- **Documentation**: https://wiki.servarr.com/prowlarr
- **Docker**: https://hub.docker.com/r/linuxserver/prowlarr
### qBittorrent
- **Website**: https://www.qbittorrent.org/
- **GitHub**: https://github.com/qbittorrent/qBittorrent
- **Documentation**: https://github.com/qbittorrent/qBittorrent/wiki
- **Docker**: https://hub.docker.com/r/linuxserver/qbittorrent
### nzb360
- **Mobile App**: Remote management client for *arr applications
- **Website**: https://nzb360.com/
- **Android**: https://play.google.com/store/apps/details?id=com.kevinforeman.nzb360
- **iOS**: https://apps.apple.com/app/nzb360/id1116293427
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `PUID/PGID`: User and group IDs for file permissions
- `TZ`: Timezone
- `MEDIA_PATH`: Root path for media storage
- `SERVICE_DATA_ROOT_PATH`: Base path for application data
- `*_SERVICE_DOMAIN`: Traefik domains for each service
- `*_BASIC_AUTH`: HTTP basic authentication credentials
- `*_DB_*`: PostgreSQL database credentials for Sonarr/Radarr
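The `*_BASIC_AUTH` values take htpasswd-style `user:hash` pairs. One way to generate them, assuming `openssl` is available (`htpasswd -nb user pass` from apache2-utils works too; the fixed salt below exists only to make the example reproducible):

```shell
# Generate an APR1 (htpasswd-compatible) hash for Traefik's basicauth middleware
user=admin
hash=$(openssl passwd -apr1 -salt saltsalt changeme)
echo "${user}:${hash}"   # paste the whole line into e.g. SONARR_BASIC_AUTH
```

Note that env-file values can hold the hash as-is, but if a hash is ever embedded directly in a compose label, each `$` must be doubled (`$$`) so Compose does not treat it as variable interpolation.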
### Network Access
- **Sonarr Web UI**: Port 8989
- **Radarr Web UI**: Port 7878
- **Prowlarr Web UI**: Port 9696
- **qBittorrent Web UI**: Port 8114 (also accessible via Traefik with authentication)
### API Access
API endpoints are exposed through Traefik with HTTP basic authentication for secure external access. These APIs are configured for integration with **nzb360**, a mobile app for managing *arr applications and download clients remotely.
## Media Organization
### Directory Structure
```
/media/
├── downloads/ # qBittorrent download directory
├── tv/ # TV shows library (Sonarr)
├── movies/ # Movies library (Radarr)
└── ... # Additional media directories
```
### File Permissions
All services run with consistent PUID/PGID to ensure proper file access across the media path.
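The `UMASK=002` set in the compose file is what makes newly created media files group-writable across services. Its effect can be illustrated in a throwaway directory (filenames here are made up):

```shell
cd "$(mktemp -d)"
umask 002              # same value the *arr containers run with
touch episode.mkv
mkdir Season.01
stat -c '%a %n' episode.mkv Season.01
# → 664 episode.mkv    owner and group can read/write
# → 775 Season.01      group can traverse and add files
```

With the default `umask 022` instead, files would come out `644` and only the owning user could write, breaking cross-service imports.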
## Database Backend
Both Sonarr and Radarr use dedicated PostgreSQL databases for improved performance and reliability compared to SQLite.
## Dependencies
- External Traefik reverse proxy network for secure API access
- Shared media storage path accessible by all services
- Network connectivity between services for API communication


@@ -1,146 +0,0 @@
services:
# Torrent client
qbittorrent:
image: linuxserver/qbittorrent
restart: unless-stopped
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK=002
- WEBUI_PORT=8114
volumes:
- ${MEDIA_PATH}:/media
- ${SERVICE_DATA_ROOT_PATH}/qbittorrent/config:/config
ports:
- 8114:8114
- 23312:23312
networks:
- traefik
- sonarr
- radarr
labels:
- "traefik.enable=true"
- "traefik.http.routers.qbittorrent-service.rule=Host(`${QBITTORRENT_SERVICE_DOMAIN}`) && PathPrefix(`/api/v2`)"
- "traefik.http.routers.qbittorrent-service.service=qbittorrent-service"
- "traefik.http.services.qbittorrent-service.loadbalancer.server.port=8114"
- "traefik.http.routers.qbittorrent-service.entrypoints=websecure"
- "traefik.http.routers.qbittorrent-service.tls.certresolver=myresolver"
- "traefik.http.middlewares.qbittorrent-service-auth.basicauth.users=${QBITTORRENT_BASIC_AUTH}"
- "traefik.http.routers.qbittorrent-service.middlewares=qbittorrent-service-auth@docker"
- "traefik.docker.network=traefik"
# Tracker indexer
prowlarr:
image: linuxserver/prowlarr:latest
restart: unless-stopped
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/prowlarr/config:/config
ports:
- 9696:9696
networks:
- sonarr
- radarr
# Series manager
sonarr:
image: linuxserver/sonarr:latest
restart: unless-stopped
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK=002
volumes:
- ${SERVICE_DATA_ROOT_PATH}/sonarr/config:/config
- ${MEDIA_PATH}:/media
ports:
- 8989:8989
depends_on:
- sonarr-db
networks:
- traefik
- sonarr
labels:
- "traefik.enable=true"
- "traefik.http.routers.sonarr-service.rule=Host(`${SONARR_SERVICE_DOMAIN}`) && PathPrefix(`/api/v3`)"
- "traefik.http.routers.sonarr-service.entrypoints=websecure"
- "traefik.http.routers.sonarr-service.tls.certresolver=myresolver"
- "traefik.http.middlewares.sonarr-service-auth.basicauth.users=${SONARR_BASIC_AUTH}"
- "traefik.http.routers.sonarr-service.middlewares=sonarr-service-auth@docker"
- "traefik.docker.network=traefik"
sonarr-db:
image: postgres:14
restart: unless-stopped
user: ${PUID}:${PGID}
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK=002
- POSTGRES_DB=${SONARR_DB_NAME}
- POSTGRES_USER=${SONARR_DB_USER}
- POSTGRES_PASSWORD=${SONARR_DB_PASSWORD}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/sonarr/database:/var/lib/postgresql/data
networks:
- sonarr
# Movies manager
radarr:
image: linuxserver/radarr:latest
restart: unless-stopped
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK=002
volumes:
- ${SERVICE_DATA_ROOT_PATH}/radarr/config:/config
- ${MEDIA_PATH}:/media
ports:
- 7878:7878
depends_on:
- radarr-db
networks:
- traefik
- radarr
labels:
- "traefik.enable=true"
- "traefik.http.routers.radarr-service.rule=Host(`${RADARR_SERVICE_DOMAIN}`) && PathPrefix(`/api/v3`)"
- "traefik.http.routers.radarr-service.entrypoints=websecure"
- "traefik.http.routers.radarr-service.tls.certresolver=myresolver"
- "traefik.http.middlewares.radarr-service-auth.basicauth.users=${RADARR_BASIC_AUTH}"
- "traefik.http.routers.radarr-service.middlewares=radarr-service-auth@docker"
- "traefik.docker.network=traefik"
radarr-db:
image: postgres:14
restart: unless-stopped
user: ${PUID}:${PGID}
environment:
- PUID=${PUID}
- PGID=${PGID}
- TZ=${TZ}
- UMASK=002
- POSTGRES_DB=${RADARR_DB_NAME}
- POSTGRES_USER=${RADARR_DB_USER}
- POSTGRES_PASSWORD=${RADARR_DB_PASSWORD}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/radarr/database:/var/lib/postgresql/data
networks:
- radarr
networks:
radarr:
name: radarr
sonarr:
name: sonarr
traefik:
name: traefik
external: true


@@ -1,30 +0,0 @@
# Paths
MEDIA_PATH=
SERVICE_DATA_ROOT_PATH=
# User/Group IDs
PUID=1027
PGID=100
TZ=Europe/Kyiv
# Service Domains
QBITTORRENT_SERVICE_DOMAIN=
SONARR_SERVICE_DOMAIN=
SONARR_DOMAIN=
RADARR_SERVICE_DOMAIN=
# Database Configuration
# Sonarr Database
SONARR_DB_USER=sonarr
SONARR_DB_NAME=sonarr-main
SONARR_DB_PASSWORD=
# Radarr Database
RADARR_DB_USER=radarr
RADARR_DB_NAME=radarr-main
RADARR_DB_PASSWORD=
# Basic Auth for API access (format: user:hashedpassword)
QBITTORRENT_BASIC_AUTH=
SONARR_BASIC_AUTH=
RADARR_BASIC_AUTH=


@@ -1,75 +0,0 @@
# Paperless-ngx Stack
A document management system that transforms your physical documents into a searchable online archive. Scan, index, and archive all your documents with powerful OCR and AI-powered organization.
## Services Overview
- **webserver**: Main Paperless-ngx application with web interface and API
- **db**: PostgreSQL database for document metadata and full-text search
- **broker**: Redis message broker for background task processing
- **gotenberg**: Document conversion service for Office files and web pages
- **tika**: Text extraction service for various file formats
- **backup-files**: Automated file backups using resticprofile with AWS S3
- **backup-database**: Automated PostgreSQL database dumps
## Key Features
- **OCR Processing**: Automatic text extraction from scanned documents
- **AI Tagging**: Machine learning-powered document classification and tagging
- **Full-Text Search**: Fast searching across all document contents
- **Document Types**: Support for PDF, images, Office documents, emails
- **Web Interface**: Modern, responsive web UI for document management
- **REST API**: Full API for integration with other applications
- **Barcode Support**: QR code and barcode recognition for automated filing
- **Email Integration**: Import documents via email
- **Multi-user**: User management with permission controls
## Links & Documentation
- **Official Website**: https://paperless-ngx.com/
- **GitHub Repository**: https://github.com/paperless-ngx/paperless-ngx
- **Documentation**: https://docs.paperless-ngx.com/
- **Docker Hub**: https://hub.docker.com/r/paperlessngx/paperless-ngx
- **Demo**: https://demo.paperless-ngx.com/ (admin/demo)
- **Community**: https://github.com/paperless-ngx/paperless-ngx/discussions
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `PAPERLESS_*`: Application-specific settings (database, OCR languages, secret key)
- `TZ`: Timezone
- `TRAEFIK_DOMAIN`: Domain for web access
- `CONSUME_PATH`: Directory for automatic document consumption
- `AWS_*`: AWS S3 credentials for backups
- `SERVICE_DATA_ROOT_PATH`: Base path for service data
- `USERMAP_UID/USERMAP_GID`: User/group IDs for file permissions
### OCR Languages
Configure `PAPERLESS_OCR_LANGUAGE` and `PAPERLESS_OCR_LANGUAGES` for multi-language OCR support.
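For example, an English-plus-Ukrainian setup might look like this in `stack.env.real` (the exact language codes are illustrative; `PAPERLESS_OCR_LANGUAGE` takes Tesseract codes joined with `+`, while `PAPERLESS_OCR_LANGUAGES` lists extra packs to install):

```
# Languages applied to every document during OCR
PAPERLESS_OCR_LANGUAGE=eng+ukr
# Additional Tesseract language packs to download at container startup
PAPERLESS_OCR_LANGUAGES=ukr
```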
### Network Access
- **Web Interface**: Accessible via Traefik at configured domain
- **Document Consumption**: Place documents in the consume directory for automatic processing
## Document Processing Pipeline
1. **Intake**: Documents added via web upload, email, or consume folder
2. **OCR**: Text extraction using Tesseract with configured languages
3. **Text Extraction**: Additional text processing via Tika for office documents
4. **PDF Generation**: Gotenberg converts office documents to searchable PDFs
5. **Classification**: AI-powered tagging and document type detection
6. **Storage**: Organized storage with full-text search indexing
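Step 1 via the consume folder amounts to dropping a file into the directory mounted at `CONSUME_PATH`; the mechanics can be simulated locally (paths and filenames are illustrative):

```shell
cd "$(mktemp -d)"
CONSUME_PATH=./consume                 # stand-in for the real CONSUME_PATH mount
mkdir -p "$CONSUME_PATH"
printf '%%PDF-1.4 dummy\n' > scan.pdf  # placeholder for a real scanned document
cp scan.pdf "$CONSUME_PATH"/           # Paperless polls the folder, imports the
ls "$CONSUME_PATH"                     #   file, then removes it after ingestion
```

In production the consume directory is typically the output target of a network scanner, so documents flow into the pipeline without any manual upload.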
## Backup Strategy
**Database**: Hourly PostgreSQL dumps with 2-hour retention
**Files**: Automated S3 backups of documents and media using resticprofile
## Dependencies
- External Traefik reverse proxy network
- AWS S3 bucket for backups
- Consume directory for document intake


@@ -1,135 +0,0 @@
services:
broker:
image: docker.io/library/redis:8
restart: unless-stopped
environment:
- TZ=${TZ}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/redis:/data
networks:
- internal
db:
image: docker.io/library/postgres:17-bookworm
restart: unless-stopped
environment:
- POSTGRES_DB=${PAPERLESS_DBNAME}
- POSTGRES_USER=${PAPERLESS_DBUSER}
- POSTGRES_PASSWORD=${PAPERLESS_DBPASS}
- TZ=${TZ}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/database:/var/lib/postgresql/data
networks:
- internal
webserver:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
restart: unless-stopped
environment:
- PAPERLESS_REDIS=redis://broker:6379
- PAPERLESS_DBHOST=db
- PAPERLESS_DBUSER=${PAPERLESS_DBUSER}
- PAPERLESS_DBPASS=${PAPERLESS_DBPASS}
- PAPERLESS_DBNAME=${PAPERLESS_DBNAME}
- PAPERLESS_TIKA_ENABLED=1
- PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://gotenberg:3000
- PAPERLESS_TIKA_ENDPOINT=http://tika:9998
- PAPERLESS_OCR_LANGUAGE=${PAPERLESS_OCR_LANGUAGE}
- PAPERLESS_OCR_LANGUAGES=${PAPERLESS_OCR_LANGUAGES}
- PAPERLESS_SECRET_KEY=${PAPERLESS_SECRET_KEY}
- PAPERLESS_TIME_ZONE=${TZ}
- PAPERLESS_URL=https://${TRAEFIK_DOMAIN}
- PAPERLESS_CONSUMER_BARCODE_SCANNER=ZXING
- USERMAP_UID=${USERMAP_UID}
- USERMAP_GID=${USERMAP_GID}
- PAPERLESS_APPS=${PAPERLESS_APPS}
- PAPERLESS_SOCIALACCOUNT_PROVIDERS=${PAPERLESS_SOCIALACCOUNT_PROVIDERS}
- PAPERLESS_SOCIALACCOUNT_ALLOW_SIGNUPS=${PAPERLESS_SOCIALACCOUNT_ALLOW_SIGNUPS}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/data:/usr/src/paperless/data
- ${MEDIA_PATH:-${SERVICE_DATA_ROOT_PATH}/media}:/usr/src/paperless/media
- ${SERVICE_DATA_ROOT_PATH}/export:/usr/src/paperless/export
- ${CONSUME_PATH}:/usr/src/paperless/consume
depends_on:
- db
- broker
- gotenberg
- tika
networks:
- internal
- traefik
labels:
- "traefik.enable=true"
- "traefik.http.routers.paperless.rule=Host(`${TRAEFIK_DOMAIN}`)"
- "traefik.http.routers.paperless.entrypoints=websecure"
- "traefik.http.routers.paperless.tls.certresolver=myresolver"
- "traefik.docker.network=traefik"
- "traefik.http.services.paperless.loadbalancer.server.port=8000"
gotenberg:
image: docker.io/gotenberg/gotenberg:8.20
restart: unless-stopped
environment:
- TZ=${TZ}
command:
- "gotenberg"
- "--chromium-disable-javascript=true"
- "--chromium-allow-list=file:///tmp/.*"
networks:
- internal
tika:
image: docker.io/apache/tika:latest
restart: unless-stopped
environment:
- TZ=${TZ}
networks:
- internal
backup-files:
image: creativeprojects/resticprofile:${RP_VERSION:-latest}
restart: always
environment:
- TZ=${TZ}
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/restic/resticprofile.yaml:/etc/resticprofile/profiles.yaml:ro
- ${SERVICE_DATA_ROOT_PATH}/restic/restic.key:/etc/resticprofile/key:ro
- ${MEDIA_PATH:-${SERVICE_DATA_ROOT_PATH}/media}:/media:ro
- ${SERVICE_DATA_ROOT_PATH}/db_dumps:/db_dumps:ro
command:
- '-c'
- 'resticprofile schedule --all && crond -f'
entrypoint: "/bin/sh"
hostname: paperless-resticprofile
networks:
- internal
backup-database:
image: prodrigestivill/postgres-backup-local:17
restart: always
environment:
- POSTGRES_HOST=db
- POSTGRES_CLUSTER=TRUE
- POSTGRES_USER=${PAPERLESS_DBUSER}
- POSTGRES_PASSWORD=${PAPERLESS_DBPASS}
- POSTGRES_DB=${PAPERLESS_DBNAME}
- BACKUP_KEEP_MINS=120
- SCHEDULE=50 * * * *
- POSTGRES_EXTRA_OPTS=--clean --if-exists
- BACKUP_DIR=/db_dumps
- BACKUP_ON_START=TRUE
- TZ=${TZ}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/db_dumps:/db_dumps
depends_on:
- db
networks:
- internal
networks:
internal:
name: paperless
traefik:
external: true


@@ -1,44 +0,0 @@
global:
scheduler: crond
default:
password-file: key
repository: s3:s3.eu-central-003.backblazeb2.com/BUCKET-NAME
initialize: true
force-inactive-lock: true
backup:
source: /media
exclude-caches: true
one-file-system: true
schedule: "*:00,15,30,45"
schedule-permission: system
check-before: false
group-by: "paths"
forget:
schedule: "daily"
keep-hourly: 24
keep-daily: 7
keep-weekly: 4
    keep-monthly: 12
prune: true
database:
password-file: key
repository: s3:s3.eu-central-003.backblazeb2.com/BUCKET-NAME
initialize: true
force-inactive-lock: true
backup:
source: /db_dumps
exclude-caches: true
one-file-system: true
schedule: "hourly"
schedule-permission: system
check-before: false
group-by: "paths"
forget:
schedule: "daily"
keep-hourly: 24
keep-daily: 7
keep-weekly: 4
    keep-monthly: 12
prune: true


@@ -1,25 +0,0 @@
# Paths
SERVICE_DATA_ROOT_PATH=
CONSUME_PATH=
# Basic Configuration
TZ=Europe/Kyiv
# Traefik Domain
TRAEFIK_DOMAIN=
# Database Configuration
PAPERLESS_DBUSER=paperless
PAPERLESS_DBNAME=paperless
PAPERLESS_DBPASS=
# Paperless Configuration
PAPERLESS_OCR_LANGUAGE=
PAPERLESS_OCR_LANGUAGES=
PAPERLESS_SECRET_KEY=
USERMAP_UID=1027
USERMAP_GID=100
# Backup Configuration
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=


@@ -1,144 +0,0 @@
# Pi-hole Stack
A network-wide ad blocker that acts as a DNS sinkhole, protecting your entire network from ads, trackers, and malicious domains. Enhanced with DNSCrypt-proxy for encrypted upstream DNS queries.
## Services Overview
- **server**: Pi-hole DNS server with web interface for ad blocking and network monitoring
- **dnscrypt-proxy**: Encrypted DNS proxy for secure upstream DNS resolution
## Key Features
- **Network-wide Ad Blocking**: Blocks ads for all devices on your network
- **DNS Sinkhole**: Prevents requests to known advertising and tracking domains
- **Encrypted DNS**: DNSCrypt-proxy provides encrypted communication with upstream DNS servers
- **Web Interface**: Comprehensive dashboard for monitoring and configuration
- **Query Logging**: Detailed logs of all DNS queries with filtering capabilities
- **Whitelist/Blacklist**: Custom domain allow/block lists
- **Multiple Blocklists**: Support for various community-maintained blocklists
- **Network Monitoring**: Real-time network activity and top blocked domains
- **DHCP Server**: Optional DHCP functionality for network management
## Architecture
### DNS Flow
1. Client DNS requests → Pi-hole (port 53)
2. Pi-hole checks blocklists → Blocks or allows
3. Allowed requests → DNSCrypt-proxy (port 5353)
4. DNSCrypt-proxy → Encrypted upstream DNS servers
5. Response flows back through the chain
### Security Features
- **Encrypted Upstream**: DNSCrypt-proxy encrypts DNS queries to upstream servers
- **Privacy Protection**: Prevents DNS queries from being monitored
- **Malware Protection**: Blocks known malicious domains
## Links & Documentation
### Pi-hole
- **Official Website**: https://pi-hole.net/
- **GitHub Repository**: https://github.com/pi-hole/pi-hole
- **Documentation**: https://docs.pi-hole.net/
- **Docker Hub**: https://hub.docker.com/r/pihole/pihole
- **Community**: https://discourse.pi-hole.net/
### DNSCrypt-proxy
- **GitHub Repository**: https://github.com/DNSCrypt/dnscrypt-proxy
- **Documentation**: https://github.com/DNSCrypt/dnscrypt-proxy/wiki
- **Docker Image**: https://hub.docker.com/r/klutchell/dnscrypt-proxy
### Blocklists
- **StevenBlack's List**: https://github.com/StevenBlack/hosts
- **AdguardTeam Lists**: https://github.com/AdguardTeam/AdguardFilters
- **Firebog Lists**: https://firebog.net/
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `TZ`: Timezone for log timestamps
- `PIHOLE_WEBPASSWORD`: Password for Pi-hole web interface
- `PIHOLE_DNS_PORT`: DNS server port (default: 53)
- `PIHOLE_HTTP_PORT`: Web interface port (default: 80)
- `SERVICE_DATA_ROOT_PATH`: Base path for Pi-hole configuration data
### DNSCrypt Configuration
The DNSCrypt-proxy configuration file is located at:
`${SERVICE_DATA_ROOT_PATH}/dnscrypt/dnscrypt-proxy.toml`
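An illustrative minimal sketch of that file is below — the listen port must match the `dnscrypt-proxy#5353` upstream set in the compose file, while the chosen resolvers are an example, not a recommendation:

```toml
# Minimal dnscrypt-proxy.toml sketch; see the dnscrypt-proxy wiki for the
# full set of options. The port must match FTLCONF_dns_upstreams.
listen_addresses = ['0.0.0.0:5353']
server_names = ['cloudflare']
require_dnssec = true
require_nolog = true
```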
### Network Access
- **DNS Service**: Port 53 (TCP/UDP) - Configure as DNS server for network devices
- **Web Interface**: Port 80 (or configured `PIHOLE_HTTP_PORT`)
- **Admin Panel**: Access via `http://your-server-ip:port/admin`
## Setup Instructions
### 1. Network Configuration
Configure your router or devices to use Pi-hole as the DNS server:
- **Router**: Set DNS server to Pi-hole IP address
- **Individual Devices**: Configure network settings to use Pi-hole IP
### 2. Initial Setup
1. Access web interface at `http://your-server-ip:port/admin`
2. Login with configured password
3. Configure blocklists under "Group Management" → "Adlists"
4. Update the gravity database ("Tools" → "Update Gravity") to apply the new blocklists
### 3. Testing
- Visit `http://doubleclick.net` - should be blocked
- Check Pi-hole dashboard for blocked queries
- Verify DNS resolution is working for legitimate domains
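The blocked/allowed check above can be scripted: in Pi-hole's default blocking mode, sinkholed domains resolve to `0.0.0.0`, so a small helper can classify `dig` answers. A sketch (the server IP in the usage comment is a placeholder for your Pi-hole host):

```shell
# Classify an answer from Pi-hole: 0.0.0.0 means the domain was sinkholed
# (default NULL blocking mode). Against a live server this would be, e.g.:
#   is_blocked "$(dig +short @192.168.1.2 doubleclick.net | head -n 1)"
is_blocked() {
  if [ "$1" = "0.0.0.0" ]; then
    echo "blocked"
  else
    echo "allowed"
  fi
}

is_blocked "0.0.0.0"          # a sinkholed answer
is_blocked "142.250.185.78"   # a normal upstream answer
```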
## Blocklist Management
### Default Lists
Pi-hole comes with several default blocklists. Popular additions include:
- **StevenBlack Unified**: Comprehensive hosts file
- **AdGuard Base Filter**: AdGuard's main blocklist
- **EasyList**: Popular browser extension list
- **Malware Domain List**: Security-focused blocking
### Custom Lists
Add custom blocklists via:
- Web interface: Group Management → Adlists
- Manual file editing: Add domains to local blocklist files
## Advanced Features
### Conditional Forwarding
Configure local domain resolution for internal networks.
### DHCP Replacement
Pi-hole can replace your router's DHCP server, which also lets it show per-client hostnames in query logs.
### API Access
REST API available for external integrations and monitoring.
## Performance Considerations
- **Memory Usage**: Minimal resource requirements (~100MB RAM)
- **Storage**: Logs and configuration require modest disk space
- **Network Impact**: Negligible latency impact on DNS resolution
- **Query Volume**: Handles thousands of queries per minute efficiently
## Monitoring & Maintenance
### Dashboard Metrics
- Total queries processed
- Percentage of blocked queries
- Top blocked domains
- Query volume over time
- Client activity statistics
### Log Management
- Query logs with filtering options
- Long-term trend analysis
- Privacy-focused logging controls
## Dependencies
- Network access for initial blocklist downloads
- DNSCrypt-proxy configuration file
- Persistent storage for Pi-hole configuration and logs


@@ -1,39 +0,0 @@
services:
server:
image: pihole/pihole:2025.08.0
restart: unless-stopped
environment:
TZ: ${TZ}
FTLCONF_webserver_api_password: '${PIHOLE_WEBPASSWORD}'
FTLCONF_dns_listeningMode: all
FTLCONF_dns_upstreams: dnscrypt-proxy#5353
FTLCONF_misc_etc_dnsmasq_d: true
labels:
- com.centurylinklabs.watchtower.enable=false
volumes:
- ${SERVICE_DATA_ROOT_PATH}/config:/etc/pihole
- ${SERVICE_DATA_ROOT_PATH}/dnsmasq:/etc/dnsmasq.d
ports:
- "${PIHOLE_DNS_PORT}:53/tcp"
- "${PIHOLE_DNS_PORT}:53/udp"
- "${PIHOLE_HTTP_PORT}:80/tcp"
cap_add:
- SYS_NICE
depends_on:
- dnscrypt-proxy
networks:
- pihole
dnscrypt-proxy:
image: klutchell/dnscrypt-proxy:latest
restart: unless-stopped
environment:
TZ: ${TZ}
volumes:
- ${SERVICE_DATA_ROOT_PATH}/dnscrypt/dnscrypt-proxy.toml:/config/dnscrypt-proxy.toml
networks:
- pihole
networks:
pihole:
name: pihole


@@ -1,10 +0,0 @@
# Paths
SERVICE_DATA_ROOT_PATH=
# Basic Configuration
TZ=UTC
# Pi-hole Configuration
PIHOLE_WEBPASSWORD=
PIHOLE_DNS_PORT=53
PIHOLE_HTTP_PORT=80