docs: add numerous README.md

2025-08-31 23:08:41 +03:00
parent bf08114b72
commit bbee335940
6 changed files with 536 additions and 0 deletions

docker/stacks/README.md

@@ -0,0 +1,27 @@
# Docker Stacks
Individual service stacks with comprehensive documentation. See the [main README](../../README.md) for architecture overview and deployment process.
## Available Stacks
| Stack | Description | Port(s) | Mobile/Remote Access |
|-------|-------------|---------|---------------------|
| [**Immich**](./immich/) | Photo and video management with AI | 2283 | iOS/Android apps |
| [**Paperless-ngx**](./paperless/) | Document management with OCR | Web UI (via Traefik) | Email integration |
| [**Media**](./media/) | *arr suite for media automation | 8989, 7878, 9696, 8114 | nzb360 mobile app |
| [**Pi-hole**](./pihole/) | Network-wide ad blocker | 53, 80 | Web dashboard |
| [**Arch Mirror**](./archmirror/) | Local Arch Linux package mirror | 8080 | pacman client |
## Quick Start
1. Choose a stack from the table above
2. Read the stack's README for setup instructions
3. Copy environment template: `cp stack.env stack.env.real`
4. Configure variables in `stack.env.real`
5. Deploy via Portainer using the docker-compose.yaml
Each stack directory contains:
- `docker-compose.yaml` - Service definitions
- `stack.env` - Environment template (tracked in git)
- `stack.env.real` - Actual values with secrets (gitignored)
- `README.md` - Detailed documentation
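The quick-start steps above can be sketched in the shell as follows. This is a hedged illustration: the example variables and the scratch directory are stand-ins, and the actual deployment (step 5) happens in Portainer, not on the command line.

```shell
# Sketch of steps 3-4, run from a chosen stack directory (a scratch
# directory is used here so the commands are self-contained):
cd "$(mktemp -d)"
printf 'TZ=UTC\nHTTP_PORT=8080\n' > stack.env   # stands in for the tracked template
cp stack.env stack.env.real                     # step 3: copy the template
sed -i 's/^TZ=.*/TZ=Europe\/Istanbul/' stack.env.real   # step 4: set real values
# Step 5: in Portainer, create the stack from docker-compose.yaml and
# load its variables from stack.env.real (which stays gitignored)
cat stack.env.real
```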


@@ -0,0 +1,106 @@
# Arch Linux Mirror Stack
A self-hosted Arch Linux package mirror that serves packages on the local network, cutting external bandwidth usage and speeding up downloads for nearby Arch Linux systems.
## Services Overview
- **rsync-mirror**: Automated synchronization service that mirrors Arch Linux packages from upstream
- **nginx-server**: HTTP server that serves the mirrored packages to local clients
## Key Features
- **Automated Syncing**: Scheduled rsync synchronization with upstream Arch Linux mirrors
- **Local Package Serving**: Fast HTTP access to packages for local Arch Linux installations
- **Bandwidth Optimization**: Reduces external bandwidth usage for multiple Arch Linux systems
- **Health Monitoring**: Built-in health checks for both sync and web services
- **Customizable Sync**: Configurable sync schedules and rsync options
## Architecture
### Sync Process
1. **rsync-mirror** container runs scheduled sync jobs using supercronic
2. Downloads packages from configured upstream mirror
3. Stores packages in shared volume
4. **nginx-server** serves packages via HTTP
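As an illustration, the supercronic crontab inside the rsync-mirror container might contain an entry like the one below. The script path, rsync flags, and target directory are assumptions; the real schedule and upstream come from `SYNC_SCHEDULE` and `MIRROR_URL` in the environment.

```
# every 4 hours: mirror the upstream repo into the shared volume
0 */4 * * * rsync -rtlH --delete-after --delay-updates --safe-links \
    "$MIRROR_URL" /srv/archlinux/
```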
### Storage
- Shared volume between containers for package storage
- Read-only access for nginx service ensures data integrity
- Configurable storage path for flexible deployment
## Links & Documentation
### Arch Linux
- **Website**: https://archlinux.org/
- **Package Database**: https://archlinux.org/packages/
- **Mirror Status**: https://archlinux.org/mirrors/status/
- **Mirror Setup Guide**: https://wiki.archlinux.org/title/DeveloperWiki:NewMirrors
### Container Technologies
- **Docker Compose**: https://docs.docker.com/compose/
- **Nginx**: https://nginx.org/en/docs/
- **Supercronic**: https://github.com/aptible/supercronic
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `MIRROR_URL`: Upstream Arch Linux mirror URL for rsync
- `SYNC_SCHEDULE`: Cron schedule for sync operations (e.g., "0 */4 * * *" for every 4 hours)
- `TZ`: Timezone for scheduling
- `RSYNC_EXTRA_OPTIONS`: Additional rsync options for fine-tuning
- `ARCHLINUX_VOLUME_PATH`: Local path for package storage
- `HTTP_PORT`: HTTP port for package access (default: 8080)
- `NGINX_WORKERS`: Number of nginx worker processes
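A hedged example of what `stack.env.real` might contain (all values are placeholders):

```
MIRROR_URL=rsync://mirror.example.org/archlinux/
SYNC_SCHEDULE=0 */4 * * *
TZ=Europe/Istanbul
RSYNC_EXTRA_OPTIONS=--bwlimit=20M
ARCHLINUX_VOLUME_PATH=/srv/archlinux
HTTP_PORT=8080
NGINX_WORKERS=2
```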
### Network Access
- **HTTP Server**: Accessible on configured port (default: 8080)
- **Health Checks**: Both services include health monitoring
## Usage
### Client Configuration
Configure Arch Linux clients to use the local mirror by editing `/etc/pacman.d/mirrorlist`:
```
## Local mirror
Server = http://your-server-ip:8080/archlinux/$repo/os/$arch
## Fallback mirrors
# ... other mirrors
```
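Before switching clients over, it is worth verifying that the mirror is actually serving; a quick check (host and port are placeholders) could be:

```shell
# A 200 response for a repo database file means nginx is serving the volume
curl -sI http://your-server-ip:8080/archlinux/core/os/x86_64/core.db | head -n 1
# On an Arch client, refresh all package databases against the new mirrorlist
sudo pacman -Syy
```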
### Sync Monitoring
- Monitor sync container logs for sync status and errors
- Health checks ensure services are running properly
- Nginx access logs show package download activity
## Storage Requirements
- **Full Mirror**: ~60-80GB for complete Arch Linux repository
- **Growth**: Expect ~1-2GB growth per month
- **I/O**: SSD storage recommended for better performance during sync operations
## Sync Strategy
### Recommended Schedule
- **Frequent Updates**: Every 4-6 hours for active development
- **Conservative**: Daily syncs for stable environments
- **Bandwidth Considerations**: Schedule during low-usage periods
### Upstream Mirror Selection
Choose geographically close, reliable mirrors from the [official mirror list](https://archlinux.org/mirrorlist/).
## Custom Builds
The stack builds custom images for both the rsync and nginx services, allowing for:
- Smaller, purpose-built images
- Service-specific configuration
- Custom sync scripts and monitoring
## Dependencies
- Docker and Docker Compose
- Sufficient storage for package mirror
- Network access to upstream Arch Linux mirrors


@@ -0,0 +1,61 @@
# Immich Stack
A high-performance photo and video management solution that makes organizing and sharing your media collection effortless.
## Services Overview
- **immich-server**: Main application server providing web interface and API
- **immich-machine-learning**: AI-powered features for face recognition, object detection, and smart search
- **redis**: In-memory data store for caching and session management
- **database**: PostgreSQL database with vector extensions for ML features
- **backup-files**: Automated file backups using resticprofile with AWS S3
- **backup-database**: Automated PostgreSQL database dumps
## Key Features
- **Smart Photo Management**: AI-powered face recognition, object detection, and duplicate detection
- **Mobile Apps**: Native iOS and Android apps with automatic photo backup
- **Video Support**: Hardware-accelerated video transcoding and streaming
- **Sharing**: Secure photo and album sharing with customizable permissions
- **Search**: Powerful search capabilities using AI and metadata
- **Multi-user**: Support for multiple users with individual libraries
- **Backup**: Automated backups to AWS S3 for both files and database
## Links & Documentation
- **Official Website**: https://immich.app/
- **GitHub Repository**: https://github.com/immich-app/immich
- **Documentation**: https://immich.app/docs/overview/introduction
- **Docker Hub**: https://hub.docker.com/r/immich-app/immich-server
- **Mobile Apps**:
- [iOS App Store](https://apps.apple.com/us/app/immich/id1613945652)
- [Android Play Store](https://play.google.com/store/apps/details?id=app.alextran.immich)
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `IMMICH_VERSION`: Docker image version (default: release)
- `UPLOAD_LOCATION`: Path for photo/video storage
- `DB_*`: PostgreSQL database credentials
- `TZ`: Timezone
- `TRAEFIK_DOMAIN`: Domain for web access
- `AWS_*`: AWS S3 credentials for backups
- `SERVICE_DATA_ROOT_PATH`: Base path for service data
### Network Access
- **Web Interface**: Accessible via Traefik at configured domain
- **Port**: 2283 (internal Docker port 3001)
- **Mobile Apps**: Connect using the configured domain
## Backup Strategy
- **Database**: Hourly PostgreSQL dumps with 2-hour retention
- **Files**: Automated S3 backups of uploaded photos/videos using resticprofile
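A minimal sketch of what the hourly database backup job could look like. The real command lives in the stack's compose file; the host, container, and path names here are assumptions.

```shell
# Dump the Immich database and compress it under a timestamped name
pg_dump --host database --username "$DB_USERNAME" "$DB_DATABASE_NAME" \
  | gzip > "/backups/immich-$(date +%Y%m%d-%H%M).sql.gz"
# Enforce the 2-hour retention window by pruning older dumps
find /backups -name 'immich-*.sql.gz' -mmin +120 -delete
```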
## Dependencies
- External Traefik reverse proxy network
- AWS S3 bucket for backups


@@ -0,0 +1,123 @@
# Media Stack (*arr Suite)
A complete media automation solution that automatically downloads, organizes, and manages your TV shows and movies. This stack combines the popular *arr applications with a torrent client for a fully automated media center.
## Services Overview
- **sonarr**: TV series management and automation with PostgreSQL backend
- **radarr**: Movie management and automation with PostgreSQL backend
- **prowlarr**: Indexer manager for torrent and usenet sources
- **qbittorrent**: BitTorrent client for downloading media
- **sonarr-db**: Dedicated PostgreSQL database for Sonarr
- **radarr-db**: Dedicated PostgreSQL database for Radarr
## Key Features
- **Automated Downloads**: Monitor RSS feeds and automatically download new episodes/movies
- **Quality Management**: Configurable quality profiles and upgrade automation
- **Release Profiles**: Advanced filtering and scoring of releases
- **Calendar Integration**: Track upcoming releases and air dates
- **Metadata Management**: Automatic metadata and artwork fetching
- **Notifications**: Webhooks and notifications for downloads and imports
- **API Integration**: Full REST APIs for external integrations
- **Multi-Profile**: Support for different quality and language profiles
## Application Details
### Sonarr
- **TV Series Management**: Monitors TV show RSS feeds and manages series libraries
- **Season Management**: Handles season packs and individual episodes
- **Episode Renaming**: Automatic file renaming with customizable patterns
### Radarr
- **Movie Management**: Monitors movie releases and manages movie libraries
- **Collection Support**: Handle movie collections and franchises
- **Release Monitoring**: Track theatrical, digital, and physical releases
### Prowlarr
- **Indexer Management**: Central management for all torrent/usenet indexers
- **Sync to Apps**: Automatically syncs indexers to Sonarr and Radarr
- **Statistics**: Download and indexer performance statistics
### qBittorrent
- **Torrent Client**: Handles all BitTorrent downloads for the media stack
- **Category Support**: Automatic categorization for different media types
- **API Access**: API protected by HTTP basic authentication for *arr application integration
## Links & Documentation
### Sonarr
- **Website**: https://sonarr.tv/
- **GitHub**: https://github.com/Sonarr/Sonarr
- **Documentation**: https://wiki.servarr.com/sonarr
- **Docker**: https://hub.docker.com/r/linuxserver/sonarr
### Radarr
- **Website**: https://radarr.video/
- **GitHub**: https://github.com/Radarr/Radarr
- **Documentation**: https://wiki.servarr.com/radarr
- **Docker**: https://hub.docker.com/r/linuxserver/radarr
### Prowlarr
- **Website**: https://prowlarr.com/
- **GitHub**: https://github.com/Prowlarr/Prowlarr
- **Documentation**: https://wiki.servarr.com/prowlarr
- **Docker**: https://hub.docker.com/r/linuxserver/prowlarr
### qBittorrent
- **Website**: https://www.qbittorrent.org/
- **GitHub**: https://github.com/qbittorrent/qBittorrent
- **Documentation**: https://github.com/qbittorrent/qBittorrent/wiki
- **Docker**: https://hub.docker.com/r/linuxserver/qbittorrent
### nzb360
- **Mobile App**: Remote management client for *arr applications
- **Website**: https://nzb360.com/
- **Android**: https://play.google.com/store/apps/details?id=com.kevinforeman.nzb360
- **iOS**: https://apps.apple.com/app/nzb360/id1116293427
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `PUID/PGID`: User and group IDs for file permissions
- `TZ`: Timezone
- `MEDIA_PATH`: Root path for media storage
- `SERVICE_DATA_ROOT_PATH`: Base path for application data
- `*_SERVICE_DOMAIN`: Traefik domains for each service
- `*_BASIC_AUTH`: HTTP basic authentication credentials
- `*_DB_*`: PostgreSQL database credentials for Sonarr/Radarr
### Network Access
- **Sonarr Web UI**: Port 8989
- **Radarr Web UI**: Port 7878
- **Prowlarr Web UI**: Port 9696
- **qBittorrent Web UI**: Port 8114 (also accessible via Traefik with authentication)
### API Access
API endpoints are exposed through Traefik with HTTP basic authentication for secure external access. These APIs are configured for integration with **nzb360**, a mobile app for managing *arr applications and download clients remotely.
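For example, nzb360 (or any API client) reaches Sonarr through Traefik roughly like this. The domain, credentials, and API key are placeholders; `/api/v3/system/status` is a standard Sonarr v3 endpoint.

```shell
# Basic auth satisfies Traefik; the apikey parameter satisfies Sonarr itself
curl -s -u "user:password" \
  "https://sonarr.example.com/api/v3/system/status?apikey=$SONARR_API_KEY"
```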
## Media Organization
### Directory Structure
```
/media/
├── downloads/ # qBittorrent download directory
├── tv/ # TV shows library (Sonarr)
├── movies/ # Movies library (Radarr)
└── ... # Additional media directories
```
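The layout above can be created once up front; paths and ownership come from `stack.env.real`, and the values shown here are placeholders:

```shell
MEDIA_PATH=/media
mkdir -p "$MEDIA_PATH"/{downloads,tv,movies}
# match the PUID/PGID the containers run as
sudo chown -R 1000:1000 "$MEDIA_PATH"
```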
### File Permissions
All services run with consistent PUID/PGID to ensure proper file access across the media path.
## Database Backend
Both Sonarr and Radarr use dedicated PostgreSQL databases for improved performance and reliability compared to SQLite.
## Dependencies
- External Traefik reverse proxy network for secure API access
- Shared media storage path accessible by all services
- Network connectivity between services for API communication


@@ -0,0 +1,75 @@
# Paperless-ngx Stack
A document management system that transforms your physical documents into a searchable online archive. Scan, index, and archive all your documents with powerful OCR and AI-powered organization.
## Services Overview
- **webserver**: Main Paperless-ngx application with web interface and API
- **db**: PostgreSQL database for document metadata and full-text search
- **broker**: Redis message broker for background task processing
- **gotenberg**: Document conversion service for Office files and web pages
- **tika**: Text extraction service for various file formats
- **backup-files**: Automated file backups using resticprofile with AWS S3
- **backup-database**: Automated PostgreSQL database dumps
## Key Features
- **OCR Processing**: Automatic text extraction from scanned documents
- **AI Tagging**: Machine learning-powered document classification and tagging
- **Full-Text Search**: Fast searching across all document contents
- **Document Types**: Support for PDF, images, Office documents, emails
- **Web Interface**: Modern, responsive web UI for document management
- **REST API**: Full API for integration with other applications
- **Barcode Support**: QR code and barcode recognition for automated filing
- **Email Integration**: Import documents via email
- **Multi-user**: User management with permission controls
## Links & Documentation
- **Official Website**: https://paperless-ngx.com/
- **GitHub Repository**: https://github.com/paperless-ngx/paperless-ngx
- **Documentation**: https://docs.paperless-ngx.com/
- **Docker Hub**: https://hub.docker.com/r/paperlessngx/paperless-ngx
- **Demo**: https://demo.paperless-ngx.com/ (admin/demo)
- **Community**: https://github.com/paperless-ngx/paperless-ngx/discussions
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `PAPERLESS_*`: Application-specific settings (database, OCR languages, secret key)
- `TZ`: Timezone
- `TRAEFIK_DOMAIN`: Domain for web access
- `CONSUME_PATH`: Directory for automatic document consumption
- `AWS_*`: AWS S3 credentials for backups
- `SERVICE_DATA_ROOT_PATH`: Base path for service data
- `USERMAP_UID/USERMAP_GID`: User/group IDs for file permissions
### OCR Languages
Configure `PAPERLESS_OCR_LANGUAGE` and `PAPERLESS_OCR_LANGUAGES` for multi-language OCR support.
### Network Access
- **Web Interface**: Accessible via Traefik at configured domain
- **Document Consumption**: Place documents in the consume directory for automatic processing
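Dropping a file into the consume directory is all it takes to trigger the processing pipeline; for example (the source path and container name are assumptions):

```shell
# Copy a scan into the watched folder; Paperless picks it up automatically
cp ~/scans/invoice-2024.pdf "$CONSUME_PATH/"
# Follow the consumer's progress in the logs (matches consume/consumer lines)
docker logs -f paperless-webserver 2>&1 | grep -i consum
```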
## Document Processing Pipeline
1. **Intake**: Documents added via web upload, email, or consume folder
2. **OCR**: Text extraction using Tesseract with configured languages
3. **Text Extraction**: Additional text processing via Tika for office documents
4. **PDF Generation**: Gotenberg converts office documents to searchable PDFs
5. **Classification**: AI-powered tagging and document type detection
6. **Storage**: Organized storage with full-text search indexing
## Backup Strategy
- **Database**: Hourly PostgreSQL dumps with 2-hour retention
- **Files**: Automated S3 backups of documents and media using resticprofile
## Dependencies
- External Traefik reverse proxy network
- AWS S3 bucket for backups
- Consume directory for document intake


@@ -0,0 +1,144 @@
# Pi-hole Stack
A network-wide ad blocker that acts as a DNS sinkhole, protecting your entire network from ads, trackers, and malicious domains. Enhanced with DNSCrypt-proxy for encrypted upstream DNS queries.
## Services Overview
- **server**: Pi-hole DNS server with web interface for ad blocking and network monitoring
- **dnscrypt-proxy**: Encrypted DNS proxy for secure upstream DNS resolution
## Key Features
- **Network-wide Ad Blocking**: Blocks ads for all devices on your network
- **DNS Sinkhole**: Prevents requests to known advertising and tracking domains
- **Encrypted DNS**: DNSCrypt-proxy provides encrypted communication with upstream DNS servers
- **Web Interface**: Comprehensive dashboard for monitoring and configuration
- **Query Logging**: Detailed logs of all DNS queries with filtering capabilities
- **Whitelist/Blacklist**: Custom domain allow/block lists
- **Multiple Blocklists**: Support for various community-maintained blocklists
- **Network Monitoring**: Real-time network activity and top blocked domains
- **DHCP Server**: Optional DHCP functionality for network management
## Architecture
### DNS Flow
1. Client DNS requests → Pi-hole (port 53)
2. Pi-hole checks blocklists → Blocks or allows
3. Allowed requests → DNSCrypt-proxy (port 5353)
4. DNSCrypt-proxy → Encrypted upstream DNS servers
5. Response flows back through the chain
### Security Features
- **Encrypted Upstream**: DNSCrypt-proxy encrypts DNS queries to upstream servers
- **Privacy Protection**: Prevents DNS queries from being monitored
- **Malware Protection**: Blocks known malicious domains
## Links & Documentation
### Pi-hole
- **Official Website**: https://pi-hole.net/
- **GitHub Repository**: https://github.com/pi-hole/pi-hole
- **Documentation**: https://docs.pi-hole.net/
- **Docker Hub**: https://hub.docker.com/r/pihole/pihole
- **Community**: https://discourse.pi-hole.net/
### DNSCrypt-proxy
- **GitHub Repository**: https://github.com/DNSCrypt/dnscrypt-proxy
- **Documentation**: https://github.com/DNSCrypt/dnscrypt-proxy/wiki
- **Docker Image**: https://hub.docker.com/r/klutchell/dnscrypt-proxy
### Blocklists
- **StevenBlack's List**: https://github.com/StevenBlack/hosts
- **AdguardTeam Lists**: https://github.com/AdguardTeam/AdguardFilters
- **Firebog Lists**: https://firebog.net/
## Configuration
### Environment Variables
Copy `stack.env` to `stack.env.real` and configure:
- `TZ`: Timezone for log timestamps
- `PIHOLE_WEBPASSWORD`: Password for Pi-hole web interface
- `PIHOLE_DNS_PORT`: DNS server port (default: 53)
- `PIHOLE_HTTP_PORT`: Web interface port (default: 80)
- `SERVICE_DATA_ROOT_PATH`: Base path for Pi-hole configuration data
### DNSCrypt Configuration
The DNSCrypt-proxy configuration file is located at:
`${SERVICE_DATA_ROOT_PATH}/dnscrypt/dnscrypt-proxy.toml`
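A minimal `dnscrypt-proxy.toml` for this setup might look like the following sketch. The resolver names and listen port are assumptions (Pi-hole forwards to port 5353 per the DNS flow above); consult the DNSCrypt-proxy wiki for the full option list.

```
listen_addresses = ['0.0.0.0:5353']
server_names = ['cloudflare', 'quad9-dnscrypt-ip4-filter-pri']
require_dnssec = true
require_nolog = true
require_nofilter = false
cache = true
```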
### Network Access
- **DNS Service**: Port 53 (TCP/UDP) - Configure as DNS server for network devices
- **Web Interface**: Port 80 (or configured `PIHOLE_HTTP_PORT`)
- **Admin Panel**: Access via `http://your-server-ip:port/admin`
## Setup Instructions
### 1. Network Configuration
Configure your router or devices to use Pi-hole as the DNS server:
- **Router**: Set DNS server to Pi-hole IP address
- **Individual Devices**: Configure network settings to use Pi-hole IP
### 2. Initial Setup
1. Access web interface at `http://your-server-ip:port/admin`
2. Login with configured password
3. Configure blocklists under "Group Management" → "Adlists"
4. Update gravity database to apply blocklists
### 3. Testing
- Visit `http://doubleclick.net` - should be blocked
- Check Pi-hole dashboard for blocked queries
- Verify DNS resolution is working for legitimate domains
## Blocklist Management
### Default Lists
Pi-hole comes with several default blocklists. Popular additions include:
- **StevenBlack Unified**: Comprehensive hosts file
- **AdGuard Base Filter**: AdGuard's main blocklist
- **EasyList**: Popular browser extension list
- **Malware Domain List**: Security-focused blocking
### Custom Lists
Add custom blocklists via:
- Web interface: Group Management → Adlists
- Manual file editing: Add domains to local blocklist files
## Advanced Features
### Conditional Forwarding
Configure local domain resolution for internal networks.
### DHCP Replacement
Pi-hole can replace your router's DHCP server for better integration.
### API Access
REST API available for external integrations and monitoring.
## Performance Considerations
- **Memory Usage**: Minimal resource requirements (~100MB RAM)
- **Storage**: Logs and configuration require modest disk space
- **Network Impact**: Negligible latency impact on DNS resolution
- **Query Volume**: Handles thousands of queries per minute efficiently
## Monitoring & Maintenance
### Dashboard Metrics
- Total queries processed
- Percentage of blocked queries
- Top blocked domains
- Query volume over time
- Client activity statistics
### Log Management
- Query logs with filtering options
- Long-term trend analysis
- Privacy-focused logging controls
## Dependencies
- Network access for initial blocklist downloads
- DNSCrypt-proxy configuration file
- Persistent storage for Pi-hole configuration and logs