15 articles tagged with "Open Source"
BookStack: Self-Hosted Wiki and Documentation Platform
Deploy BookStack for a self-hosted wiki and documentation. Content is organized as Shelves → Books → Chapters → Pages. Features WYSIWYG/Markdown editing, LDAP auth, diagrams, and a full API.
DocuSeal: Self-Hosted Document Signing — Free DocuSign Alternative
Deploy DocuSeal for self-hosted document signing and e-signatures. Create templates, send for signing, track status, and store documents on your server. Free DocuSign alternative.
Listmonk: Self-Hosted Newsletter and Mailing List Manager — Free Mailchimp Alternative
Deploy Listmonk for self-hosted newsletters and mailing lists. High-performance Go application with Docker, templating, analytics, and multi-list management. Free Mailchimp/ConvertKit alternative.
Mattermost: Self-Hosted Team Messaging — Open Source Slack Alternative
Deploy Mattermost for self-hosted team messaging. Channels, threads, file sharing, integrations, and plugins. Open-source Slack alternative with Docker deployment.
NocoDB: Self-Hosted Airtable Alternative — Turn Any Database into a Spreadsheet
Deploy NocoDB — the open-source Airtable alternative. Turn MySQL, PostgreSQL, or SQLite databases into smart spreadsheets with forms, views, automations, and REST APIs. Self-hosted with Docker.
Ollama: Run AI Language Models Locally — Setup, GPU Acceleration, and API Guide
Run LLMs like Llama 3, Mistral, Gemma, and Phi locally with Ollama. Guide covers installation, GPU acceleration, Docker deployment, REST API, and Open WebUI integration.
Ollama: Run AI Language Models Locally — Setup, GPU, and API (German edition)
Run LLMs like Llama 3, Mistral, Gemma, and Phi locally with Ollama. Installation guide, GPU acceleration, Docker deployment, REST API, and Open WebUI integration.
Ollama: Run AI Models Locally — Installation, GPU, and API (Spanish edition)
Run LLMs like Llama 3, Mistral, Gemma, and Phi locally with Ollama. Installation guide, GPU acceleration, Docker deployment, REST API, and Open WebUI integration.
Ollama: Run AI Models Locally — Installation, GPU, and API (French edition)
Run LLMs like Llama 3, Mistral, Gemma, and Phi locally with Ollama. Installation guide, GPU acceleration, Docker deployment, REST API, and Open WebUI integration.
Ollama: Run AI Models Locally — Installation, GPU, and API (Portuguese edition)
Run LLMs like Llama 3, Mistral, Gemma, and Phi locally with Ollama. Installation guide, GPU acceleration, Docker deployment, REST API, and Open WebUI integration.
Plausible Analytics: Self-Hosted, Privacy-First Google Analytics Alternative
Deploy Plausible Analytics for privacy-friendly web analytics. Lightweight (<1KB script), no cookies, GDPR-compliant. Covers Docker setup, goals, custom events, and comparison with GA4.
Stable Diffusion WebUI: Self-Hosted AI Image Generation — Free, Private, GPU-Accelerated
Run Stable Diffusion locally for AI image generation. Covers AUTOMATIC1111 WebUI, ComfyUI, model selection (SDXL, SD 1.5), LoRA fine-tuning, ControlNet, and GPU optimization.
Whisper: Self-Hosted Speech-to-Text with OpenAI's Model — Local, Private, Free
Run OpenAI's Whisper speech-to-text model locally for free, private audio transcription. Covers CLI, Docker, GPU acceleration, whisper.cpp for CPU, faster-whisper, and web UI options.
Whisper: Self-Hosted Speech-to-Text — Local, Private, and Free (Spanish edition)
Run OpenAI's Whisper model locally for free, private audio transcription. Covers CLI, Docker, GPU, whisper.cpp for CPU, and web interface options.
Nextcloud: Self-Hosted Cloud Storage, Calendar, and Collaboration Platform
Deploy Nextcloud with Docker for self-hosted cloud storage, calendar, contacts, and collaboration. Covers reverse proxy, Redis caching, OnlyOffice, Talk video calls, and storage optimization.