Community Apps

Browse our large and growing catalog of applications to run on your Unraid server.

Download the Plugin  |  Become a Community Developer


Community-built

All the applications you love—built and maintained by a community member who understands what you need on Unraid. Love a particular app or plugin? Donate directly to the developer to support their work.

Created by a Legend

Andrew (aka Squid) has worked tirelessly to build and enhance the experience of Community Apps for users like you.

Moderated and Vetted

Moderators ensure that apps listed in the store offer a safe, compatible, and consistent experience. 


AUTOMATIC1111-Stable-Diffusion-Web-UI's Icon

AUTOMATIC1111-Stable-Diffusion-Web-UI

Other, Productivity, Tools / Utilities, AI

A web interface for Stable Diffusion. Integrates with Open WebUI: https://docs.openwebui.com/tutorial/images/#configuring-open-webui — add custom models: https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage#custom-models

big-AGI's Icon

Generative AI suite powered by state-of-the-art models and providing advanced AI/AGI functions. It features AI personas, AGI functions, multi-model chats, text-to-image, voice, response streaming, code highlighting and execution, PDF import, presets for developers, and more.

CodeProject.AI_Server's Icon

CodeProject.AI_Server

AI

CodeProject.AI Server is a locally installed, self-hosted, fast, free, and open-source Artificial Intelligence server for any platform and any language. No off-device or out-of-network data transfer, no messing around with dependencies, and usable from any platform and any language. Runs as a Windows Service or a Docker container. Installation may take some time, as the image occupies a few GB of space. One example among many: it can easily be integrated into AgentDVR Video Surveillance Software for face or object recognition.

CodeProject.AI_ServerGPU's Icon

CodeProject.AI_ServerGPU

AI

CodeProject.AI Server is a locally installed, self-hosted, fast, free, and open-source Artificial Intelligence server for any platform and any language. No off-device or out-of-network data transfer, no messing around with dependencies, and usable from any platform and any language. Runs as a Windows Service or a Docker container. The Docker GPU version is specific to NVIDIA CUDA-enabled cards with compute capability >= 6.0. Installation may take some time, as the image occupies a few GB of space. One example among many: it can easily be integrated into AgentDVR Video Surveillance Software for face or object recognition.

ComfyUI-Nvidia-Docker's Icon

ComfyUI-Nvidia-Docker

Media Applications / Photos, AI

ComfyUI WebUI Dockerfile with Nvidia support, installing ComfyUI from GitHub. Also installs ComfyUI Manager to simplify integration of additional custom nodes.

The "run directory" contains HF, ComfyUI, and venv. The "basedir" contains input, output, and custom_nodes. All of those folders are created with the WANTED_UID and WANTED_GID parameters (by default Unraid's 99:100), allowing the end user to place their checkpoints, unet, lora, and other required models directly into the folders. The container ships with no weights/models; you need to obtain those and install them in the proper directories under the mount you have selected for the "run directory". Output files are placed in the output folder within the "basedir" directory.

Please see https://github.com/mmartial/ComfyUI-Nvidia-Docker for further details:
- See details about the "latest" tag.
- See details about "First time use" (and the "bottle" workflow), noting that Unraid's default YOUR_BASE_DIRECTORY should be /mnt/user/appdata/comfyui-nvidia/basedir.

Notes:
- The container requires the Nvidia Driver plugin to be installed on your Unraid server. That plugin will usually give you access to a CUDA driver that supports the latest tag available for this container.
- This is a WebUI for the ComfyUI Stable Diffusion tool, with a Docker image usually over 4 GB.
- The container takes a while to start up, as it needs to download the ComfyUI Stable Diffusion tool and install its dependencies, usually adding another 5 GB of downloaded content in the venv folder.
- The original Docker image is from Nvidia and is therefore governed by the NVIDIA Deep Learning Container License.
- Multiple versions of the base image are available; please select the one that best fits your needs. The tag name is the Ubuntu version followed by the CUDA version. "latest" points to the most recent combination, as it should include the most recent software updates.

For the complete list of supported versions, please see the GitHub repository.
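As a rough sketch of how the template settings above fit together when expressed as a plain docker command (the image tag, host port, and container-side mount points are illustrative assumptions, not taken from the listing):

```shell
# Hypothetical manual equivalent of the Unraid template.
# Requires the Nvidia Driver plugin for --gpus to work.
docker run -d --name comfyui-nvidia \
  --gpus all \
  -p 8188:8188 \
  -e WANTED_UID=99 \
  -e WANTED_GID=100 \
  -v /mnt/user/appdata/comfyui-nvidia/run:/comfy/mnt \
  -v /mnt/user/appdata/comfyui-nvidia/basedir:/basedir \
  mmartial/comfyui-nvidia-docker:latest
```

The two -v mappings correspond to the "run directory" (HF, ComfyUI, venv) and the "basedir" (input, output, custom_nodes); place your models under the run-directory mount as described above.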

DOODS's Icon

DOODS (Dedicated Open Object Detection Service) is a REST service that detects objects in images or video streams. It also supports GPUs and EdgeTPU hardware acceleration. For Nvidia GPU support, add "--gpus all" to the Extra Parameters field under Advanced.

Flowise's Icon

Flowise

Productivity, AI

Open source low-code tool for developers to build customized LLM orchestration flow and AI agents.

gpt-subtrans-openai's Icon

gpt-subtrans-openai

Tools / Utilities, AI

Container with gpt-subtrans for translating .srt subtitle files into another language using OpenAI ChatGPT. Source of gpt-subtrans: https://github.com/machinewrapped/gpt-subtrans — usage (on-demand run): docker exec -it gpt-subtrans-openai translate -o /subtitles/output.srt /subtitles/original.srt
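The on-demand command from the listing, spread over two lines for readability (the container name and the /subtitles mount come from the listing; the file names are placeholders for your own subtitles):

```shell
# Translate one subtitle file inside the already-running container.
docker exec -it gpt-subtrans-openai \
  translate -o /subtitles/output.srt /subtitles/original.srt
```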

gpt-subtrans-webui's Icon

gpt-subtrans-webui

Tools / Utilities, AI

WebUI for gpt-subtrans, for translating subtitles using OpenAI ChatGPT. Project source: https://github.com/machinewrapped/gpt-subtrans

Invoke-AI's Icon

An implementation of Stable Diffusion, the open-source text-to-image and image-to-image generator, providing a streamlined process with various new features and options to aid image generation. **Nvidia GPU use:** use the Unraid Nvidia plugin to install a version of Unraid with the Nvidia drivers, then add **--runtime=nvidia --gpus=all** to "extra parameters" (switch on advanced view). **AMD GPU use:** add "/dev/kfd" and "/dev/dri" each as a Device and add the required variables: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#accessing-gpus-in-containers

LibreChat's Icon

LibreChat brings together the future of assistant AIs with the technology of OpenAI's ChatGPT. Keeping the original styling, LibreChat lets you integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates, and plugins. https://docs.librechat.ai/

lobe-chat's Icon

lobe-chat

Network Services / Web, AI

LobeChat is an open-source, extensible (Function Calling), high-performance chatbot framework. It supports one-click free deployment of your private ChatGPT/LLM web application. https://github.com/lobehub/lobe-chat/wiki — if you need to reach the OpenAI service through a proxy, configure the proxy address with the OPENAI_PROXY_URL environment variable, e.g. OPENAI_PROXY_URL=https://api-proxy.com/v1
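A minimal sketch of passing the proxy variable at container start (the image tag, port mapping, and the placeholder API key are assumptions; set them to match your own template):

```shell
# Hypothetical docker run with the OPENAI_PROXY_URL variable set.
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e OPENAI_API_KEY=sk-... \
  -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
  lobehub/lobe-chat:latest
```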

LocalAI's Icon

The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures. It can generate text, audio, video, and images, and also offers voice-cloning capabilities. Additional image variants are available: https://localai.io/basics/container/#standard-container-images For Nvidia GPU support, add "--gpus all" to the Extra Parameters field under Advanced. For AMD GPU support, add "/dev/kfd" and "/dev/dri" each as a Device and add the required variables: https://localai.io/features/gpu-acceleration/#setup-example-dockercontainerd For Intel iGPU support, add "/dev/dri" as a Device and add "--device=/dev/dri" to the Extra Parameters field under Advanced.
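The three GPU options above, sketched as plain docker run flags (the image tag and port are assumptions; the GPU-specific variant images and extra ROCm variables are covered in the linked docs):

```shell
# Nvidia: pass all GPUs through.
docker run -d --name localai --gpus all \
  -p 8080:8080 localai/localai:latest

# AMD: map the ROCm devices (plus the variables from the linked docs).
docker run -d --name localai \
  --device /dev/kfd --device /dev/dri \
  -p 8080:8080 localai/localai:latest

# Intel iGPU: map the render device.
docker run -d --name localai --device /dev/dri \
  -p 8080:8080 localai/localai:latest
```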

ollama's Icon

ollama

Other, AI

The easiest way to get up and running with large language models locally.

open-webui's Icon

(Formerly Ollama WebUI) ChatGPT-style web interface for various LLM runners, including Ollama and OpenAI-compatible APIs. IMPORTANT: make sure to add the following environment variable to your ollama container: OLLAMA_ORIGINS=* Set your OpenAI API key (not persistent): OPENAI_API_KEY
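A sketch of the two containers side by side, with the OLLAMA_ORIGINS variable on the ollama container as the listing requires (container names, ports, tags, the OLLAMA_BASE_URL value, and the placeholder API key are assumptions; adjust them to your own templates):

```shell
# ollama, accepting requests from any origin.
docker run -d --name ollama \
  -e OLLAMA_ORIGINS='*' \
  -p 11434:11434 \
  ollama/ollama:latest

# Open WebUI, pointed at the ollama container above.
docker run -d --name open-webui \
  -e OLLAMA_BASE_URL=http://ollama:11434 \
  -e OPENAI_API_KEY=sk-... \
  -p 3000:8080 \
  ghcr.io/open-webui/open-webui:main
```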

OpenChat-Cuda's Icon

A self-hosted, offline, ChatGPT-like chatbot with open source LLM support. 100% private, with no data leaving your device. Please note that this version requires an NVIDIA GPU with the Unraid NVIDIA-DRIVER plugin.