Community Apps

Browse our large and growing catalog of applications to run on your Unraid server.

Download the Plugin  |  Become a Community Developer


Community-built

All the applications you love—built and maintained by a community member who understands what you need on Unraid. Love a particular app or plugin? Donate directly to the developer to support their work.

Created by a Legend

Andrew (aka Squid) has worked tirelessly to build and enhance the experience of Community Apps for users like you.

Moderated and Vetted

Moderators ensure that apps listed in the store offer a safe, compatible, and consistent experience. 


Refact's Icon

Refact

AI

Refact WebUI for fine-tuning and self-hosting of code models that you can later use inside Refact plugins for code completion and chat.

serge's Icon

serge

AI

Serge - LLaMa made easy. A chat interface based on llama.cpp for running Alpaca models. Entirely self-hosted, no API keys needed. Fits in 4GB of RAM (depending on model) and runs on the CPU. Models can be downloaded from within the interface.

A note on memory usage: llama will simply crash if you don't have enough available memory for your model.
- 7B requires about 4.5GB of free RAM
- 13B requires about 12GB free
- 30B requires about 20GB free

New models are regularly being added; check the project page for notes and requirements.

stable-diffusion's Icon

stable-diffusion

Tools / Utilities, AI

A big thank you to Holaf for this compiled version of Stable Diffusion, which lets you easily use the interface of your choice and take full advantage of the power of this artificial intelligence. Please note that an Nvidia GPU with at least 6GB of VRAM is recommended for proper operation.

Note: during the first installation, or after changing the Web-UI, the first startup may take some time while the necessary packages are downloaded and installed.

stable-diffusion's Icon

stable-diffusion

AI

GPU-ready Dockerfile to run the Stability.AI stable-diffusion model with a simple web interface

Steel's Icon

Steel

The open-source browser API built for AI agents. Steel provides a REST API to control headless browsers with session management, proxy support, and anti-detection features. Perfect for web automation, scraping, and building AI agents that can interact with the web.

Tabby's Icon

Tabby

AI

Open-source, self-hosted AI coding assistant

vLLM's Icon

vLLM

AI

Easy, fast, and cheap LLM serving for everyone
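As a rough sketch of how a self-hosted vLLM container is typically started and queried: the image tag, port, and model name below are illustrative assumptions, not part of this listing — substitute your own model and settings.

```shell
# Start the OpenAI-compatible vLLM server (the vllm/vllm-openai image is
# the official one; the Qwen model below is an illustrative choice).
docker run --gpus all -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model Qwen/Qwen2.5-0.5B-Instruct

# Query it with the standard OpenAI-style chat completions endpoint.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

Because vLLM exposes an OpenAI-compatible API, existing OpenAI client libraries can point at the container by changing only the base URL.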

youtube-transcript-to-article's Icon

youtube-transcript-to-article

Productivity, Tools / Utilities, AI

YouTube Transcript to Article is a Docker-based Python project that provides an API for converting YouTube transcripts into professional articles using OpenAI's ChatGPT. This tool automates the creation of summaries or detailed articles from YouTube video content, making it easy to generate professional write-ups from video transcripts.

Features:
- Automatic Transcript Retrieval: fetches the transcript of a YouTube video in its original language, handling both video URLs and IDs.
- Article Generation: generates a professional article from the transcript, with options for brief or detailed formats.
- Customizable Output Language: lets you specify the output language; the default is the video's language.
- Minimalist Web Interface: provides a simple, user-friendly web interface for entering video IDs or URLs and generating articles.
- Dockerized Deployment: easy deployment with Docker, including integration options for Home Assistant and MQTT.

You will need an OpenAI API key.

YuE-GP's Icon

YuE-GP beta

Productivity, AI

YuE AI Music Generation for the GPU Poor (by deepmeepbeep)

Our model's name is YuE (乐). In Chinese, the word means "music" and "happiness." Some of you may find words that start with Yu hard to pronounce. If so, you can just call it "yeah." We wrote a song with our model's name.

YuE is a groundbreaking series of open-source foundation models designed for music generation, specifically for transforming lyrics into full songs (lyrics2song). It can generate a complete song, lasting several minutes, that includes both a catchy vocal track and complementary accompaniment, ensuring a polished and cohesive result. YuE is capable of modeling diverse genres and vocal styles, such as pop and metal; for more styles, please visit the demo page.

NOTE: On first start-up, a number of inference models and libraries will be downloaded to the cache folder. Be patient; this can take up to 30 GB of storage.
NOTE: All generated songs remain in the cache folder, even after they have been downloaded through the WebUI. You may remove them manually if disk space becomes precious.