
AnythingLLM
Other, Productivity, Tools / Utilities• Utilities, AI
The all-in-one AI app for any LLM with full RAG and AI Agent capabilities.
Other, Productivity, Tools / Utilities• Utilities, AI
A web interface for Stable Diffusion. Integrates with Open WebUI: https://docs.openwebui.com/tutorial/images/#configuring-open-webui Add custom models: https://github.com/AbdBarho/stable-diffusion-webui-docker/wiki/Usage#custom-models
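As a minimal sketch of the Open WebUI integration described above (assuming this WebUI is reachable on port 7860 and that Open WebUI's AUTOMATIC1111_BASE_URL and ENABLE_IMAGE_GENERATION variables are used; the linked tutorial is authoritative):
docker run -d --name open-webui -p 3000:8080 \
  -e ENABLE_IMAGE_GENERATION=true \
  -e AUTOMATIC1111_BASE_URL=http://YOUR_UNRAID_IP:7860 \
  ghcr.io/open-webui/open-webui:main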
Other, Productivity, Tools / Utilities• Utilities, AI
Generative AI suite powered by state-of-the-art models and providing advanced AI/AGI functions. It features AI personas, AGI functions, multi-model chats, text-to-image, voice, response streaming, code highlighting and execution, PDF import, presets for developers, and more.
Productivity, Tools / Utilities• Utilities, AI
Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
Network Services• Web, Tools / Utilities• Utilities, AI
Chatbot Ollama is an open-source chat UI for Ollama.
Cloud, Other, Tools / Utilities• Utilities, AI
Premium quality UI for ChatGPT
Fast, free, self-hosted Artificial Intelligence Server for any platform, any language. CodeProject.AI Server is a locally installed, open-source AI server: no off-device or out-of-network data transfer, no messing around with dependencies, and usable from any platform, any language. Runs as a Windows Service or a Docker container. It may take some time to install, as the image takes up a few GB of space. One example among many: it can easily be integrated with AgentDVR Video Surveillance Software for face or object recognition.
Fast, free, self-hosted Artificial Intelligence Server for any platform, any language. CodeProject.AI Server is a locally installed, open-source AI server: no off-device or out-of-network data transfer, no messing around with dependencies, and usable from any platform, any language. Runs as a Windows Service or a Docker container. The Docker GPU version is specific to Nvidia's CUDA-enabled cards with compute capability >= 6.0. It may take some time to install, as the image takes up a few GB of space. One example among many: it can easily be integrated with AgentDVR Video Surveillance Software for face or object recognition.
Media Applications• Photos, AI
ComfyUI WebUI Dockerfile with Nvidia support, installing ComfyUI from GitHub. Also installs ComfyUI Manager to simplify integration of additional custom nodes. The "run directory" contains HF, ComfyUI and venv. The "basedir" contains input, output and custom_nodes. All those folders are created with the WANTED_UID and WANTED_GID parameters (by default Unraid's 99:100), allowing the end user to place their checkpoints, unet, lora and other required models directly into those folders. The container ships with no weights/models; you need to obtain those and install them in the proper directories under the mount you have selected for the "run directory". Output files are placed in the output folder within the "basedir" directory. Please see https://github.com/mmartial/ComfyUI-Nvidia-Docker for further details, including the notes on the "latest" tag and on "First time use" (and the "bottle" workflow), noting that Unraid's default YOUR_BASE_DIRECTORY should be /mnt/user/appdata/comfyui-nvidia/basedir.
Notes:
- The container requires the Nvidia Driver plugin to be installed on your Unraid server. That plugin will usually give you access to a CUDA driver that supports the latest tag available for this container.
- This is a WebUI for the ComfyUI Stable Diffusion tool, with a Docker image of usually over 4GB.
- The container will take a while to start up, as it needs to download the ComfyUI Stable Diffusion tool and install its dependencies, usually adding another 5GB of downloaded content in the venv folder.
- The original Docker image is from Nvidia and is therefore governed by the NVIDIA Deep Learning Container License.
- There are multiple versions of the base image available; please select the one that fits your needs best. The tag name is the Ubuntu version followed by the CUDA version. "latest" points to the most recent combination, as it should include the most recent software updates. For the complete list of supported versions, please see the GitHub repository.
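As a rough command-line sketch of the setup described above (host paths, the published port, and the container-side mount points are assumptions drawn from the project README; the Unraid template and the GitHub page are authoritative):
docker run -d --name comfyui-nvidia --gpus all \
  -e WANTED_UID=99 -e WANTED_GID=100 \
  -v /mnt/user/appdata/comfyui-nvidia/run:/comfy/mnt \
  -v /mnt/user/appdata/comfyui-nvidia/basedir:/basedir \
  -p 8188:8188 \
  mmartial/comfyui-nvidia-docker:latest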
Media Applications• Video, Other, Productivity, Tools / Utilities• Utilities, AI
DOODS (Dedicated Open Object Detection Service) is a REST service that detects objects in images or video streams. It also supports GPUs and EdgeTPU hardware acceleration. For Nvidia GPU support, add "--gpus all" to the Extra Parameters field under Advanced.
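For reference, Unraid's "Extra Parameters" are appended to the docker run command, so the GPU setup above is roughly equivalent to the following (the image name and port follow the DOODS project defaults and are assumptions here):
docker run -d --name doods --gpus all -p 8080:8080 snowzach/doods2:latest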
Media Applications• Books, Media Servers• Books, Other, Productivity, Tools / Utilities• Utilities, AI
CPU/GPU Converter from eBooks to audiobooks with chapters and metadata using Calibre, ffmpeg, XTTSv2, Fairseq and more. Supports voice cloning and 1124 languages! For Nvidia GPU support, add "--gpus all" to the Extra Parameters field under Advanced.
Media Applications• Books, Media Servers• Books, Other, Productivity, Tools / Utilities• Utilities, AI
This is a legacy version of ebook2audiobook. Please use the new version.
Productivity, AI
Open-source low-code tool for developers to build customized LLM orchestration flows and AI agents.
Tools / Utilities• Utilities, AI
Container with gpt-subtrans to translate .srt files to another language using OpenAI ChatGPT. Source of gpt-subtrans: https://github.com/machinewrapped/gpt-subtrans Usage (on-demand run): docker exec -it gpt-subtrans-openai translate -o /subtitles/output.srt /subtitles/original.srt
Tools / Utilities• Utilities, AI
Web UI for gpt-subtrans to translate subtitles using OpenAI ChatGPT. Project source: https://github.com/machinewrapped/gpt-subtrans
Other, Productivity, Tools / Utilities• Utilities, AI
An all-in-one LLM server and chat UI
Other, Productivity, Tools / Utilities• Utilities, AI
An implementation of Stable Diffusion, the open-source text-to-image and image-to-image generator, providing a streamlined process with various new features and options to aid the image-generation process. **Nvidia GPU use:** Use the Unraid Nvidia Plugin to install a version of Unraid with the Nvidia drivers installed, and add **--runtime=nvidia --gpus=all** to "extra parameters" (switch on advanced view). **AMD GPU use:** Add "/dev/kfd" and "/dev/dri" each as a Device and add the required variables: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#accessing-gpus-in-containers
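As a sketch of what those settings translate to on a plain docker run command line (IMAGE_NAME is a placeholder for this container's image, and the AMD case still needs the ROCm variables from the link above):
docker run -d --name stable-diffusion --runtime=nvidia --gpus=all IMAGE_NAME
docker run -d --name stable-diffusion --device=/dev/kfd --device=/dev/dri IMAGE_NAME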
Tools / Utilities• Utilities, AI
LibreChat brings together the future of assistant AIs with the revolutionary technology of OpenAI's ChatGPT. Celebrating the original styling, LibreChat gives you the ability to integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates and plugins. https://docs.librechat.ai/
Media Applications• Music, Video, Other, Productivity, Tools / Utilities• Utilities, AI
Lingarr is an application that leverages translation technologies to automatically translate subtitle files to your desired target language. With support for LibreTranslate, DeepL, and AI services, Lingarr offers a flexible solution for all your subtitle translation needs.
Network Services• Web, Tools / Utilities• Utilities, AI
LiteLLM provides a proxy server to manage auth, load balancing, and spend tracking across 100+ LLMs, all in the OpenAI format.
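To illustrate "all in the OpenAI format": once the proxy is running, any OpenAI-style client can point at it. A minimal sketch, assuming the proxy's default port 4000, a key you have configured, and a model name defined in your LiteLLM config:
curl http://YOUR_UNRAID_IP:4000/v1/chat/completions \
  -H "Authorization: Bearer sk-YOUR-LITELLM-KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'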
Other, Productivity, Tools / Utilities• Utilities, AI
Inference of Meta's LLaMA model (and others) in pure C/C++
Network Services• Web, AI
LobeChat is an open-source, extensible (Function Calling) high-performance chatbot framework. It supports one-click free deployment of your private ChatGPT/LLM web application. https://github.com/lobehub/lobe-chat/wiki If you need to use the OpenAI service through a proxy, you can configure the proxy address using the OPENAI_PROXY_URL environment variable: OPENAI_PROXY_URL=https://api-proxy.com/v1
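A minimal sketch of wiring that variable into a LobeChat container (the image name, port, and example proxy URL follow the project's defaults and are assumptions here; see the wiki linked above):
docker run -d --name lobe-chat -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://api-proxy.com/v1 \
  lobehub/lobe-chat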
Other, Productivity, Tools / Utilities• Utilities, AI
The free, open-source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and also offers voice cloning. Additional image variants are also available: https://localai.io/basics/container/#standard-container-images For Nvidia GPU support, add "--gpus all" to the Extra Parameters field under Advanced. For AMD GPU support, add "/dev/kfd" and "/dev/dri" each as a Device and add the required Variables: https://localai.io/features/gpu-acceleration/#setup-example-dockercontainerd For Intel iGPU support, add "/dev/dri" as a Device and add "--device=/dev/dri" to the Extra Parameters field under Advanced.
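To illustrate the "drop-in replacement for OpenAI" claim, an OpenAI-style request can be sent straight to LocalAI (port 8080 is the project default, and MODEL_NAME is a placeholder for a model you have installed):
curl http://YOUR_UNRAID_IP:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "MODEL_NAME", "messages": [{"role": "user", "content": "Hello"}]}'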
The easiest way to get up and running with large language models locally.
Network Services• Web, Tools / Utilities• Utilities, AI
(Formerly Ollama WebUI) ChatGPT-style web interface for various LLM runners, including Ollama and OpenAI-compatible APIs. IMPORTANT: Make sure to add the following environment variable to your Ollama container: OLLAMA_ORIGINS=* Set your OpenAI API key (not persistent): OPENAI_API_KEY
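As a sketch of the two settings above, applied to the Ollama container and the Open WebUI container respectively (image names and ports follow the projects' defaults and are assumptions here):
docker run -d --name ollama -p 11434:11434 -e OLLAMA_ORIGINS="*" ollama/ollama
docker run -d --name open-webui -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://YOUR_UNRAID_IP:11434 \
  -e OPENAI_API_KEY=sk-xxxx \
  ghcr.io/open-webui/open-webui:main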
Home Automation, Productivity, Tools / Utilities• Utilities, AI
A self-hosted, offline, ChatGPT-like chatbot with open source LLM support. 100% private, with no data leaving your device.
Home Automation, Productivity, Tools / Utilities• Utilities, AI
A self-hosted, offline, ChatGPT-like chatbot with open source LLM support. 100% private, with no data leaving your device. Please note that this version requires an NVIDIA GPU with the Unraid NVIDIA-DRIVER plugin.
Cloud, Media Applications• Other, Other, Productivity, Tools / Utilities• Utilities, AI
An automated document analyzer for Paperless-ngx that uses the OpenAI API and Ollama (Mistral, Llama, Phi-3, Gemma 2) to automatically analyze and tag your documents.
Other, Productivity, Tools / Utilities• Utilities, AI
Phantasm offers open-source toolkits that allow you to create human-in-the-loop (HITL) workflows for modern AI agents.