Using LLaVA

LLaVA / BakLLaVA can be used with LLaMA.cpp.

You can find the full llama.cpp documentation in the llama.cpp repository on GitHub.

Step 1 - Clone the repo

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

Step 2 - Download the model

For this example, we will use the BakLLaVA-1 model, which is the model used on the demo instance.

Navigate to the mys/ggml_bakllava-1 repository on Hugging Face and download either the q4 or q5 quant, as well as the mmproj-model-f16.gguf file.

The mmproj-model-f16.gguf file is necessary for the vision model.
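
If you prefer the command line, a minimal download sketch using the Hugging Face CLI is shown below (the exact quant filename may differ, so check the repository's file list first):

pip install -U "huggingface_hub[cli]"
huggingface-cli download mys/ggml_bakllava-1 ggml-model-q5_k.gguf --local-dir models
huggingface-cli download mys/ggml_bakllava-1 mmproj-model-f16.gguf --local-dir models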

Step 3 - Build the server

make server
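
If you want GPU offloading (the -ngl flag used in the next step), llama.cpp must be built with GPU support. At the time of writing this could be done with a build flag along the lines of the following; newer llama.cpp versions may use a different build system, so check the current build instructions:

make LLAMA_CUBLAS=1 server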

Step 4 - Run the server

Read the llama.cpp server documentation for more information on the server options, or run ./server --help.

./server -t 4 -c 4096 -ngl 35 -b 512 --mlock -m models/openchat_3.5.Q5_K_M.gguf --mmproj models/mmproj-model-f16.gguf
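
For reference, the flags used above break down as follows (based on ./server --help; the model paths are simply the ones assumed in this example):

# -t 4       number of CPU threads
# -c 4096    context size in tokens
# -ngl 35    number of model layers offloaded to the GPU
# -b 512     batch size for prompt processing
# --mlock    lock the model in RAM so it is not swapped out
# -m         path to the language model GGUF file
# --mmproj   path to the multimodal (vision) projector GGUF file

Once the server is running, you can sanity-check it with a plain text completion request (the server listens on port 8080 by default):

curl http://localhost:8080/completion -d '{"prompt": "Hello", "n_predict": 16}'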

Step 5 - Enable the server in the client

settings -> Vision -> Vision Backend -> LLaMA.cpp