Using Ollama

You can find the full Ollama documentation in the official repository: https://github.com/ollama/ollama

Step 1 - Install Ollama

Linux and WSL2

curl https://ollama.ai/install.sh | sh

Mac OSX

Download the macOS app from https://ollama.ai/download.

Windows

Not yet supported.
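
To verify the installation, you can check the CLI version (this assumes the ollama binary is on your PATH):

ollama --version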

Step 2 - Start the server

ollama serve
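
By default the server listens on port 11434. As a quick sanity check, you can query the root endpoint, which responds with a short status message when Ollama is running:

curl http://localhost:11434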

Step 3 - Download a model

As an example, we will use Mistral 7B. There are many models to choose from, listed in the library: https://ollama.ai/library

ollama run mistral
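
The run command downloads the model on first use and then drops you into an interactive prompt. You can also exercise the HTTP API that Amica will connect to; here is a minimal sketch against Ollama's /api/generate endpoint (stream is disabled so the response arrives as a single JSON object):

curl http://localhost:11434/api/generate -d '{"model": "mistral", "prompt": "Why is the sky blue?", "stream": false}'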

Step 4 - Enable the server in the client

settings -> ChatBot -> ChatBot Backend -> Ollama
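
Depending on your Amica version, the same panel may also expose the Ollama URL and model name; if so, make sure the URL points at your server (typically http://localhost:11434) and that the model name matches the one you downloaded (mistral in this example).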