  1. Connecting LLMs (Your Core AI Chatbot Model)

Using LM Studio


Last updated 1 year ago

You can find the full LM Studio documentation on the LM Studio website (lmstudio.ai).

Step 1 - Install LM Studio

Navigate to the LM Studio website and follow the instructions to install the GUI.

Step 2 - Download a model

Using the GUI, download a model from the LM Studio library. If you don't know which one to pick, try TheBloke/openchat_3.5-GGUF, e.g. the openchat_3.5.Q5_K_M.gguf quantization.

Step 3 - Start the server

On the left side of the GUI, click the "Local Server" button. Then, in the dropdown on the top of the screen, select the model you downloaded.

Next, in the Server Options pane, ensure that Cross-Origin Resource Sharing (CORS) is enabled. Amica runs in the browser, so the server must accept cross-origin requests.

Finally, click "Start Server".
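Once the server is running, you can sanity-check it before wiring up Amica. The sketch below is a hypothetical helper (not part of Amica or LM Studio) that probes the OpenAI-compatible `/v1/models` endpoint; it assumes the port 8080 used in this guide, so adjust `BASE_URL` if you changed it in the Server Options pane.

```python
"""Quick reachability check for the local LM Studio server (illustrative)."""
import json
import urllib.error
import urllib.request

BASE_URL = "http://localhost:8080"  # assumption: port from this guide

def server_is_up(base_url: str = BASE_URL, timeout: float = 2.0) -> bool:
    """Return True if the OpenAI-style /v1/models endpoint answers."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
            data = json.load(resp)
            # An OpenAI-compatible server returns {"data": [...]} listing models.
            return "data" in data
    except (urllib.error.URLError, OSError, json.JSONDecodeError):
        return False

if __name__ == "__main__":
    print("LM Studio server reachable:", server_is_up())
```

If this prints `False`, check that the server was started and that the port matches.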

Step 4 - Enable the server in the client

First select ChatGPT as the backend in the client:

settings -> ChatBot -> ChatBot Backend -> ChatGPT

Then configure ChatGPT to use the LM Studio server:

settings -> ChatBot -> ChatGPT

Set the OpenAI URL to http://localhost:8080 and the OpenAI Key to default (the local server does not validate the key, so any value works). If you changed the port in the LM Studio GUI, use that port instead of 8080.
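With that configuration, the client talks to LM Studio using the standard OpenAI chat completion format. The sketch below builds (without sending) such a request, to show roughly what goes over the wire; the `local-model` name and the helper itself are illustrative assumptions, not Amica internals.

```python
"""Sketch of an OpenAI-style chat request to the local LM Studio server."""
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible /v1/chat/completions request."""
    payload = {
        # LM Studio serves whichever model you selected in the dropdown,
        # so this field is largely informational for the local server.
        "model": "local-model",
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # key is not checked locally
        },
        method="POST",
    )

req = build_chat_request("http://localhost:8080", "default", "Hello!")
print(req.full_url)  # → http://localhost:8080/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` while the server is running returns a standard chat completion response with a `choices` list.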
