LlamaCpp
Run the llama.cpp server binary to start the API server. If you are running it on a remote server, be sure to set `--host 0.0.0.0` so the server accepts connections from other machines:
```shell
.\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models\meta\llama\codellama-7b-instruct.Q8_0.gguf
```
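Before wiring this into Continue, it can help to confirm the server is actually responding. A minimal sketch using curl against llama.cpp's built-in HTTP API, assuming the default port 8080 and the `/health` and `/completion` endpoints provided by recent llama.cpp server builds (adjust the host and port if you changed them above):

```shell
# Health check: should return a small JSON status object if the server is up.
curl http://localhost:8080/health

# Request a short completion directly from the server's /completion endpoint.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "n_predict": 64}'
```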
After it's up and running, change `~/.continue/config.json` to look like this:
config.json
```json
{
  "models": [
    {
      "title": "Llama CPP",
      "provider": "llama.cpp",
      "model": "MODEL_NAME",
      "apiBase": "http://localhost:8080"
    }
  ]
}
```
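If llama.cpp is running on a remote machine (started with `--host 0.0.0.0` as above), point `apiBase` at that machine instead of localhost. A minimal sketch, using a hypothetical server address of `192.168.1.100` as a stand-in for your own:

```json
{
  "models": [
    {
      "title": "Llama CPP (remote)",
      "provider": "llama.cpp",
      "model": "MODEL_NAME",
      "apiBase": "http://192.168.1.100:8080"
    }
  ]
}
```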