dumbpilot

Get inline completions using llama.cpp as a server backend

Usage

  1. Start llama.cpp's server in the background or on a remote machine
  2. Configure the host dumbpilot should connect to
  3. Press Ctrl+Shift+L to request a code prediction
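Under the hood, llama.cpp's server exposes an HTTP completion endpoint that a client like this can call. The sketch below shows roughly what such a request looks like; the host, port, and parameter values are assumptions, not dumbpilot's actual configuration:

```python
import json
import urllib.request

# Assumed address; point this at wherever llama.cpp/server is running.
HOST = "http://localhost:8080"

def build_payload(prompt: str, n_predict: int = 64) -> dict:
    """Build the JSON body for llama.cpp's /completion endpoint."""
    return {"prompt": prompt, "n_predict": n_predict}

def complete(prompt: str, n_predict: int = 64) -> str:
    """POST a prompt to the server and return the generated text."""
    data = json.dumps(build_payload(prompt, n_predict)).encode()
    req = urllib.request.Request(
        f"{HOST}/completion",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["content"]

# Example (requires a running server):
#   complete("def fib(n):")
```

The editor plugin's job is then just to send the text before the cursor as the prompt and splice the returned `content` back in as an inline suggestion.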