The open-source Ollama server makes it straightforward to run a variety of models (Mistral, Gemma, Phi, Llama) on your own machine in a podman/docker container. The advantage of running them inside a container is that you can experiment with them and then discard them when you are bored - and the queries stay on your own machine, of course.

This talk will show you how to get Mistral, a pretty decent LLM chatbot, up and running in short order without any fuss! To follow along, you will need:

  • `podman` or `docker`
  • `make` or `just`
  • and a reasonably fast internet connection!
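To give you a taste, here is a minimal sketch of the kind of commands the talk builds on, using `podman` (swap in `docker` if you prefer; the container and volume names are just illustrative):

```shell
# Start the Ollama server in a container; a named volume keeps downloaded models
podman run -d --name ollama -p 11434:11434 -v ollama:/root/.ollama docker.io/ollama/ollama

# Pull Mistral (on first run) and open an interactive chat in your terminal
podman exec -it ollama ollama run mistral
```

When you are done, `podman rm -f ollama` throws the whole thing away, and removing the named volume discards the downloaded models too.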

NAVI:
If you often forget commands, hate context switching, and want to automate more of what you do, this talk is for you!

It will be a quick overview of a tool I've found really neat over the last few months, and one that has saved me loads of time - Navi (https://github.com/denisidoro/navi).

It’s an interactive command cheat sheet with dynamic, customisable placeholders. It’s great for getting more out of CLIs without learning them by heart, and for working faster in the terminal.
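For a flavour of the syntax, here is a minimal sketch of a navi `.cheat` file (the tag, descriptions, and commands are illustrative); the `$` line tells navi how to generate suggestions for the `<branch>` placeholder:

```
% git, branches

# Check out an existing branch
git checkout <branch>

# Delete a local branch
git branch -d <branch>

$ branch: git branch --format '%(refname:short)'
```

Running `navi` drops you into a fuzzy search over your cheat sheets; pick a command and it fills in the placeholders interactively before running it.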
