Local GenAI LLMs with Ollama and Docker (Ep 262)

Published 2024-04-18
Learn how to run your own local ChatGPT clone and GitHub Copilot clone by setting up Ollama and Docker's "GenAI Stack" to build apps on top of open-source LLMs and closed-source SaaS models (GPT-4, etc.). Matt Williams joins us as our guest to walk through all the parts of this solution and show how Ollama makes it easier to set up custom LLM stacks on Mac, Windows, and Linux.
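If you want to follow along with the episode, here's a minimal sketch of the Ollama workflow discussed: pull an open model and chat with it from the terminal, then hit the local REST API that apps (like the GenAI Stack) build on. The model tag `llama3` is just an example — browse ollama.com/library for current models.

```shell
# Install Ollama (macOS/Linux script shown; Windows has a native installer)
curl -fsSL https://ollama.com/install.sh | sh

# Pull an open-source model (llama3 is an example tag)
ollama pull llama3

# Chat with it interactively, or pass a one-shot prompt
ollama run llama3 "Why is the sky blue?"

# Ollama also exposes a local REST API on port 11434,
# which is what app stacks build against
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Write a haiku about containers",
  "stream": false
}'
```

The REST API is the piece that lets Docker's GenAI Stack (and your own apps) swap a local model in where they'd otherwise call a hosted API.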

🗞️ Sign up for my weekly newsletter for the latest on upcoming guests and what I'm releasing: www.bretfisher.com/newsletter/

Matt Williams
============
twitter.com/Technovangelist
www.linkedin.com/in/technovangelist/

Nirmal Mehta
============
www.linkedin.com/in/nirmalkmehta
twitter.com/normalfaults
hachyderm.io/@nirmal

Bret Fisher
=========
www.linkedin.com/in/bretefisher/
twitter.com/BretFisher
www.bretfisher.com/

Join my Community 🤜🤛
================
💌 Weekly newsletter on upcoming guests and stuff I'm working on: www.bretfisher.com/newsletter/
💬 Join the discussion on our Discord chat server discord.com/invite/devops
👨‍🏫 Coupons for my Docker and Kubernetes courses www.bretfisher.com/courses/
🎙️ Podcast of this show www.bretfisher.com/podcast

Show Music 🎵
==========
waiting music: Jakarta - Bonsaye www.epidemicsound.com/track/YOhNIQJXnZ/
intro music: I Need A Remedy (Instrumental Version) - Of Men And Wolves www.epidemicsound.com/track/zMtvEjKL4Y/
outro music: Electric Ballroom - Quesa www.epidemicsound.com/track/KHL0iR8AAM/
