119: Why So Many Llamas?

Self-Hosted - A podcast by Jupiter Broadcasting - Fridays

Alex rolls back a major server upgrade, and we have fun playing with local large language models.

Special Guest: Wes Payne.

Sponsored By:

- Tailscale: Tailscale is a zero-config VPN. It installs on any device in minutes, manages firewall rules for you, and works from anywhere. Get 3 users and 100 devices for free.
- This Week in Bitcoin: A high-signal Bitcoin news podcast focused on analysis you'll find valuable.

Support Self-Hosted

Links:

- ⚔ Grab Sats with Strike Around the World — Strike is a lightning-powered app that lets you quickly and cheaply grab sats in over 36 countries.
- 🎉 Boost with Fountain FM — Fountain 1.0 has a new UI, upgrades, and super simple Strike integration for easy Boosts.
- Training AI to Play Pokemon with Reinforcement Learning
- Open WebUI — an extensible, feature-rich, and user-friendly self-hosted WebUI for various LLM runners; supported runners include Ollama and OpenAI-compatible APIs.
- Ollama — Get up and running with large language models, locally.
- Alex's Nix Nvidia Config
- tlm — Local CLI Copilot, powered by CodeLLaMa.
- LM Studio — Discover, download, and run local LLMs: 🤖 Run LLMs on your laptop, entirely offline. 👾 Use models through the in-app Chat UI or an OpenAI-compatible local server. 📂 Download any compatible model files from Hugging Face 🤗 repositories. 🔭 Discover new & noteworthy LLMs on the app's home page.
- Hugging Face — The Home of Machine Learning
- 🍔 Lunch at SCaLE 🍇 — Let's put an official time on the calendar to get together. The Yardhouse has always been a solid go-to, so sit down and break bread with the Unplugged crew during the lunch break on Saturday!
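Several of the tools linked above (Ollama, Open WebUI, LM Studio) expose an OpenAI-compatible HTTP API, which is what lets one chat front end drive different local LLM runners. A minimal sketch of talking to such a server from Python, assuming Ollama is running on its default port 11434 and a model named `llama3` has already been pulled (both the URL and model name are assumptions, not from the show notes):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible chat endpoint (assumed default host/port).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"


def build_request(prompt, model="llama3"):
    """Build the JSON payload for a single-turn chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete reply, not a token stream
    }
    return json.dumps(payload).encode("utf-8")


def ask(prompt, model="llama3"):
    """Send the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # OpenAI-style responses put the reply under choices[0].message.content.
    return body["choices"][0]["message"]["content"]
```

Because the request shape is the OpenAI one, the same script points at LM Studio's local server or any other compatible runner just by changing the URL and model name.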