<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Inference on Caminho Solo</title><link>https://caminhosolo.com.br/en/tags/inference/</link><description>Recent content in Inference on Caminho Solo</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Wed, 01 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://caminhosolo.com.br/en/tags/inference/index.xml" rel="self" type="application/rss+xml"/><item><title>Liquid Foundation Models: AI Without the Per-Token Bill</title><link>https://caminhosolo.com.br/en/2026/04/liquid-foundation-models-independent-ai/</link><pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate><guid>https://caminhosolo.com.br/en/2026/04/liquid-foundation-models-independent-ai/</guid><description>TL;DR: Liquid AI released Liquid Foundation Models (LFMs)—open-source AI models that run on any hardware (CPU, GPU, NPU) at a fraction of traditional LLM costs. For solo builders: run AI locally at professional quality, eliminate expensive API dependencies, and build competitive AI products without heavy infrastructure.</description></item><item><title>vLLM: how to serve LLMs in production with high throughput</title><link>https://caminhosolo.com.br/en/2026/03/vllm-inference-production/</link><pubDate>Sun, 29 Mar 2026 00:00:00 +0000</pubDate><guid>https://caminhosolo.com.br/en/2026/03/vllm-inference-production/</guid><description>TL;DR: vLLM is an open-source inference engine that delivers 2-4x more throughput than traditional solutions, with 50-80% lower costs than external APIs for high-volume usage. Recommended for products exceeding 100k tokens/month.</description></item></channel></rss>