| Title | Author(s) | Date | Word count |
| --- | --- | --- | --- |
| Ludwig AutoML for Text Classification | Anne Holler | May 02, 2022 | 3543 |
| 10 Things You Need To Know About LLMs | Arnav Garg and Daliana Liu | Sep 28, 2023 | 1795 |
| Ludwig v0.8: Open-source Toolkit to Build and Fine-tune Custom LLMs on Your Data | Travis Addair and Arnav Garg | Aug 09, 2023 | 3260 |
| Announcing the Ludwig 10k Giveaway Competition | Alex Sherstinsky | Nov 13, 2023 | 618 |
| Product Updates - February 2024 | Will Van Eaton and Abhay Malik | Feb 13, 2024 | 338 |
| The Complete Guide To Sentiment Analysis with Ludwig — Part II | Kanishk Kalra | Feb 24, 2021 | 1589 |
| 15x Faster Fine-Tuning in Under 15 Days | Arnav Garg | Jul 02, 2024 | 1867 |
| Serve 100+ Fine-Tuned LLMs with LoRA Exchange on One GPU | Travis Addair and Geoffrey Angus | Oct 18, 2023 | 2531 |
| Unit Test ML Models in PyTorch for Gradient Updates | Jim Thompson | Oct 26, 2022 | 1463 |
| The First Reinforcement Fine-Tuning Platform for LLMs | Devvret Rishi and Travis Addair | Mar 19, 2025 | 1316 |
| Not Your Average VPC: Secure AI in Your Private Cloud with Direct Ingress | Noah Yoshida and Michael Ortega | Aug 13, 2025 | 1028 |
| Fine-Tune CodeLlama-7B to Generate Python Docstrings | Connor McCormick and Arnav Garg | Dec 06, 2023 | 1483 |
| Fine-Tuned: January 2024 | Predibase Team | Jan 28, 2024 | 897 |
| 2023 December Newsletter | Predibase Team | Jan 17, 2024 | 1033 |
| Build an NER Model for Molecular Biology Terms | Connor McCormick | May 12, 2023 | 2048 |
| How to Run Inference on Ludwig Models Using TorchScript | Geoffrey Angus | Dec 05, 2022 | 2087 |
| How to Fine-tune And Serve VLMs in Predibase | Timothy Wang | Jan 07, 2025 | 1359 |
| Introducing Predibase: The enterprise declarative machine learning platform | Piero Molino | May 11, 2022 | 1899 |
| 12 Best Practices for Distilling Small LMs from GPT | Justin Zhao and Wael Abid | Jan 16, 2024 | 5092 |
| Try 10x Faster Fine-Tuning | Abhay Malik | Apr 25, 2024 | 616 |
| Boost Tabular Data Predictions with Tree Models in Ludwig 0.6 | Daliana Liu and Joppe Geluykens | Dec 20, 2022 | 1525 |
| Guide: How to Prevent Overfitting in Machine Learning Models | Daliana Liu and Geoffrey Angus | Jul 31, 2023 | 2126 |
| Ludwig AI v0.4 — Introducing Declarative MLOps with Ray, Dask, TabNet, and MLflow integrations | Piero Molino | Jun 14, 2021 | 2222 |
| Real-World LLM Inference Benchmarks: How Predibase Built the Fastest Stack | Chloe Leung | May 28, 2025 | 1683 |
| How to Fine-tune Mixtral 8x7b with Open-source Ludwig | Timothy Wang | Dec 19, 2023 | 1127 |
| How DeepSeek-R1 Beats o1 with Reinforcement Learning | Will Van Eaton | Jan 29, 2025 | 1410 |
| Declarative ML for Fraud Detection and Imbalanced Data | Daliana Liu | Jun 14, 2023 | 2175 |
| Deep Learning for Topic Classification on Unstructured Text | Daliana Liu | Mar 14, 2023 | 2009 |
| The 5 Hidden Hurdles of Building AI Infra | Michael Ortega | Jul 02, 2025 | 1065 |
| Predibase will be joining forces with Rubrik | Devvret Rishi | Jun 25, 2025 | 1488 |
| Turbo LoRA: 2-3x faster fine-tuned LLM inference | Travis Addair and Arnav Garg | Aug 02, 2024 | 3618 |
| Train AI to Write GPU Code via Reinforcement Fine-Tuning | Arnav Garg, Travis Addair and Will Van Eaton | Feb 14, 2025 | 2055 |
| How to Fine-Tune Zephyr-7B for Support Call Analysis | Alex Sherstinsky and Magdy Saleh | Jan 08, 2024 | 2719 |
| Improving Agent Feedback with Multi-LoRA at Convirza | Will Van Eaton | Nov 25, 2024 | 974 |
| Fine-Tuned Newsletter: April-May 2024 | Will Van Eaton | May 21, 2024 | 551 |
| Guide to Reward Functions in Reinforcement Fine-Tuning | Joppe Geluykens | Apr 09, 2025 | 2524 |
| 10 AI Predictions that Will Shape 2023 and Beyond | Michael Ortega | Jan 24, 2023 | 1578 |
| Maximize Zero-Shot LLM Performance on Tabular Data | Timothy Wang and Justin Zhao | Aug 15, 2023 | 2538 |
| Ludwig 0.6: Gradient Boosted Models, Config Validation, and Pipelined TorchScript | Justin Zhao and Jim Thompson | Oct 04, 2022 | 2796 |
| Fine-Tune LLaMA-2 for Code Generation on a Budget | Timothy Wang and Devvret Rishi | Nov 09, 2023 | 1419 |
| DeepSeek Survey Results: Insights from AI Leaders | Will Van Eaton | Apr 16, 2025 | 456 |
| Product Updates - September 2024 | Will Van Eaton | Sep 18, 2024 | 860 |
| How Upstage Built a Highly Accurate SLM for Proofreading | Devvret Rishi and Kasey Roh | Sep 09, 2024 | 904 |
| How to Deploy LLaMA 4 Models in Your VPC or Cloud | Martin Davis and Michael Ortega | Apr 14, 2025 | 1647 |
| Fine-Tuned Newsletter: June 2024 | Will Van Eaton | Jul 02, 2024 | 774 |
| Manage Your LLM Deployments with Command Center | Will Van Eaton | Mar 29, 2024 | 661 |
| How to Fine-Tune LLaMA-2 on Your Own Data at Scale | Arnav Garg | Jul 20, 2023 | 3138 |
| How to Deploy and Serve Qwen 3 in Your Private Cloud (VPC) | Michael Ortega and Magdy Saleh | May 01, 2025 | 2339 |
| Why Reinforcement Learning Beats SFT with Limited Data | Travis Addair and Arnav Garg | Feb 11, 2025 | 2995 |
| Introducing the Fine-Tuning Index for LLMs | Will Van Eaton | May 21, 2024 | 382 |
| 5 Reasons Why LoRA Adapters are the Future of Fine-tuning | Predibase Team | Jun 10, 2024 | 2493 |
| How to Efficiently Fine-Tune CodeLlama-70B Instruct | Alex Sherstinsky | Feb 08, 2024 | 2247 |
| AI and LLM Predictions for 2024 | Michael Ortega | Jan 29, 2024 | 2178 |
| Fine-Tuned: February-March 2024 | Predibase Team | Mar 04, 2024 | 939 |
| Ludwig 0.5: Declarative Machine Learning, now on PyTorch | Justin Zhao, Jim Thompson and Piero Molino | Jun 28, 2022 | 1707 |
| Apple’s GenAI Architecture: Small, Fine-Tuned & LoRA-Based | Devvret Rishi | Jun 13, 2024 | 713 |
| LoRAX: Open Source LoRA Serving Framework for LLMs | Travis Addair, Geoffrey Angus, Magdy Saleh and Wael Abid | Nov 16, 2023 | 1781 |
| LoRAX + Outlines: Better JSON Extraction with LoRA | Jeffrey Tang and Travis Addair | Mar 03, 2024 | 2285 |
| Fine-Tune and Serve Open-Source AI—Faster and Cheaper | Abhay Malik | Oct 24, 2023 | 1346 |
| LLMs in Production: Key Insights from Our New Report | Michael Ortega | Aug 23, 2023 | 1144 |
| Optimize LLM Performance with Deployment Health Analytics | Will Van Eaton | Sep 11, 2024 | 971 |
| How to Use LLMs on Tabular Data with TabLLM | Timothy Wang and Justin Zhao | Aug 16, 2023 | 1133 |
| Training an Expert Coding Agent with Reinforcement Fine-Tuning | Evan Sandler, Ross Favero and Ajinkya Tejankar | May 20, 2025 | 2561 |
| Using Multi-Modal ML to Predict Customer Ratings | Abhay Malik | Jan 31, 2023 | 2324 |
| LLM Serving Guide: How to Build Faster Inference for Open-source Models | Michael Ortega | May 12, 2025 | 1794 |
| Koble’s Case Study: AI-Driven Startup Investing | Connor McCormick and Will Van Eaton | Nov 14, 2023 | 1141 |
| Beyond Chat: Real Use Cases for LLMs in Production | Joppe Geluykens, Geoffrey Angus and Miheer Patankar | Jun 27, 2023 | 1297 |
| Predibase Wrapped: Our greatest hits of 2024 | Will Van Eaton | Dec 19, 2024 | 1622 |
| Build an SQL Copilot with LLMs and Synthetic Data | Alex Sherstinsky and Yev Meyer | Jul 11, 2024 | 2434 |
| Build AI Applications Faster with Declarative ML | Abhay Malik | May 31, 2023 | 923 |
| Fine-Tuned SLMs Help Checkr Optimize Background Checks | Vlad Bukhin, Staff ML Engineer at Checkr | Oct 03, 2024 | 1863 |
| The First Serverless Solution for Fine-Tuned LLMs | Abhay Malik | Feb 13, 2024 | 830 |
| Next-Gen Inference Engine for Fine-Tuned SLMs | Will Van Eaton | Oct 15, 2024 | 2475 |
| Fine-Tuned Newsletter: March 2024 | Predibase Team | Mar 31, 2024 | 688 |
| Personalizing Trading with Deep Learning on Snowflake | Joppe Geluykens and Michael Ortega | Jan 20, 2024 | 1395 |
| Self-Distilling DeepSeek-R1 with Turbo Speculation - 2x Inference | Ajinkya Tejankar and Will Van Eaton | Feb 19, 2025 | 1887 |
| Ludwig v0.7: Fine-tuning Pretrained Image and Text Models 50x Faster and Easier | Travis Addair | Feb 27, 2023 | 2080 |
| The Complete Guide to Sentiment Analysis with Ludwig — Part I | Kanishk Kalra | Feb 24, 2021 | 2384 |
| Build an SLM That Outperforms GPT-4o with Synthetic Data | Chloe Leung | Oct 30, 2024 | 2900 |
| LoRA Land: Open-Source LLMs That Beat GPT-4 | Timothy Wang, Justin Zhao and Will Van Eaton | Feb 20, 2024 | 1821 |
| The Future of AI is Specialized | Devvret Rishi and Piero Molino | Dec 15, 2023 | 1691 |
| How to Efficiently Fine-Tune Gemma-7B with Open-Source Ludwig | Alex Sherstinsky | Feb 23, 2024 | 787 |
| 7 Things to Know About Fine-Tuning LLMs | Geoffrey Angus | Feb 22, 2024 | 3475 |
| DeepSeek Deployment Guide for VPC and SaaS Clouds | Will Van Eaton | Jan 31, 2025 | 1505 |
| The Complete Guide to Sentiment Analysis with Ludwig — Part III: Hyperparameter Optimization | Michael Zhu and Piero Molino | Dec 21, 2020 | 1637 |
| How Declarative ML Is Transforming Data Science | Kevin Petrie | Oct 18, 2022 | 1592 |
| Solar LLM: Fine-Tuned Performance That Beats GPT-4 | Arnav Garg, Junyeop Lee, Lucy Park, Kasey Roh and Will Van Eaton | Jun 17, 2024 | 1142 |
| Agentic AI at Scale: Marsh McLennan Saves 1M+ Hours | Will Van Eaton | Mar 12, 2025 | 1042 |
| How to Fine-Tune LLaMA-70B for JSON Generation | Geoffrey Angus, Wael Abid and Timothy Wang | Dec 07, 2023 | 1400 |
| Ludwig AutoML for Deep Learning | Anne Holler | Feb 14, 2022 | 2436 |
| Product Updates - March 2024 | Will Van Eaton | Mar 29, 2024 | 459 |
| Fine-Tune Mistral 7B on a Single GPU with Ludwig | Alex Sherstinsky and Arnav Garg | Oct 06, 2023 | 6332 |
| How to Fine-Tune LLaMA 3 for Customer Support Tasks | Chloe Leung | Apr 30, 2024 | 1925 |
| Ludwig 10k Stars LLM Fine-tuning Hackathon Winners | Alex Sherstinsky and Michael Ortega | Feb 01, 2024 | 1297 |