| Title | Author(s) | Date | Word Count |
| --- | --- | --- | --- |
| Ludwig AutoML for Text Classification | Anne Holler | 2022-05-02 | 3,543 |
| 10 Things You Need To Know About LLMs | Arnav Garg and Daliana Liu | 2023-09-28 | 1,795 |
| Ludwig v0.8: Open-source Toolkit to Build and Fine-tune Custom LLMs on Your … | Travis Addair and Arnav Garg | 2023-08-09 | 3,260 |
| Announcing the Ludwig 10k Giveaway Competition | Alex Sherstinsky | 2023-11-13 | 618 |
| Product Updates - February 2024 | Will Van Eaton and Abhay Malik | 2024-02-13 | 338 |
| 15x Faster Fine-Tuning in Under 15 Days | Arnav Garg | 2024-07-02 | 1,867 |
| Serve 100+ Fine-Tuned LLMs with LoRA Exchange on One GPU | Travis Addair and Geoffrey Angus | 2023-10-18 | 2,531 |
| Unit Test ML Models in PyTorch for Gradient Updates | Jim Thompson | 2022-10-26 | 1,463 |
| The First Reinforcement Fine-Tuning Platform for LLMs | Devvret Rishi and Travis Addair | 2025-03-19 | 1,316 |
| Not Your Average VPC: Secure AI in Your Private Cloud with Direct … | Noah Yoshida and Michael Ortega | 2025-08-13 | 1,028 |
| Fine-Tune CodeLlama-7B to Generate Python Docstrings | Connor McCormick and Arnav Garg | 2023-12-06 | 1,483 |
| Fine-Tuned: January 2024 | Predibase Team | 2024-01-28 | 897 |
| 2023 December Newsletter | Predibase Team | 2024-01-17 | 1,033 |
| Build an NER Model for Molecular Biology Terms | Connor McCormick | 2023-05-12 | 2,048 |
| How to Run Inference on Ludwig Models Using TorchScript | Geoffrey Angus | 2022-12-05 | 2,087 |
| How to Fine-tune And Serve VLMs in Predibase | Timothy Wang | 2025-01-07 | 1,359 |
| Introducing Predibase: The enterprise declarative machine learning platform | Piero Molino | 2022-05-11 | 1,899 |
| 12 Best Practices for Distilling Small LMs from GPT | Justin Zhao and Wael Abid | 2024-01-16 | 5,092 |
| Try 10x Faster Fine-Tuning | Abhay Malik | 2024-04-25 | 616 |
| Boost Tabular Data Predictions with Tree Models in Ludwig 0.6 | Daliana Liu and Joppe Geluykens | 2022-12-20 | 1,525 |
| Guide: How to Prevent Overfitting in Machine Learning Models | Daliana Liu and Geoffrey Angus | 2023-07-31 | 2,126 |
| Real-World LLM Inference Benchmarks: How Predibase Built the Fastest Stack | Chloe Leung | 2025-05-28 | 1,683 |
| How to Fine-tune Mixtral 8x7b with Open-source Ludwig | Timothy Wang | 2023-12-19 | 1,127 |
| How DeepSeek-R1 Beats o1 with Reinforcement Learning | Will Van Eaton | 2025-01-29 | 1,410 |
| Declarative ML for Fraud Detection and Imbalanced Data | Daliana Liu | 2023-06-14 | 2,175 |
| Deep Learning for Topic Classification on Unstructured Text | Daliana Liu | 2023-03-14 | 2,009 |
| The 5 Hidden Hurdles of Building AI Infra | Michael Ortega | 2025-07-02 | 1,065 |
| Predibase will be joining forces with Rubrik | Devvret Rishi | 2025-06-25 | 1,488 |
| Turbo LoRA: 2-3x faster fine-tuned LLM inference | Travis Addair and Arnav Garg | 2024-08-02 | 3,618 |
| Train AI to Write GPU Code via Reinforcement Fine-Tuning | Arnav Garg, Travis Addair and Will Van Eaton | 2025-02-14 | 2,055 |
| How to Fine-Tune Zephyr-7B for Support Call Analysis | Alex Sherstinsky and Magdy Saleh | 2024-01-08 | 2,719 |
| Improving Agent Feedback with Multi-LoRA at Convirza | Will Van Eaton | 2024-11-25 | 974 |
| Fine-Tuned Newsletter: April-May 2024 | Will Van Eaton | 2024-05-21 | 551 |
| Guide to Reward Functions in Reinforcement Fine-Tuning | Joppe Geluykens | 2025-04-09 | 2,524 |
| 10 AI Predictions that Will Shape 2023 and Beyond | Michael Ortega | 2023-01-24 | 1,578 |
| Maximize Zero-Shot LLM Performance on Tabular Data | Timothy Wang and Justin Zhao | 2023-08-15 | 2,538 |
| Ludwig 0.6: Gradient Boosted Models, Config Validation, and Pipelined TorchScript | Justin Zhao and Jim Thompson | 2022-10-04 | 2,796 |
| Fine-Tune LLaMA-2 for Code Generation on a Budget | Timothy Wang and Devvret Rishi | 2023-11-09 | 1,419 |
| DeepSeek Survey Results: Insights from AI Leaders | Will Van Eaton | 2025-04-16 | 456 |
| Product Updates - September 2024 | Will Van Eaton | 2024-09-18 | 860 |
| How Upstage Built a Highly Accurate SLM for Proofreading | Devvret Rishi and Kasey Roh | 2024-09-09 | 904 |
| How to Deploy LLaMA 4 Models in Your VPC or Cloud | Martin Davis and Michael Ortega | 2025-04-14 | 1,647 |
| Fine-Tuned Newsletter: June 2024 | Will Van Eaton | 2024-07-02 | 774 |
| Manage Your LLM Deployments with Command Center | Will Van Eaton | 2024-03-29 | 661 |
| How to Fine-Tune LLaMA-2 on Your Own Data at Scale | Arnav Garg | 2023-07-20 | 3,138 |
| How to Deploy and Serve Qwen 3 in Your Private Cloud (VPC) | Michael Ortega and Magdy Saleh | 2025-05-01 | 2,339 |
| Why Reinforcement Learning Beats SFT with Limited Data | Travis Addair and Arnav Garg | 2025-02-11 | 2,995 |
| Introducing the Fine-Tuning Index for LLMs | Will Van Eaton | 2024-05-21 | 382 |
| 5 Reasons Why LoRA Adapters are the Future of Fine-tuning | Predibase Team | 2024-06-10 | 2,493 |
| How to Efficiently Fine-Tune CodeLlama-70B Instruct | Alex Sherstinsky | 2024-02-08 | 2,247 |
| AI and LLM Predictions for 2024 | Michael Ortega | 2024-01-29 | 2,178 |
| Fine-Tuned: February-March 2024 | Predibase Team | 2024-03-04 | 939 |
| Ludwig 0.5: Declarative Machine Learning, now on PyTorch | Justin Zhao, Jim Thompson and Piero Molino | 2022-06-28 | 1,707 |
| Apple’s GenAI Architecture: Small, Fine-Tuned & LoRA-Based | Devvret Rishi | 2024-06-13 | 713 |
| LoRAX: Open Source LoRA Serving Framework for LLMs | Travis Addair, Geoffrey Angus, Magdy Saleh and Wael Abid | 2023-11-16 | 1,781 |
| LoRAX + Outlines: Better JSON Extraction with LoRA | Jeffrey Tang and Travis Addair | 2024-03-03 | 2,285 |
| Fine-Tune and Serve Open-Source AI—Faster and Cheaper | Abhay Malik | 2023-10-24 | 1,346 |
| LLMs in Production: Key Insights from Our New Report | Michael Ortega | 2023-08-23 | 1,144 |
| Optimize LLM Performance with Deployment Health Analytics | Will Van Eaton | 2024-09-11 | 971 |
| How to Use LLMs on Tabular Data with TabLLM | Timothy Wang and Justin Zhao | 2023-08-16 | 1,133 |
| Training an Expert Coding Agent with Reinforcement Fine-Tuning | Evan Sandler, Ross Favero and Ajinkya Tejankar | 2025-05-20 | 2,561 |
| Using Multi-Modal ML to Predict Customer Ratings | Abhay Malik | 2023-01-31 | 2,324 |
| LLM Serving Guide: How to Build Faster Inference for Open-source Models | Michael Ortega | 2025-05-12 | 1,794 |
| Koble’s Case Study: AI-Driven Startup Investing | Connor McCormick and Will Van Eaton | 2023-11-14 | 1,141 |
| Beyond Chat: Real Use Cases for LLMs in Production | Joppe Geluykens, Geoffrey Angus and Miheer Patankar | 2023-06-27 | 1,297 |
| Predibase Wrapped: Our greatest hits of 2024 | Will Van Eaton | 2024-12-19 | 1,622 |
| Build an SQL Copilot with LLMs and Synthetic Data | Alex Sherstinsky and Yev Meyer | 2024-07-11 | 2,434 |
| Build AI Applications Faster with Declarative ML | Abhay Malik | 2023-05-31 | 923 |
| Fine-Tuned SLMs Help Checkr Optimize Background Checks | Vlad Bukhin, Staff ML Engineer at Checkr | 2024-10-03 | 1,863 |
| The First Serverless Solution for Fine-Tuned LLMs | Abhay Malik | 2024-02-13 | 830 |
| Next-Gen Inference Engine for Fine-Tuned SLMs | Will Van Eaton | 2024-10-15 | 2,475 |
| Fine-Tuned Newsletter: March 2024 | Predibase Team | 2024-03-31 | 688 |
| Personalizing Trading with Deep Learning on Snowflake | Joppe Geluykens and Michael Ortega | 2024-01-20 | 1,395 |
| Self-Distilling DeepSeek-R1 with Turbo Speculation - 2x Inference | Ajinkya Tejankar and Will Van Eaton | 2025-02-19 | 1,887 |
| Ludwig v0.7: Fine-tuning Pretrained Image and Text Models 50x Faster and Easier | Travis Addair | 2023-02-27 | 2,080 |
| Build an SLM That Outperforms GPT-4o with Synthetic Data | Chloe Leung | 2024-10-30 | 2,900 |
| LoRA Land: Open-Source LLMs That Beat GPT-4 | Timothy Wang, Justin Zhao and Will Van Eaton | 2024-02-20 | 1,821 |
| The Future of AI is Specialized | Devvret Rishi and Piero Molino | 2023-12-15 | 1,691 |
| How to Efficiently Fine-Tune Gemma-7B with Open-Source Ludwig | Alex Sherstinsky | 2024-02-23 | 787 |
| 7 Things to Know About Fine-Tuning LLMs | Geoffrey Angus | 2024-02-22 | 3,475 |
| DeepSeek Deployment Guide for VPC and SaaS Clouds | Will Van Eaton | 2025-01-31 | 1,505 |
| How Declarative ML Is Transforming Data Science | Kevin Petrie | 2022-10-18 | 1,592 |
| Solar LLM: Fine-Tuned Performance That Beats GPT-4 | Arnav Garg, Junyeop Lee, Lucy Park, Kasey Roh and Will Van Eaton | 2024-06-17 | 1,142 |
| Agentic AI at Scale: Marsh McLennan Saves 1M+ Hours | Will Van Eaton | 2025-03-12 | 1,042 |
| How to Fine-Tune LLaMA-70B for JSON Generation | Geoffrey Angus, Wael Abid and Timothy Wang | 2023-12-07 | 1,400 |
| Ludwig AutoML for Deep Learning | Anne Holler | 2022-02-14 | 2,436 |
| Product Updates - March 2024 | Will Van Eaton | 2024-03-29 | 459 |
| Fine-Tune Mistral 7B on a Single GPU with Ludwig | Alex Sherstinsky and Arnav Garg | 2023-10-06 | 6,332 |
| How to Fine-Tune LLaMA 3 for Customer Support Tasks | Chloe Leung | 2024-04-30 | 1,925 |
| Ludwig 10k Stars LLM Fine-tuning Hackathon Winners | Alex Sherstinsky and Michael Ortega | 2024-02-01 | 1,297 |