Company:
Date Published:
Author: -
Word count: 517
Language: English
Hacker News points: 2

Summary

Ramp, a spend management company, struggled to fine-tune its large language models (LLMs) and to scale batch processing. It initially tried hosted LLM providers such as OpenAI, but was held back by limited customizability and high costs. Ramp then adopted Modal, a platform that let it fine-tune its own models while controlling each step of the fine-tuning workflow. With Modal, Ramp accelerated development of its text-to-structured-JSON model for receipt management, reducing receipts that require manual intervention by 34%. Modal's serverless platform also enabled Ramp to parallelize tasks and speed up LLM batch processing, yielding significant cost savings and productivity gains. With Modal in its data processing stack, Ramp now ships its AI features faster than ever before.
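
To illustrate the parallel batch-processing pattern described above, here is a minimal sketch of a Modal app that fans receipt texts out across serverless containers with .map(). The app name, image contents, model-loading stub, and output fields are assumptions for illustration, not Ramp's actual code.

```python
import modal

# Hypothetical app name; not Ramp's real application.
app = modal.App("receipt-batch-inference")

# Container image with inference dependencies installed at build time.
image = modal.Image.debian_slim().pip_install("transformers", "torch")

@app.function(image=image, gpu="A10G", timeout=600)
def extract_receipt_fields(receipt_text: str) -> dict:
    """Run a fine-tuned text-to-structured-JSON model on one receipt."""
    # Placeholder: the real checkpoint, prompt, and parsing logic
    # are not public, so this just returns an empty schema.
    return {"merchant": None, "total": None, "date": None}

@app.local_entrypoint()
def main():
    receipts = ["receipt text 1", "receipt text 2"]  # stand-in inputs
    # .map() runs the function on each item in parallel containers,
    # which is how Modal parallelizes LLM batch workloads.
    for result in extract_receipt_fields.map(receipts):
        print(result)
```

Running `modal run` on a file like this would spin up one container per batch item (up to configured limits), which is the serverless fan-out that the summary credits for the batch-processing speedup.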