
Announcing Refuel-LLM

Blog post from Refuel

Post Details
Company: Refuel
Date Published:
Author: Refuel Team
Word Count: 1,141
Language: English
Hacker News Points: -
Summary

Refuel LLM is a large language model purpose-built for data labeling and enrichment tasks. Across a benchmark of 15 text labeling datasets, it outperforms trained human annotators and several other language models, including GPT-3.5-turbo and PaLM-2. Built on the Llama-v2-13b base model, Refuel LLM is trained on over 2,500 unique datasets spanning a wide range of categories, and it can be further fine-tuned on a target domain to improve performance and reduce inference costs. Fine-tuning experiments show that Refuel LLM can reach superhuman performance quickly, outperforming GPT-4 on certain tasks after minimal training time. Its training covered a diverse set of instructions so that the model emits labels in the expected format without additional output parsing. The model is publicly accessible via the LLM labeling playground and the Autolabel open-source library, with a more detailed technical report and an open-source release planned, and further details and support are available through Refuel Cloud.
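The sketch below illustrates how a dataset might be labeled with Refuel LLM through the Autolabel library mentioned above. It is not taken from the post: it assumes Autolabel's LabelingAgent and AutolabelDataset interfaces, and the config keys, provider/model identifiers, task name, column names, and input file are illustrative assumptions that may differ from the released library.

```python
# Minimal sketch: labeling a dataset with Refuel LLM via the Autolabel library.
# Config keys, provider/model names, and columns below are assumptions for
# illustration; consult the Autolabel docs for the exact schema.
from autolabel import LabelingAgent, AutolabelDataset

config = {
    "task_name": "ToxicCommentClassification",   # hypothetical task name
    "task_type": "classification",               # assign one label per row
    "dataset": {
        "label_column": "label",                 # column with gold labels, if available
        "delimiter": ",",
    },
    "model": {
        "provider": "refuel",                    # assumed provider id for Refuel LLM
        "name": "refuel-llm",                    # assumed model identifier
    },
    "prompt": {
        "task_guidelines": "Classify each comment as toxic or not toxic.",
        "labels": ["toxic", "not toxic"],
        "example_template": "Comment: {example}\nLabel: {label}",
    },
}

agent = LabelingAgent(config)
ds = AutolabelDataset("comments.csv", config=config)  # hypothetical input file

agent.plan(ds)            # dry run: preview prompts and estimate labeling cost
labeled_ds = agent.run(ds)  # label the dataset with Refuel LLM
```

A dry-run `plan` step before `run` is useful here because it surfaces the constructed prompts and an estimated cost before any labeling calls are made.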