
OpenPipe Mixture of Agents: Outperform GPT-4 at 1/25th the Cost

Blog post from OpenPipe

Post Details

Company: OpenPipe
Date Published:
Author: Kyle Corbitt and Saumya Gandhi
Word Count: 1,301
Language: English
Hacker News Points: 13
Summary

OpenPipe has developed and released its Mixture of Agents (MoA) model, achieving state-of-the-art results on several benchmarks relative to GPT-4, a leading large language model. MoA is designed as a drop-in replacement for GPT-4 and can be used to generate synthetic training data, fine-tune smaller models, and improve response quality. It has been benchmarked on both open-source suites, including Arena Hard Auto and AlpacaEval 2.0, and private benchmarks, outperforming GPT-4 variants in many cases. MoA is also 1/25th the cost of GPT-4-Turbo and 3x faster, making it an attractive option for teams that want to improve their models without breaking the bank. The OpenPipe platform offers MoA through its Chat Completions endpoint, and it can also serve as a relabeling model for fine-tuning smaller models.
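
Since MoA is exposed through a Chat Completions endpoint and positioned as a drop-in replacement for GPT-4, swapping it in could look like the minimal Python sketch below. The base URL and the model identifier "openpipe:moa-gpt-4-v1" are illustrative assumptions, not values confirmed by the post; consult the OpenPipe docs for the actual ones.

    # Minimal sketch of calling the MoA model through an OpenAI-compatible
    # Chat Completions endpoint. The base_url and model name are assumptions
    # for illustration only.
    import os

    from openai import OpenAI

    client = OpenAI(
        base_url="https://app.openpipe.ai/api/v1",  # assumed OpenPipe endpoint
        api_key=os.environ["OPENPIPE_API_KEY"],     # assumed env var name
    )

    response = client.chat.completions.create(
        model="openpipe:moa-gpt-4-v1",  # hypothetical MoA model identifier
        messages=[
            {"role": "user", "content": "Draft a release note for our new feature."},
        ],
    )

    print(response.choices[0].message.content)

Because the request shape matches the standard Chat Completions API, the same call could later be pointed at a fine-tuned smaller model, which is where the relabeling workflow described above would slot in.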