Company: Together
Date Published: (unknown)
Author: (unknown)
Word count: 1045
Language: English
Hacker News points: None

Summary

The Stanford Center for Research on Foundation Models (CRFM) has announced Holistic Evaluation of Language Models (HELM), a comprehensive effort to benchmark 30 language models, including GPT-3, across 42 scenarios ranging from question answering to sentiment analysis. To support this effort, the Together Research Computer aggregates idle GPU cycles across thousands of servers, running inference over more than 11 billion input tokens and 1.6 billion output tokens on 10 open language models such as Flan-T5, BLOOMZ, and Galactica. This decentralized computing approach aims to make the field more accessible to researchers and practitioners by easing the computation bottlenecks of large language models. The project is an early step toward efficient, shared compute for AI, with the goal of bringing the world's compute together so that everyone can contribute to and benefit from advanced AI models.