
Building your own RAG application using Together AI and LlamaIndex

Blog post from Together AI

Post Details
Company: Together AI
Word Count: 615
Language: English
Summary

You can build a powerful Retrieval Augmented Generation (RAG) application using Together AI's cloud platform and LlamaIndex. Together AI provides fast, cost-efficient model serving without requiring the technical expertise to train a model, while LlamaIndex handles data ingestion and retrieval. RAG pairs a generative model with a retrieval model to improve knowledge-intensive tasks, supplying up-to-date information from external data sources during response generation. The workflow is to create a vector store, index your source documents with an embedding model of your choice, retrieve the information relevant to a query, augment the original query with that context, and have a large language model (LLM) generate an accurate, grounded response. The post demonstrates this with a quickstart example that incorporates a new article into a RAG application using the Together API and LlamaIndex. Together, these tools offer faster development, lower costs, and improved performance, making them an attractive option for building RAG-based solutions.
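The index-retrieve-augment-generate loop described above can be sketched in plain Python. This is a dependency-free illustration only: `embed`, `similarity`, and `generate` are hypothetical stand-ins for the embedding model, vector similarity search, and LLM call that a real application would make through LlamaIndex with Together AI's models.

```python
def embed(text):
    # Stand-in "embedding": a bag-of-words token set.
    # A real app would call an embedding model via the Together API.
    return set(text.lower().split())

def similarity(a, b):
    # Jaccard overlap between token sets, a stand-in for cosine similarity.
    return len(a & b) / len(a | b) if a | b else 0.0

class VectorStore:
    """Toy vector store: indexes each source document by its embedding."""

    def __init__(self, documents):
        self.entries = [(doc, embed(doc)) for doc in documents]

    def retrieve(self, query, k=1):
        # Rank documents by similarity to the query embedding, return top k.
        q = embed(query)
        ranked = sorted(self.entries,
                        key=lambda e: similarity(q, e[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

def generate(prompt):
    # Stand-in for an LLM completion call (e.g., a Together AI chat model).
    return f"[LLM answer grounded in:]\n{prompt}"

def rag_query(store, query):
    # Retrieve relevant context, augment the original query, then generate.
    context = "\n".join(store.retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

store = VectorStore([
    "Together AI offers fast, cost-efficient inference for open models.",
    "LlamaIndex connects LLMs to external data sources.",
])
print(rag_query(store, "What does LlamaIndex do?"))
```

Swapping the stubs for real components (an embedding model for `embed`, a vector database for `VectorStore`, and an LLM endpoint for `generate`) yields the architecture the post describes, without changing the control flow.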