
Build a Multi-LLM AI Agent with Kong AI Gateway & LangGraph

Blog post from Kong

Post Details
Company: Kong
Author: Claudio Acquaviva
Word Count: 1,307
Language: English
Summary

In the final part of a series on evolving AI Agents with Kong AI Gateway, the focus shifts to integrating multiple Large Language Models (LLMs) and implementing Semantic Routing policies. Built on Kong AI Gateway 3.11, which supports a variety of GenAI infrastructures, the configuration adds new LLMs such as Mistral and Anthropic alongside OpenAI, so that requests can be routed selectively based on factors such as cost and semantics.

Semantic Routing is implemented with the AI Proxy Advanced plugin, which uses Redis as a vector database and an embedding model served by Ollama to route each request to the most topically relevant LLM. The AI Agent itself is a Python script combining new functions with pre-built LangGraph functionality for tool integration, while Grafana Dashboards monitor request metrics. The post also walks through deploying a LangGraph Server on Minikube, using Docker images and requiring a LangSmith API Key for monitoring.

Overall, the setup improves AI Agents' ability to handle diverse GenAI tasks, including text, video, and audio processing, by abstracting the underlying LLM infrastructures and applying protective policies through Kong AI Gateway.
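The core idea of Semantic Routing is to embed the incoming request and compare it against stored embeddings that describe what each LLM is best suited for, then forward the request to the closest match. A toy sketch of that comparison step (the vectors, route names, and function are illustrative only; in the actual setup the AI Proxy Advanced plugin computes real embeddings via Ollama and stores them in Redis):

```python
import math

# Hypothetical per-LLM "topic" embeddings; in Kong's setup these would be
# real embeddings produced by an Ollama model and held in Redis.
ROUTES = {
    "mistral":   [1.0, 0.1, 0.0],  # toy vector for one topic area
    "anthropic": [0.0, 1.0, 0.2],  # toy vector for another
    "openai":    [0.1, 0.0, 1.0],  # toy vector for general requests
}

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def pick_llm(request_embedding):
    # Route to whichever LLM's topic embedding is most similar
    # to the embedding of the incoming request.
    return max(ROUTES, key=lambda name: cosine(ROUTES[name], request_embedding))

print(pick_llm([0.9, 0.2, 0.1]))  # closest to the "mistral" toy vector
```

This is only the selection logic; the gateway additionally abstracts credentials, applies plugins, and proxies the request to the chosen provider.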