Open Sourcing Guardrails on the Gateway Framework
Blog post from Portkey
Portkey was built to address the challenges of running large language model (LLM) applications in production: debugging, cost visibility, prompt iteration, and model integration. Its open-source AI Gateway now processes billions of LLM tokens daily, helping many companies operate their AI applications reliably.

Even so, LLM outputs remain unpredictable; they can be factually inaccurate, biased, or privacy-violating. To address this, Portkey is integrating Guardrails, systems that inspect, control, and guide LLM outputs, into its platform, making AI applications more robust.

Recognizing that guardrails are not its core expertise, Portkey is partnering with leading AI guardrail platforms to improve LLM behavior management. Guardrails on Portkey's Gateway, available both through the open-source repository and the hosted app, mark a significant step toward closing the production gap for AI applications, with continuous learning and collaboration seen as essential to what comes next.
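To make the idea concrete, here is a minimal sketch of how a guardrail can sit between a model and the caller. This is a hypothetical illustration of the general pattern, not Portkey's actual API: the `pii_guardrail` check and `guarded_completion` wrapper are invented names, and a real gateway would support many checks, async execution, and configurable verdict handling.

```python
import re

# A simple content check: block responses that appear to leak email
# addresses. Real guardrail platforms offer many such validators
# (toxicity, bias, PII, factuality, and so on).
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pii_guardrail(output: str) -> dict:
    """Return a pass/fail verdict for a model output."""
    if EMAIL_PATTERN.search(output):
        return {"verdict": "fail", "reason": "output contains an email address"}
    return {"verdict": "pass", "reason": None}

def guarded_completion(llm_call, prompt: str) -> str:
    """Run an LLM call, then apply the guardrail before returning the output."""
    output = llm_call(prompt)
    result = pii_guardrail(output)
    if result["verdict"] == "fail":
        # The gateway decides what a failed check means: withhold,
        # redact, retry, or merely log. Here we withhold.
        return "[response withheld by guardrail: " + result["reason"] + "]"
    return output

# Stub model standing in for a real LLM backend.
leaky_model = lambda prompt: "Contact me at alice@example.com for details."
safe_model = lambda prompt: "You can reach support through the help center."

print(guarded_completion(leaky_model, "How do I reach support?"))
print(guarded_completion(safe_model, "How do I reach support?"))
```

Placing this logic in the gateway, rather than in each application, means every model and every app behind the gateway gets the same checks without code changes.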