Switchable models come to Tabnine Chat
Blog post from Tabnine
Tabnine has introduced a feature that lets users switch between different large language models (LLMs) for its AI chat, giving them control over model selection based on project requirements or team needs. Users can choose among the proprietary Tabnine Protected model, a model built in collaboration with Mistral, and popular models such as GPT-3.5 Turbo and GPT-4 Turbo, letting them balance performance against privacy and protection concerns. The update is aimed at the rapidly evolving generative AI landscape: users can prioritize performance or compliance without being locked into a single model or vendor. Tabnine says users retain the full capabilities of its AI tools, including code generation and AI-created tests, along with integration with popular development environments. Switchable models are intended to give users access to the latest AI innovations without additional cost or the need for multiple tools.
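To make the idea of switchable models concrete, here is a minimal, hypothetical sketch of what a model-routing abstraction could look like in application code. The `ChatModel` interface, `ModelRouter` class, and model identifiers below are illustrative assumptions for this example only; they do not reflect Tabnine's actual API or implementation.

```typescript
// Hypothetical sketch of a switchable-model abstraction (not Tabnine's API).

interface ChatModel {
  id: string;
  complete(prompt: string): Promise<string>;
}

// Stub backends standing in for the kinds of models mentioned in the post.
const models: ChatModel[] = [
  { id: "tabnine-protected", complete: async (p) => `[protected] ${p}` },
  { id: "gpt-3.5-turbo",     complete: async (p) => `[gpt-3.5] ${p}` },
  { id: "gpt-4-turbo",       complete: async (p) => `[gpt-4] ${p}` },
];

class ModelRouter {
  private active: ChatModel;

  constructor(private available: ChatModel[], defaultId: string) {
    this.active = this.find(defaultId);
  }

  private find(id: string): ChatModel {
    const model = this.available.find((m) => m.id === id);
    if (!model) throw new Error(`Unknown model: ${id}`);
    return model;
  }

  // Switching models only changes where future chat requests are routed;
  // the calling code stays the same.
  switchTo(id: string): void {
    this.active = this.find(id);
  }

  chat(prompt: string): Promise<string> {
    return this.active.complete(prompt);
  }
}

// Usage: start on a privacy-focused default, then opt into a different model.
async function demo() {
  const router = new ModelRouter(models, "tabnine-protected");
  console.log(await router.chat("Generate a unit test for parseDate()"));
  router.switchTo("gpt-4-turbo");
  console.log(await router.chat("Generate a unit test for parseDate()"));
}

demo();
```

The design point the sketch illustrates is the one the announcement emphasizes: because the chat surface is decoupled from any single backend, swapping models is a configuration choice rather than a tooling change.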