Content Deep Dive
DeepSeek vs Qwen: local model showdown in Python, featuring LaunchDarkly AI Configs
Blog post from LaunchDarkly
Post Details
Company
Date Published
Author
Tilde Thurium
Word Count
1,927
Language
English
Hacker News Points
-
Summary
LaunchDarkly's new AI Configs now support bringing your own model, adding flexibility for fine-tuned models and for models running on local hardware. Ollama is an open-source tool for running large language models locally. The post walks through connecting LaunchDarkly to Ollama in Python, creating a custom-model AI Config that tracks metrics such as latency, token usage, and generation count. It also showcases the capabilities of reasoning models and points to guidance on metric tracking, advanced targeting, and further reading on runtime model management.
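The metric tracking the summary describes can be sketched in plain Python. This is a hypothetical illustration, not the LaunchDarkly AI SDK: the `MetricsTracker` class and `generate` function are invented names, and the model call is stubbed where the tutorial would invoke a local model through Ollama (e.g. `ollama.chat(...)`).

```python
import time

class MetricsTracker:
    """Illustrative tracker for the three metrics the post mentions:
    latency, token usage, and generation count (not the real SDK)."""

    def __init__(self):
        self.generation_count = 0
        self.total_tokens = 0
        self.latencies_ms = []

    def record(self, duration_ms, prompt_tokens, completion_tokens):
        self.generation_count += 1
        self.total_tokens += prompt_tokens + completion_tokens
        self.latencies_ms.append(duration_ms)

def generate(tracker, prompt):
    # The tutorial would call a locally running model via Ollama here;
    # we stub the response so this sketch is self-contained and runnable.
    start = time.monotonic()
    response = {
        "prompt_tokens": len(prompt.split()),  # crude stand-in for tokenization
        "completion_tokens": 5,
        "text": "stubbed local-model reply",
    }
    duration_ms = (time.monotonic() - start) * 1000
    tracker.record(duration_ms,
                   response["prompt_tokens"],
                   response["completion_tokens"])
    return response["text"]

tracker = MetricsTracker()
generate(tracker, "Compare DeepSeek and Qwen for local inference")
print(tracker.generation_count, tracker.total_tokens)
```

In the real integration these values would be reported to LaunchDarkly so the AI Config can compare models (here, DeepSeek vs. Qwen) on live usage data.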