In 2024, the landscape of Large Language Model (LLM) applications continued to evolve, marked by broader adoption of open-source models and a shift toward AI agent applications built on complex, multi-step workflows. OpenAI remained the most popular LLM provider among LangSmith users, while Ollama and Groq gained momentum, reflecting demand for flexible, customizable AI infrastructure. Vector stores such as Chroma and FAISS remained prevalent, with newer entrants like Milvus and MongoDB making strides. Developers increasingly used Python for building LLM apps, though interest in JavaScript grew as web-first applications became more common. The LangGraph framework, introduced in March 2024, supported the rise of AI agents by enabling more intricate, agentic interactions, as evidenced by a significant increase in tool calls within traces. Developers balanced complexity with performance, building more sophisticated workflows while keeping LLM calls efficient. Quality assurance was also a priority, with organizations using LangSmith's evaluation tools to test and improve application reliability through automated testing and human feedback loops. Overall, developers embraced both complexity and efficiency, paving the way for smarter, more reliable LLM applications.
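
To make the tool-call trend concrete, here is a minimal sketch of the kind of agentic workflow that produces tool calls in a trace. It is an illustrative example only, not the setup behind the report's measurements: it assumes the `langgraph`, `langchain-core`, and `langchain-openai` packages are installed, and the `get_word_length` tool and model choice are hypothetical placeholders.

```python
# Minimal sketch of an agentic workflow whose tool calls would appear in a
# LangSmith trace. Assumes langgraph, langchain-core, and langchain-openai
# are installed and OPENAI_API_KEY is set; get_word_length is a hypothetical
# example tool, not something taken from the report.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


model = ChatOpenAI(model="gpt-4o-mini")  # any tool-calling model works here
agent = create_react_agent(model, [get_word_length])

# Each tool invocation the agent makes is recorded as a tool call in the trace.
result = agent.invoke({"messages": [("user", "How many letters are in 'LangGraph'?")]})
print(result["messages"][-1].content)
```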