The updated Smart Turn v3 model offers significant advances over its predecessor and competing models: it has been reduced to 8 MB and runs CPU inference in about 12 milliseconds, so no GPU is required. It supports 23 languages and improves accuracy despite the smaller footprint, and it remains fully open source, with weights, training data, and training scripts all available. Architecturally, the model builds on Whisper Tiny with int8 quantization, which keeps inference fast while preserving accuracy. Compared with models such as Krisp and Ultravox, Smart Turn v3 stands out on decision latency because it makes its end-of-turn decision in a single inference pass. The release also introduces an open, transparent benchmarking initiative, developed in collaboration with other developers, to evaluate turn-detection accuracy. Smart Turn v3 can run in Pipecat Cloud instances or be used standalone via the ONNX runtime (see the sketch below), with room for further performance tuning. Work is also underway to improve accuracy across languages, and the community is invited to help review data samples.
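For the standalone path, a minimal sketch of running the model with the ONNX runtime on CPU is shown below. The model filename, input/output names, expected sample rate, and the single-probability output are assumptions for illustration; the Smart Turn repository documents the exact interface.

```python
# Minimal sketch: standalone Smart Turn inference via ONNX Runtime on CPU.
# Assumptions (not confirmed by the source): model file name, 16 kHz mono
# float32 audio input, and a single turn-completion probability as output.
import numpy as np
import onnxruntime as ort

SAMPLE_RATE = 16_000  # assumed input sample rate

# Load the quantized model with the CPU execution provider only.
session = ort.InferenceSession(
    "smart-turn-v3.onnx",
    providers=["CPUExecutionProvider"],
)

def is_turn_complete(audio: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if the model judges the speaker's turn to be finished.

    `audio` is assumed to be a 1-D float32 array of 16 kHz samples covering
    the most recent few seconds of speech.
    """
    input_name = session.get_inputs()[0].name
    outputs = session.run(None, {input_name: audio[np.newaxis, :]})
    probability = float(np.asarray(outputs[0]).squeeze())  # assumed: completion probability
    return probability >= threshold

# Example call with a few seconds of silence as a placeholder input.
silence = np.zeros(SAMPLE_RATE * 4, dtype=np.float32)
print(is_turn_complete(silence))
```

For performance tuning, standard ONNX runtime knobs apply, for example constructing the session with `ort.SessionOptions()` and setting `intra_op_num_threads` to match the available cores; whether this helps for a model this small is workload-dependent.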