Ludwig 0.6 introduces support for exporting models to TorchScript, making it easier to deploy machine learning models for efficient inference in production. Whereas a vanilla Ludwig model runs preprocessing, prediction, and postprocessing as separate Python stages, the exported model is a fully serialized TorchScript artifact that no longer requires the original Python code. TorchScript models run inference with improved performance and fewer dependencies, and can be deployed across a range of environments, including mobile and C++.

The export produces a three-stage pipeline, with separate modules for preprocessing, prediction, and postprocessing. Keeping the stages separate allows resources to be allocated where they are most effective, for example placing preprocessing on CPU and prediction on GPU, or scaling the stages independently behind a model-serving tool such as NVIDIA Triton.

There are some initial limitations: support for HuggingFace encoders is limited, and Image, Audio, and Date features have specific preprocessing requirements. Overall, this feature gives Ludwig users an efficient, backend-independent, and scalable path to deploying machine learning models, with further support available through Predibase, a Declarative ML platform built on Ludwig.
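To make the idea concrete, here is a minimal sketch of the pattern described above using plain PyTorch (not Ludwig's actual export API, whose entry points may differ): preprocessing, prediction, and postprocessing are written as separate `nn.Module`s, composed into one pipeline, scripted to TorchScript, and reloaded without any of the original class definitions. The module names and toy transforms here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Preprocessor(nn.Module):
    # Toy stand-in for feature preprocessing: normalize raw inputs.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x - x.mean()) / (x.std() + 1e-6)

class Predictor(nn.Module):
    # Toy stand-in for the trained model: a single linear layer.
    def __init__(self) -> None:
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

class Postprocessor(nn.Module):
    # Toy stand-in for postprocessing: turn logits into class indices.
    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        return torch.argmax(logits, dim=-1)

class Pipeline(nn.Module):
    # Three-stage pipeline; each stage could also be scripted and
    # served separately (e.g. preprocessing on CPU, prediction on GPU).
    def __init__(self) -> None:
        super().__init__()
        self.preprocess = Preprocessor()
        self.predict = Predictor()
        self.postprocess = Postprocessor()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.postprocess(self.predict(self.preprocess(x)))

# Script and serialize: the saved file is self-contained.
scripted = torch.jit.script(Pipeline())
torch.jit.save(scripted, "pipeline.pt")

# The reloaded module runs without the Python classes defined above,
# and the same file can be loaded from C++ via torch::jit::load.
reloaded = torch.jit.load("pipeline.pt")
preds = reloaded(torch.randn(3, 4))
print(preds.shape)  # torch.Size([3])
```

In Ludwig itself the export is a single call rather than hand-written modules, but the serialized result follows this same principle: a TorchScript program that carries the whole inference path with it.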