The article, part of a three-part series, explores the use of Ludwig, a deep learning toolkit built on TensorFlow, for hyperparameter optimization of a recurrent neural network (RNN) performing sentiment analysis on the Stanford Sentiment Treebank dataset. It explains what hyperparameters are, their role in defining the model architecture and the training process, and why hyperparameter optimization matters for model performance. The article surveys common sampling strategies, including grid search, random search, and Bayesian optimization, and settles on random search for its simplicity in this context.

The Ludwig hyperopt module drives the optimization, with a configuration designed to maximize the model's validation accuracy by varying the learning rate, state size, RNN cell type, and number of encoder layers. The results show that the LSTM cell type achieved the best average performance, and the article introduces visualization tools such as hiplot to help interpret the search results. Ultimately, the optimization yielded improved accuracy over the earlier models in the series, demonstrating how Ludwig simplifies deep learning workflows for data scientists and machine learning engineers.
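Since the summary describes the hyperopt configuration only abstractly, a minimal sketch of what such a configuration might look like follows. It assumes the Ludwig 0.3-era (TensorFlow-based) hyperopt schema; the input feature name `text`, the parameter ranges, and the sampling budget are illustrative assumptions rather than values taken from the article, and exact key names may differ across Ludwig versions.

```yaml
# Hypothetical hyperopt section of a Ludwig config (0.3-era schema assumed).
# It randomly samples the four hyperparameters the summary mentions and
# maximizes accuracy on the validation split.
hyperopt:
  goal: maximize
  output_feature: combined
  metric: accuracy
  split: validation
  parameters:
    training.learning_rate:   # continuous; learning rates are often searched on a log scale
      type: float
      low: 0.0001
      high: 0.1
      scale: log
    text.state_size:          # hidden-state size of the RNN encoder (illustrative range)
      type: int
      low: 128
      high: 512
    text.cell_type:           # plain RNN vs. GRU vs. LSTM
      type: category
      values: [rnn, gru, lstm]
    text.num_layers:          # number of stacked encoder layers
      type: int
      low: 1
      high: 3
  sampler:
    type: random              # random search, as chosen in the article
    num_samples: 10           # illustrative sampling budget
  executor:
    type: serial
```

With a section like this added to the model config, the search could be launched with something like `ludwig hyperopt --config config.yaml --dataset train.csv`, though the exact command-line flags depend on the Ludwig version in use.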