
Improving Product Search with Learning to Rank - part three

Blog post from Vespa

Post Details
Company: Vespa
Date Published: -
Author: Jo Kristian Bergum
Word Count: 2,708
Language: English
Hacker News Points: -
Summary

The blog post discusses the use of Gradient Boosted Decision Trees (GBDT) to improve product search through learning to rank, as part of a series exploring ranking models. It highlights Vespa's native support for evaluating GBDT models and for importing them from popular frameworks such as XGBoost and LightGBM.

The post explains how GBDT models handle multi-objective ranking optimization by combining diverse feature types, such as normalized and unnormalized relevance features alongside business features like sales margin. It emphasizes prediction explainability and computational efficiency, noting that GBDT models require far less computational power than neural networks and train quickly. The article also details how unstructured data is converted into tabular features for GBDT training, and the role Vespa plays in computing and logging those features.

It concludes by comparing the performance of models trained with different feature sets and frameworks, finding that although GBDT models improve ranking quality, they do not fully exploit unstructured text data, where neural methods excel. The next installment in the series will focus on end-to-end retrieval challenges.