Company
Elastic
Date Published
Author
Simon Willnauer • Shay Banon
Word count
1304
Language
English
Hacker News points
None

Summary

In the first post of a three-part series on Instant Aggregations, Simon Willnauer and Shay Banon of Elastic explore the challenges of improving Kibana query performance through caching. The story begins in 2013 with discussions about slow initial dashboard loads: Elasticsearch recomputed every query from scratch, even when nearly identical searches had just been executed. Brainstorming sessions and technical analysis led to the realization that Apache Lucene's point-in-time index views could enable more effective caching. The team proposed keying a request cache on Lucene's top-level reader and rewriting queries based on their time properties, so that equivalent requests could share cached results. Despite initial enthusiasm and a prototype built by Mike McCandless, the team recognized that substantial refactoring of the codebase was required, demanding significant engineering effort. The potential gains in search efficiency and the reduced workload from Kibana dashboards justified the investment, which turned into roughly twelve months of development plus another six months before an alpha version of the feature shipped. The post concludes by pointing to further details in the subsequent articles.
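
The core idea the summary describes (a cache keyed on a shard's point-in-time reader plus a rewritten, normalized request) can be sketched in a few lines of Java. This is a simplified illustration under assumptions, not Elasticsearch's actual implementation: `readerKey` stands in for Lucene's per-reader cache key, and `rewriteTimeBounds` is a hypothetical stand-in for the time-based query rewriting the post alludes to (e.g. rounding a "now"-relative window so that back-to-back dashboard refreshes produce identical cache keys).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/**
 * Minimal sketch of a shard-level request cache (illustrative only).
 * Entries are keyed on the identity of the point-in-time reader plus the
 * rewritten request, so a cached result is reused only while the underlying
 * index view is unchanged and the request is semantically identical.
 */
final class RequestCacheSketch {

    /** Composite key: point-in-time reader identity + normalized request. */
    record Key(Object readerKey, String rewrittenRequest) {}

    private final Map<Key, String> cache = new ConcurrentHashMap<>();

    /**
     * Hypothetical time-bound rewriting: round a "now"-relative placeholder to
     * the nearest minute so repeated dashboard refreshes share one cache key.
     */
    static String rewriteTimeBounds(String request, long nowMillis) {
        long roundedNow = (nowMillis / 60_000) * 60_000;
        return request.replace("${now}", Long.toString(roundedNow));
    }

    /** Returns the cached response if present, otherwise computes and caches it. */
    String getOrCompute(Object readerKey, String request, long nowMillis,
                        Supplier<String> compute) {
        Key key = new Key(readerKey, rewriteTimeBounds(request, nowMillis));
        return cache.computeIfAbsent(key, k -> compute.get());
    }
}
```

The design point this sketch captures is the one the authors credit for making caching viable at all: because a Lucene top-level reader is an immutable point-in-time view, a response computed against it stays valid for as long as that reader is in use, and rewriting time-relative parts of the request is what allows "similar" dashboard queries to collapse onto the same cache entry.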