How shredding JSON is giving Logfire 1000x query speedups
Blog post from Pydantic
Logfire is rolling out dynamic shredding for semi-structured data, a change that makes some queries more than 1000 times faster. Previously, Logfire stored attributes as JSON blobs. That design was simple, but it compressed poorly and forced every query to read and parse large blobs, driving up I/O.

With dynamic shredding, frequently accessed attributes are extracted at ingestion time into separate, strongly typed columns. Queries that filter or aggregate on those attributes can read the small typed columns directly instead of repeatedly parsing large JSON blobs, and the typed columns both compress well and support efficient pruning, which is where the large speedups come from.

Building this involved extensive collaboration with, and contributions to, the open-source DataFusion project to support the new feature. The rollout of dynamic shredding starts in February 2026; users will see faster query performance without any changes to their existing workflows.
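The blob-versus-column tradeoff can be sketched in plain Python. This is a minimal illustration, not Logfire's actual schema or implementation: the attribute name `http.status_code` and the row data are invented for the example, and real shredding happens inside the columnar storage engine, not in application code.

```python
import json

# Hypothetical ingested rows: each span's attributes stored as one JSON blob.
rows = [
    json.dumps({
        "http.status_code": 200 + (i % 3) * 150,  # 200, 350, or 500
        "detail": {"payload": "x" * 50},          # bulky, rarely queried data
    })
    for i in range(1000)
]

# Blob approach: every query must deserialize every full JSON document,
# even when it only needs one small field.
def count_errors_blob(rows):
    return sum(1 for r in rows if json.loads(r)["http.status_code"] >= 500)

# Shredding: at ingest time, pull the frequently accessed attribute out
# into its own strongly typed column. The parse cost is paid once.
status_col = [json.loads(r)["http.status_code"] for r in rows]

# Column approach: the query scans only the small typed column. A columnar
# engine can additionally compress it well and prune whole chunks using
# min/max statistics; neither is modeled in this sketch.
def count_errors_column(col):
    return sum(1 for s in col if s >= 500)

print(count_errors_blob(rows))    # rows where i % 3 == 2
print(count_errors_column(status_col))
```

Both queries return the same answer; the difference is that the shredded version never touches the bulky `detail` payload, which is the work the blob version cannot avoid.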