Columnar storage offers an efficient way to compress and store logs while keeping queries fast and flexible, achieving up to 178x compression by structuring raw logs into columns, choosing optimized data types, and clustering similar values together on disk. Unlike traces and metrics, logs are often unstructured and compress poorly, yet they contain historical data that is vital for debugging. By transforming logs into structured records, separating out the variable parts of each message, and selecting suitable data types, they can be compressed far more effectively. Experiments with Nginx access logs showed that structuring the logs and storing them column by column already improved the compression ratio significantly, reaching up to 92x. Ordering the data on disk by selected columns with low cardinality and skewed value distributions pushed the ratio further, to 178x. However, in typical scenarios such as time-based querying, where the ordering key must begin with a timestamp, compression efficiency decreases, which highlights how strongly the choice of ordering key affects compression. Ultimately, columnar databases like ClickHouse offer a viable way to achieve high compression rates, improving I/O efficiency and query speed and reducing storage costs, albeit with potential trade-offs in query performance depending on the chosen ordering.
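As a rough illustration of these ideas, the sketch below shows how such a table might be declared in ClickHouse; the table name, column names, and ordering key are hypothetical, not the schema from the experiments. The variable parts of an Nginx access log line become typed columns, and the ordering key places low-cardinality, skewed columns first so that similar values end up adjacent on disk:

```sql
-- Hypothetical schema for structured Nginx access logs.
-- Typed columns compress far better than one raw String line per entry.
CREATE TABLE nginx_access_logs
(
    remote_addr  IPv4,                    -- client IP stored as a 4-byte integer, not text
    timestamp    DateTime,                -- second-precision timestamps delta-encode well
    method       LowCardinality(String),  -- only a handful of distinct values (GET, POST, ...)
    path         String,
    status       UInt16,                  -- an HTTP status code fits in 2 bytes
    bytes_sent   UInt32,
    user_agent   LowCardinality(String)   -- skewed distribution: a few agents dominate
)
ENGINE = MergeTree
-- Low-cardinality, skewed columns first: rows with identical values are
-- stored adjacently, so each column compresses into long repeated runs.
ORDER BY (user_agent, method, status, timestamp);
```

A time-first variant, `ORDER BY (timestamp, ...)`, would make range scans over time windows cheap but would interleave the values of the other columns, which is exactly the compression trade-off described above.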