The primary contributors to overall system performance are how the working set relates to both the storage engine cache size (the memory dedicated to storing data) and disk performance (which imposes a physical limit on how quickly data can be accessed). Using YCSB, we explore the interaction between disk performance and cache size and demonstrate how these two factors affect performance. Keeping the working set inside memory yields optimal application performance, while exceeding this limit degrades both latency and overall throughput.

Understanding disk metrics such as throughput, latency, IOPS, and utilization is important for establishing a disk's baseline performance and identifying potential bottlenecks. Disk utilization indicates the percentage of time the disk is busy servicing queued requests, and sustained high utilization often points to a disk bottleneck. Testing disk performance with synthetic benchmarks such as YCSB can show how different workloads interact with the storage engine cache and the disk, but those results may not translate directly to production environments.

The WiredTiger storage engine performs its own caching and compresses documents on disk with the snappy compression algorithm by default. Queries are serviced from the WiredTiger cache; if the requested data is already in the cache, the disk read is skipped.

The filesystem cache is an operating system construct that keeps frequently accessed files in memory to speed up reads. This improves read performance as long as the working set fits in available memory; once it does not, disk performance quickly becomes the limiting factor for throughput.
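To make the relationship between the working set and the cache size concrete, here is a minimal Python sketch that estimates whether a working set fits in the WiredTiger internal cache. It assumes the documented default cache sizing rule of the larger of 50% of (RAM - 1 GB) or 256 MB; the sizes and the helper names are hypothetical placeholders, not measurements from the benchmark.

```python
# Sketch: estimate whether a working set fits in the default WiredTiger cache.
# Assumes the documented default sizing rule: max(50% of (RAM - 1 GB), 256 MB).
# The RAM, data, and index sizes below are hypothetical placeholders.

GiB = 1024 ** 3
MiB = 1024 ** 2


def default_wiredtiger_cache_bytes(total_ram_bytes: int) -> int:
    """Default internal cache size: the larger of 50% of (RAM - 1 GB) or 256 MB."""
    return max(int(0.5 * (total_ram_bytes - 1 * GiB)), 256 * MiB)


def working_set_fits(data_bytes: int, index_bytes: int, total_ram_bytes: int) -> bool:
    """Rough check: frequently accessed documents plus indexes vs. the cache budget."""
    cache_bytes = default_wiredtiger_cache_bytes(total_ram_bytes)
    return (data_bytes + index_bytes) <= cache_bytes


if __name__ == "__main__":
    ram = 16 * GiB        # hypothetical host RAM
    hot_data = 6 * GiB    # hypothetical frequently accessed documents
    indexes = 2 * GiB     # hypothetical index size
    cache = default_wiredtiger_cache_bytes(ram)
    print(f"cache budget: {cache / GiB:.1f} GiB")
    print("working set fits in cache" if working_set_fits(hot_data, indexes, ram)
          else "working set exceeds cache; expect disk reads")
```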
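The disk metrics discussed above are connected by simple arithmetic, which the following sketch illustrates under the usual simplifying assumptions: throughput is roughly IOPS times the average I/O size, and utilization is roughly IOPS times the average service time. The numbers are illustrative, not measured values.

```python
# Sketch: back-of-the-envelope relationships between disk metrics.
# throughput  ~= IOPS * average I/O size
# utilization ~= IOPS * average service time (fraction of time the disk is busy)
# Values below are illustrative, not measured.

def throughput_mib_s(iops: float, avg_io_kib: float) -> float:
    """Read/write mix collapsed into a single average request size."""
    return iops * avg_io_kib / 1024


def utilization(iops: float, avg_service_time_ms: float) -> float:
    """Approximate busy fraction; values near 1.0 suggest a disk bottleneck."""
    return min(iops * (avg_service_time_ms / 1000.0), 1.0)


if __name__ == "__main__":
    iops = 5000        # hypothetical read+write operations per second
    avg_io_kib = 16    # hypothetical average request size (KiB)
    svc_ms = 0.15      # hypothetical average service time per request (ms)

    print(f"throughput : {throughput_mib_s(iops, avg_io_kib):.1f} MiB/s")
    print(f"utilization: {utilization(iops, svc_ms) * 100:.0f}%")
```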
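To check whether reads are actually being served from the WiredTiger cache on a running instance, one option is to inspect serverStatus output. This is a minimal sketch assuming a local mongod, the pymongo driver, and the WiredTiger cache field names shown in the code ("bytes currently in the cache", "maximum bytes configured"); treat those field names as assumptions and verify them against your own server's output.

```python
# Sketch: inspect WiredTiger cache usage via serverStatus.
# Assumes a local mongod, the pymongo driver, and the serverStatus field names
# shown below; verify the field names against your own server's output.
from pymongo import MongoClient


def wiredtiger_cache_usage(uri: str = "mongodb://localhost:27017") -> None:
    client = MongoClient(uri)
    status = client.admin.command("serverStatus")
    cache = status["wiredTiger"]["cache"]

    used = cache["bytes currently in the cache"]     # assumed field name
    configured = cache["maximum bytes configured"]   # assumed field name

    pct = 100.0 * used / configured
    print(f"WiredTiger cache: {used / 2**30:.2f} GiB used "
          f"of {configured / 2**30:.2f} GiB configured ({pct:.0f}%)")


if __name__ == "__main__":
    wiredtiger_cache_usage()
```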