The separation of storage and compute is a new paradigm in data platforms that lets businesses pay only for the resources they actually use, reducing waste and improving utilization. This separation was previously impractical because compute and storage were tightly coupled in traditional systems such as Hadoop's HDFS, where scaling one meant scaling the other, resulting in an inelastic data architecture with significant over-provisioning.

With advances in serverless computing and cloud storage, data platforms can now provision whatever storage and compute they need on demand, gaining flexible scaling, higher resource utilization, and pay-per-use pricing. Key technologies that make this separation possible include object storage and network-attached storage, open table and file formats such as Apache Iceberg and Apache Parquet, and query engines such as DuckDB, Trino, and Spark.

Data platforms that have adopted this paradigm include Snowflake, Dremio, and Propel, all of which price storage and compute independently. By identifying their workloads and allocating resources to each accordingly, businesses can take advantage of the separation of storage and compute to build a more flexible data architecture and reduce waste.
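To make the decoupling concrete, here is a minimal sketch of compute running independently of storage: an in-process DuckDB engine is spun up on demand and queries Parquet files that live in object storage, with no dedicated cluster holding the data. The bucket name, path, and column names are hypothetical, and credentials are assumed to come from the environment.

```python
import duckdb

# Ephemeral, in-process query engine -- this is the "compute" side,
# created on demand and discarded when the work is done.
con = duckdb.connect()

# Load the httpfs extension so DuckDB can read directly from object storage
# (the "storage" side), which is billed and scaled separately.
con.execute("INSTALL httpfs; LOAD httpfs;")
con.execute("SET s3_region = 'us-east-1';")  # example region

# Hypothetical dataset: Parquet files under s3://example-analytics-bucket/orders/
rows = con.execute("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM read_parquet('s3://example-analytics-bucket/orders/*.parquet')
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 10
""").fetchall()

print(rows)
```

Because the data stays in cheap object storage and the engine exists only for the duration of the query, you pay for compute only while the query runs, which is the elasticity that tightly coupled systems could not offer.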