The Aerospike Spark connector leverages Aerospike's hybrid memory architecture and massively parallel storage to build high-throughput, low-latency ETL pipelines, making it well suited to real-time environments managing terabyte- to petabyte-scale data volumes.

The connector loads data into a Spark streaming or batch DataFrame in a massively parallel manner, allowing users to process data with Spark APIs in multiple languages, and it writes DataFrames back to Aerospike efficiently through the corresponding Spark write API. Applications can parallelize work on a massive scale, using up to 32,768 Spark partitions to read data from an Aerospike namespace (see the read/write sketch below).

By pushing query predicates down to the database as Aerospike Expressions, the connector builds powerful, efficient filters that significantly reduce data transfer between the Aerospike and Spark clusters. However, the existing Spark filter class limits the number and kind of Aerospike expressions the connector can generate and push down. A side-channel approach addresses this: user-provided Aerospike expressions are injected into the query plan during the code-generation phase via the `.option()` method. This brings the full power of Aerospike Expressions to bear without altering Spark's query planning process, yielding significant performance gains for Spark queries against large datasets stored in Aerospike (see the pushdown sketch after the read/write example).
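As a minimal sketch of the read and write paths, the snippet below loads an Aerospike set into a batch DataFrame and writes a filtered result back. Option keys such as `aerospike.seedhost`, `aerospike.partition.factor`, and `aerospike.updateByKey` follow the connector's documented naming, but the host, namespace, set, and column names here are illustrative assumptions; check the docs for your connector version.

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object AerospikeReadWriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("aerospike-connector-sketch")
      .getOrCreate()

    // Read an Aerospike set into a batch DataFrame. A partition factor of 15
    // splits the scan into 2^15 = 32,768 Spark partitions, the connector's
    // maximum degree of read parallelism.
    val df = spark.read
      .format("aerospike")
      .option("aerospike.seedhost", "localhost:3000") // illustrative seed node
      .option("aerospike.namespace", "test")          // illustrative namespace
      .option("aerospike.set", "users")               // illustrative set
      .option("aerospike.partition.factor", "15")
      .load()

    // Process with ordinary Spark APIs, then write the result back.
    // "id" is an assumed DataFrame column used as the Aerospike primary key.
    df.filter(df("age") > 30)
      .write
      .format("aerospike")
      .mode(SaveMode.Append)
      .option("aerospike.seedhost", "localhost:3000")
      .option("aerospike.namespace", "test")
      .option("aerospike.set", "users_over_30")
      .option("aerospike.updateByKey", "id")
      .save()

    spark.stop()
  }
}
```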
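And here is a sketch of the side-channel itself, under the assumption that the connector accepts a base64-encoded expression through an option key of the form `aerospike.pushdown.expressions` (verify the exact key against your connector version's docs). The expression is built once on the driver with the Aerospike Java client's `Exp` API; the digest-modulo filter shown is a kind of record-sampling predicate that Spark's own filter pushdown cannot express.

```scala
import com.aerospike.client.exp.Exp
import org.apache.spark.sql.SparkSession

object PushdownExpressionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("aerospike-pushdown-sketch")
      .getOrCreate()

    // Build an Aerospike Expression with the Java client: keep only records
    // whose digest modulo 3 equals 1, i.e., roughly one third of the set.
    val filterExp = Exp.build(
      Exp.eq(Exp.digestModulo(3), Exp.`val`(1))
    )

    // Serialize the compiled expression to base64 and hand it to the connector
    // through .option(); the filter is evaluated inside Aerospike, so only
    // matching records cross the wire to the Spark cluster.
    val df = spark.read
      .format("aerospike")
      .option("aerospike.seedhost", "localhost:3000") // illustrative seed node
      .option("aerospike.namespace", "test")          // illustrative namespace
      .option("aerospike.set", "users")               // illustrative set
      .option("aerospike.pushdown.expressions", filterExp.getBase64())
      .load()

    df.show()
    spark.stop()
  }
}
```

Because the expression travels through `.option()` rather than through Spark's filter-translation machinery, the query plan itself is untouched: Spark still sees an ordinary scan, while Aerospike applies the full expression server-side.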