CockroachDB's 1.1 release introduces a high-speed bulk data import feature designed to ease migration from existing databases via CSV files, a common export format. The feature converts CSVs into SST files in the same format used by CockroachDB's enterprise backup/restore, ingesting data far faster than traditional methods such as INSERT statements or the Postgres COPY protocol.

The initial implementation was a local, single-node process that used RocksDB for sorting, requiring 3x disk space; it was later integrated with CockroachDB's SQL layer to reduce that disk usage. A second, distributed implementation builds on the DistSQL framework to run the conversion in parallel across nodes: it samples the CSV data to decide how rows should be routed and where the resulting files should be split, improving performance on larger datasets.

The distributed approach is still undergoing testing, but it aims to be the more efficient option for bulk import, and future versions may let users choose the method best suited to their data's size and structure.
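To make the sampling idea concrete, here is a minimal Go sketch (hypothetical, not CockroachDB's actual DistSQL code, and with made-up function names): a random sample of primary keys is drawn while scanning the CSV, and evenly spaced values from the sorted sample become the boundaries used to route rows to nodes and to split the output SST files.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"math/rand"
	"sort"
	"strings"
)

// sampleKeys scans CSV rows and keeps each row's key column with probability p.
// Illustrative only: the real sampler operates on encoded keys across many nodes.
func sampleKeys(r *csv.Reader, keyCol int, p float64) ([]string, error) {
	var samples []string
	for {
		rec, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if rand.Float64() < p {
			samples = append(samples, rec[keyCol])
		}
	}
	return samples, nil
}

// splitPoints sorts the sample and picks n-1 evenly spaced boundaries, dividing
// the keyspace into n roughly equal ranges used to route rows and split output files.
func splitPoints(samples []string, n int) []string {
	if len(samples) == 0 {
		return nil
	}
	sort.Strings(samples)
	var splits []string
	for i := 1; i < n; i++ {
		splits = append(splits, samples[i*len(samples)/n])
	}
	return splits
}

func main() {
	data := "1,alice\n2,bob\n3,carol\n4,dave\n5,erin\n6,frank\n7,grace\n8,heidi\n"
	r := csv.NewReader(strings.NewReader(data))
	keys, err := sampleKeys(r, 0, 1.0) // sample every row for this tiny demo
	if err != nil {
		panic(err)
	}
	fmt.Println("split points:", splitPoints(keys, 4))
}
```

The appeal of sampling is that only a small fraction of rows has to be collected and sorted to estimate the key distribution, so the cost of choosing split points stays low even as the input grows.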