The text discusses using Dagster and dlt to orchestrate data pipelines. dlt is an open-source Python library that declaratively loads messy data sources into well-structured tables or datasets through automatic schema inference and evolution. It simplifies building data pipelines by supporting the entire extract-and-load process, including scalability, robustness during extraction, and state management for incremental extraction. To get started with dlt, users install it with pip and import it in a Python script to build a data pipeline.

In the first example, GitHub issue data is ingested from a repository and stored in BigQuery using Dagster and dlt: a simple dlt pipeline is created, orchestrated with Dagster, and then extended with dlt schema evolution and Dagster asset metadata. The project code is available on GitHub. To run the pipeline, users install Dagster and dlt, create a service-account credential for BigQuery, and execute the commands specified in the text.

The text also explains how to orchestrate the MongoDB verified source with Dagster: setting up a Dagster project, creating an asset factory, defining Definitions, and running the web server. This example uses Dagster's @multi_asset feature to build a dlt_asset_factory that turns each collection in a database into a separate asset, which makes the resulting pipelines more robust.

Because both dlt and Dagster run easily on a local machine, combining them lets you build data pipelines quickly and test them rigorously before shipping to production. Illustrative sketches of the two examples follow below.
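As a minimal sketch of the first example, the following dlt pipeline loads GitHub issues into BigQuery. The repository, table name, and dataset name here are placeholders rather than the article's actual values, and destination="duckdb" can be substituted for credential-free local testing.

```python
import dlt
import requests

# Hypothetical repository; swap in any owner/repo you want to ingest.
REPO = "dagster-io/dagster"


@dlt.resource(table_name="issues", write_disposition="append")
def github_issues():
    # Fetch the first page of issues from the public GitHub API.
    url = f"https://api.github.com/repos/{REPO}/issues"
    response = requests.get(url, params={"state": "all", "per_page": 100})
    response.raise_for_status()
    yield response.json()


# destination="bigquery" assumes a BigQuery service-account credential is configured.
pipeline = dlt.pipeline(
    pipeline_name="github_issues",
    destination="bigquery",
    dataset_name="github_issues_data",
)

if __name__ == "__main__":
    load_info = pipeline.run(github_issues())
    print(load_info)
```

Because the resource yields raw JSON, dlt infers the table schema automatically and evolves it if the GitHub payload changes shape between runs.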
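To orchestrate that pipeline with Dagster, a software-defined asset can wrap the dlt run and surface the load information as asset metadata in the Dagster UI. This is a sketch that assumes the resource above lives in a module named github_issues_pipeline; the article's actual asset definition may differ.

```python
import dlt
from dagster import Definitions, MaterializeResult, MetadataValue, asset

# Assumed module name for the resource sketched above.
from github_issues_pipeline import github_issues


@asset
def github_issues_to_bigquery() -> MaterializeResult:
    pipeline = dlt.pipeline(
        pipeline_name="github_issues",
        destination="bigquery",
        dataset_name="github_issues_data",
    )
    load_info = pipeline.run(github_issues())
    # Expose load details as asset metadata in the Dagster UI.
    return MaterializeResult(
        metadata={"load_info": MetadataValue.text(str(load_info))}
    )


defs = Definitions(assets=[github_issues_to_bigquery])
```

With these definitions in place, running `dagster dev` starts the web server, and the asset can be materialized from the UI.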
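For the MongoDB example, the asset factory can be sketched with Dagster's @multi_asset so that each collection becomes its own asset. The mongodb source import, its parameters, and the database and collection names below are assumptions based on the verified-source template scaffolded by `dlt init mongodb bigquery`, and may not match the article's code.

```python
import dlt
from dagster import AssetOut, Definitions, Output, multi_asset

# Assumed: the scaffolded verified source exposes a `mongodb` source
# whose resources are named after the database's collections.
from mongodb import mongodb


def dlt_asset_factory(database: str, collections: list[str]):
    """Return a multi_asset that loads each MongoDB collection as a separate asset."""

    @multi_asset(
        name=f"load_{database}",
        outs={collection: AssetOut(key_prefix=[database]) for collection in collections},
    )
    def _collection_assets():
        pipeline = dlt.pipeline(
            pipeline_name=f"mongodb_{database}",
            destination="bigquery",
            dataset_name=database,
        )
        # Restrict the source to the requested collections and load them in one run.
        load_info = pipeline.run(mongodb(database=database).with_resources(*collections))
        for collection in collections:
            yield Output(value=None, output_name=collection, metadata={"load_info": str(load_info)})

    return _collection_assets


# Hypothetical database and collection names, for illustration only.
defs = Definitions(assets=[dlt_asset_factory("sample_mflix", ["comments", "movies"])])
```

Generating the assets from a factory keeps the Dagster definitions in sync with the source database: adding a collection to the list adds a new asset without touching the pipeline logic.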