Airflow allows users to collect metrics, logs, and traces from their pipelines using its native tooling. Users can filter metric ingestion by category, such as scheduler, executor, dagrun, pool, triggerer, and celery, which removes unwanted data, reduces noise, and lowers intake costs (a configuration sketch follows below).

Airflow generates component logs automatically; these are most useful for pre-production testing and debugging. Scheduler logs contain critical information about task queue performance and runtime events, while worker logs capture the runtimes of worker processes as tasks are submitted, run, and cleaned up. Task logs record data for specific DAG runs, enabling users to troubleshoot failed or retried task instances (a minimal DAG sketch follows below).

Airflow also includes an OpenLineage provider that sends lineage events for task executions, recording run and job metadata (see the configuration sketch at the end of this section).

Users can monitor metrics, logs, and traces in Airflow's native webserver interface through views such as the Cluster Activity View, Grid View, Graph View, and Gantt Chart. External tools extend this further: Fluentd can collect and forward log data, and Marquez can store and visualize lineage metadata.
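As a sketch of metric filtering, assuming StatsD metrics are enabled on a recent Airflow 2.x release, the metrics_allow_list option in the [metrics] section of airflow.cfg keeps only the listed categories; the host, port, and prefix values below are placeholders:

```ini
# airflow.cfg -- sketch only; host, port, and prefix are placeholders
[metrics]
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow

# Keep only scheduler and executor metrics; all other categories are dropped
metrics_allow_list = scheduler,executor
```

The complementary metrics_block_list option drops the listed categories instead, and is ignored when an allow list is also set.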
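To illustrate how task logs are produced, here is a minimal DAG sketch (assuming Airflow 2.4+ with the TaskFlow API) whose task writes to the standard Python logger; Airflow captures this output in the task instance's log for each try. The DAG id, task id, and schedule are illustrative:

```python
import logging

import pendulum
from airflow.decorators import dag, task

logger = logging.getLogger(__name__)

@dag(
    dag_id="log_demo",  # illustrative DAG id
    schedule=None,
    start_date=pendulum.datetime(2024, 1, 1, tz="UTC"),
    catchup=False,
)
def log_demo():
    @task
    def emit_logs():
        # Anything written to the standard logger (or stdout) ends up in
        # this task instance's log, viewable per try in the webserver UI.
        logger.info("Task started; this line appears in the task log.")
        logger.warning("Warnings are captured too.")

    emit_logs()

log_demo()
```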
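Finally, a sketch of wiring the OpenLineage provider to a Marquez backend might look like the following. It assumes the apache-airflow-providers-openlineage package is installed and that Marquez is listening on its default port; the namespace and URL are placeholders:

```ini
# airflow.cfg -- sketch only; namespace and URL are placeholders
[openlineage]
# Logical namespace under which lineage events are grouped
namespace = my_pipelines
# Send lineage events over HTTP to a Marquez instance
transport = {"type": "http", "url": "http://localhost:5000", "endpoint": "api/v1/lineage"}
```

With this in place, each task execution emits run and job metadata that Marquez stores and renders as a lineage graph.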