The integration of Apache Airflow and dbt for a scalable analytics architecture is explored through the experiences of the data engineering team at Updater, highlighting the challenges and solutions in managing dbt models within Airflow. The shift from an ETL to an ELT paradigm has made tools like dbt central to in-warehouse data transformations, but the team initially struggled to orchestrate and schedule these transformations efficiently with Airflow. They experimented with several approaches and settled on using Airflow's BashOperator to run dbt commands, though invoking dbt as a single monolithic command led to problems with scalability and granularity as the number of models grew. To address these, Updater devised a strategy of creating an Airflow task for each dbt model within a single DAG, using the dbt-generated manifest.json file to define task dependencies, thereby gaining more granular control and efficient retries. With this approach, each dbt model can be run and tested independently, which removes the pain of coarse-grained task interdependencies and takes full advantage of Airflow's native scheduling and retry logic. The article sets the stage for further exploration of production-level solutions that automate and optimize these processes as part of a broader ELT pipeline.
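A minimal sketch of this pattern, assuming a single Airflow DAG that parses the dbt-generated manifest.json at DAG-parse time and creates a paired `dbt run` / `dbt test` BashOperator task per model; the paths, DAG id, and CLI flags below are illustrative assumptions, not the article's exact code:

```python
import json
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Assumed locations: a dbt project baked into the Airflow image,
# with a manifest.json produced by a prior `dbt compile`.
DBT_PROJECT_DIR = "/usr/local/airflow/dbt"
MANIFEST_PATH = f"{DBT_PROJECT_DIR}/target/manifest.json"

with open(MANIFEST_PATH) as f:
    manifest = json.load(f)

with DAG(
    dag_id="dbt_models",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    tasks = {}

    # One `dbt run` task and one `dbt test` task per model node in the manifest.
    for node_id, node in manifest["nodes"].items():
        if node["resource_type"] != "model":
            continue
        model_name = node["name"]

        run_model = BashOperator(
            task_id=f"run_{model_name}",
            bash_command=(
                f"dbt run --models {model_name} "
                f"--project-dir {DBT_PROJECT_DIR} --profiles-dir {DBT_PROJECT_DIR}"
            ),
        )
        test_model = BashOperator(
            task_id=f"test_{model_name}",
            bash_command=(
                f"dbt test --models {model_name} "
                f"--project-dir {DBT_PROJECT_DIR} --profiles-dir {DBT_PROJECT_DIR}"
            ),
        )
        run_model >> test_model
        tasks[node_id] = (run_model, test_model)

    # Wire dependencies from the manifest: a model runs only after the
    # tests of every model it depends on have passed.
    for node_id, (run_model, _) in tasks.items():
        for upstream_id in manifest["nodes"][node_id]["depends_on"]["nodes"]:
            if upstream_id in tasks:
                tasks[upstream_id][1] >> run_model
```

Because the manifest is read each time the scheduler parses the DAG file, regenerating it (for example via `dbt compile` in CI) whenever models change keeps the Airflow graph in sync with the dbt project, and a failed model can be retried on its own without re-running the entire transformation layer.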