Adrian Brudaru, Co-Founder and CDO, discusses the evolution of their AI-native system for streamlining data engineering by automating the creation of "scaffolds" with large language models (LLMs). The initial version aimed to use AI to generate ready-to-use configurations directly from API documentation, but it proved unreliable: LLMs sometimes produced inaccurate information, leading to time-consuming debugging. To address this, the team built a second version that pairs a deterministic parser with LLMs, the parser handling fact extraction and the LLMs handling semantic understanding. This hybrid approach increases reliability by grounding the LLMs in verified facts while still enriching the output with nuanced insights, and it provides pointers back to the original documentation. The result is a more dependable tool that improves efficiency without sacrificing accuracy, a significant step forward in using AI for data pipeline automation.
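The parser/LLM split described above can be sketched as follows. This is a minimal illustration, not the team's actual implementation: the function names, the OpenAPI-style spec, and the prompt wording are all hypothetical. The key idea is that endpoints and auth schemes come from deterministic parsing (so they cannot be hallucinated), and the LLM is only asked to interpret semantics on top of those verified facts, with a pointer back to the source documentation.

```python
import json


def extract_facts(openapi_spec: dict) -> dict:
    """Deterministically pull verifiable facts (base URL, endpoints, auth
    schemes) from an OpenAPI-style spec. No LLM is involved at this stage,
    so nothing here can be invented by a model."""
    facts = {
        "base_url": openapi_spec.get("servers", [{}])[0].get("url", ""),
        "endpoints": [],
        "auth_schemes": list(
            openapi_spec.get("components", {}).get("securitySchemes", {})
        ),
    }
    for path, operations in openapi_spec.get("paths", {}).items():
        for method in operations:
            facts["endpoints"].append({"path": path, "method": method.upper()})
    return facts


def build_grounded_prompt(facts: dict, docs_url: str) -> str:
    """Hand the LLM only the verified facts plus a pointer to the original
    docs. The model's job is semantic understanding (e.g. pagination style,
    incremental-loading hints), constrained to the extracted endpoints."""
    return (
        "Using ONLY the endpoints listed in the facts below, describe "
        "pagination and incremental loading hints for a pipeline scaffold.\n"
        f"Facts: {json.dumps(facts)}\n"
        f"Source documentation: {docs_url}"
    )


# Hypothetical spec fragment standing in for parsed API documentation.
spec = {
    "servers": [{"url": "https://api.example.com/v1"}],
    "components": {"securitySchemes": {"bearerAuth": {"type": "http"}}},
    "paths": {"/orders": {"get": {}}, "/customers": {"get": {}}},
}

facts = extract_facts(spec)
prompt = build_grounded_prompt(facts, "https://api.example.com/docs")
```

In this shape, debugging shifts from "did the model invent an endpoint?" to "did the parser read the spec correctly?", which is a deterministic, testable question.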