Generative AI, particularly through large language models (LLMs), is transforming software development by boosting code-generation productivity, yet it still struggles to automate end-to-end (E2E) testing. LLMs excel at generating unit tests, where inputs are well defined and scope is limited, but E2E testing involves complex interactions and dynamic UI elements that demand adaptiveness beyond current LLM capabilities. To address this, agentic workflows introduce automated processes that mimic human decision-making, allowing testing that is as flexible and adaptive as manual exploration. These workflows can enhance automated testing through intent-based approaches and natural-language commands, simplifying test creation, and can enable visual testing with vision models such as GPT-4V. By generating comprehensive test cases and probing boundary conditions, Generative AI can examine software thoroughly and reduce post-deployment issues. Harness is leading this innovation with its Generative AI-powered test automation agent, aiming to overcome the bottlenecks of LLM-assisted coding and transform software testing and delivery.
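To make the intent-based idea concrete, here is a minimal sketch of an agentic test step. All names here are hypothetical: a stub page object stands in for a real browser driver, and `resolve_intent` is simple label matching standing in for a language or vision model. The point is the shape of the loop: the agent re-grounds a natural-language intent against the current UI state on every step, so the test adapts instead of breaking when selectors change.

```python
from dataclasses import dataclass, field

@dataclass
class FakePage:
    """Stand-in for a browser page; a real workflow would drive a browser automation tool."""
    elements: dict = field(default_factory=dict)  # visible label -> element id
    clicked: list = field(default_factory=list)

    def click(self, element_id):
        self.clicked.append(element_id)

def resolve_intent(intent, page):
    """Hypothetical grounding step: map a natural-language intent to a UI element.

    Here it is just keyword matching on labels; an agentic workflow would ask
    an LLM or vision model to ground the intent in the rendered page instead.
    """
    for label, element_id in page.elements.items():
        if all(word in label.lower() for word in intent.lower().split()):
            return element_id
    return None

def run_step(intent, page, retries=2):
    """Agentic loop: observe, resolve, act, and retry rather than fail on one miss."""
    for _ in range(retries + 1):
        target = resolve_intent(intent, page)
        if target is not None:
            page.click(target)
            return True
    return False

# A test author writes intent, not selectors:
page = FakePage(elements={"Submit order button": "btn-42"})
assert run_step("submit order", page)
assert page.clicked == ["btn-42"]
```

Because the element is looked up by intent at run time rather than hard-coded as a selector, the same test step survives cosmetic UI changes, which is the adaptiveness the paragraph above attributes to agentic workflows.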