AI doesn't belong in test runtime
Blog post from Octomind
In end-to-end testing, generative AI can expand test coverage and cut the time spent automating manual test cases, but concerns about its reliability and stability have to be addressed first. AI testing tools differ from traditional tools in how they write, execute, and maintain automated tests, and they deploy large language models (LLMs) in quite different ways.

AI adds the most value in the creation and maintenance phases: it can generate an initial version of a test case and auto-heal broken tests within defined boundaries. At runtime, however, AI is too expensive, too slow, and too brittle. Established automation frameworks such as Playwright, Cypress, or Selenium remain the better choice for fast, reliable test execution.

Using AI for test maintenance, particularly auto-healing, is especially promising, because keeping tests up to date is often harder than scripting or running them in the first place.
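The split described above, deterministic execution at runtime and AI only at maintenance time, can be sketched in a few lines. This is an illustrative toy, not Octomind's implementation: the names (`run_step`, `propose_selector`, `heal`) are hypothetical, the DOM is a plain dict, and the LLM call is replaced by a deterministic stand-in so the sketch stays self-contained.

```python
# Hypothetical sketch: AI at maintenance time, never at runtime.
from dataclasses import dataclass

@dataclass
class HealResult:
    healed: bool
    selector: str

def run_step(dom: dict, selector: str) -> bool:
    """Runtime: a plain deterministic lookup -- no AI involved."""
    return selector in dom

def propose_selector(dom: dict, old_selector: str) -> str:
    """Maintenance: stand-in for an LLM call that suggests a
    replacement selector once the old one stops matching."""
    # A real tool would call a model here; we pick the key sharing
    # the same prefix so the example runs without any dependencies.
    prefix = old_selector.split("-")[0]
    candidates = [s for s in dom if s.split("-")[0] == prefix]
    return candidates[0] if candidates else old_selector

def heal(dom: dict, selector: str) -> HealResult:
    """Auto-heal within boundaries: accept the AI suggestion only
    if it actually makes the failing step pass again."""
    if run_step(dom, selector):
        return HealResult(False, selector)   # step passes, no AI needed
    suggestion = propose_selector(dom, selector)
    if run_step(dom, suggestion):
        return HealResult(True, suggestion)  # commit the verified fix
    return HealResult(False, selector)       # suggestion rejected

dom = {"submit-button-v2": "<button>", "nav-home": "<a>"}
print(heal(dom, "submit-button-v1"))  # heals to the v2 selector
print(heal(dom, "nav-home"))          # passes as-is, AI never invoked
```

The key boundary is that the model's suggestion is only committed after the step is re-run and verified, so a hallucinated selector can never silently enter the suite, and the runtime path never touches the model at all.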