The use of generative AI chatbots has raised concerns about the security and privacy of customer data, as well as the accuracy and trustworthiness of the information these bots provide. Generative AI models are trained on vast amounts of data from the internet, which can create regulatory and transparency issues. Both the EU, where the General Data Protection Regulation (GDPR) already governs personal data, and the US are in the early stages of AI-specific regulation and lawmaking, with some tech companies committing to voluntary agreements covering areas like information sharing, testing, and transparency. Companies need control over the information their customers receive so that it is accurate, up to date, and relevant to their product. To address these concerns, AI chatbot providers must prioritize data security and maintain transparency with their customers. Intercom's Fin is one example of a chatbot designed to handle customer data securely and provide accurate answers drawn from trusted sources.