In 2023, discussions around AI safety and regulation were prominent, particularly with the anticipation of the UK's AI Safety Summit and concerns about the implications of large language models (LLMs) like GPT-4. Contrary to sensationalized media narratives, actual AI safety incidents are not as rampant as suggested, with only a modest increase in reported events and many incidents being unrelated to AI technology itself. Generative AI, while often misunderstood as being autonomous, functions as an advanced probabilistic tool that excels in generating content across various media. The year also highlighted vulnerabilities in AI systems, such as adversarial suffixes and prompt injection, but these received minimal attention compared to broader fears about AI's potential societal impacts. As AI systems become more capable of performing tasks traditionally done by humans, there is anticipation of significant shifts in the workforce, particularly in administrative roles, prompting discussions on economic structures like universal basic income. Despite regulatory efforts and concerns about AI's influence, leading AI companies continue to prioritize alignment and safety, while debates persist on how best to integrate AI advancements into society responsibly.