Developing and regulating artificial intelligence (AI) systems requires accurate measurement and quantification of their capabilities and risks, and funding the National Institute of Standards and Technology (NIST) is a direct way to build that capacity. NIST has worked in this area for years but has been under-resourced, with funding stagnating in recent years. An ambitious investment would allow NIST to build on fundamental measurement techniques, standardize them across the field, and create community resources such as testbeds for assessing AI system capabilities and risks. This is a pragmatic approach to making AI systems safer: it would increase public trust, promote innovation, create a market for system certification, and provide confidence that advanced systems are safe for the general public.