Running a proof-of-concept of HDFS (Hadoop Distributed File System) on Kubernetes means adapting the traditional architecture, in which namenodes and datanodes typically run on dedicated VMs. Because pods are ephemeral, the namenode is wrapped in a Kubernetes Service, which gives it a stable IP and DNS name for reliable communication, while a StatefulSet gives each datanode a stable identity and its own persistent storage across restarts. With Kubernetes PersistentVolumes emulating multiple storage volumes on a single disk, a fully distributed HDFS can run even on a single node. The implementation demonstrates how Kubernetes manages container-level distribution on one physical machine, and sets the stage for running applications like Apache Spark on top of this HDFS in future explorations.
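As a concrete illustration, here is a minimal sketch of the two objects described above, written with the official Kubernetes Python client rather than raw YAML. Everything specific in it is an assumption for illustration, not taken from the original setup: the names `hdfs-namenode` and `hdfs-datanode`, the labels, the `apache/hadoop` image, the port 8020 (a common namenode RPC port; defaults vary by Hadoop version), the replica count, and the storage size.

```python
# Sketch of an HDFS-on-Kubernetes layout: a Service fronting the namenode and a
# StatefulSet of datanodes. Requires `pip install kubernetes`. All names, image,
# ports, and sizes below are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # talk to the current kubeconfig's cluster

# Service: gives the namenode a stable virtual IP and DNS name
# (hdfs-namenode.default.svc) no matter how often its pod is rescheduled.
namenode_svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="hdfs-namenode"),
    spec=client.V1ServiceSpec(
        selector={"app": "hdfs-namenode"},
        ports=[client.V1ServicePort(name="rpc", port=8020)],
    ),
)

# StatefulSet: each datanode pod keeps a stable ordinal identity
# (hdfs-datanode-0, hdfs-datanode-1, ...) and its own PersistentVolumeClaim,
# so its block data survives pod restarts. HDFS site configuration pointing
# the datanodes at the namenode Service is omitted for brevity.
datanode_sts = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="hdfs-datanode"),
    spec=client.V1StatefulSetSpec(
        service_name="hdfs-datanode",
        replicas=3,  # three "distributed" datanodes, all schedulable on one node
        selector=client.V1LabelSelector(match_labels={"app": "hdfs-datanode"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hdfs-datanode"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="datanode",
                        image="apache/hadoop:3",  # assumed image
                        volume_mounts=[
                            client.V1VolumeMount(
                                name="data", mount_path="/hadoop/dfs/data"
                            )
                        ],
                    )
                ]
            ),
        ),
        # One PVC per pod; on a single node, all of these can be carved
        # out of the same physical disk to emulate separate volumes.
        volume_claim_templates=[
            client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": "10Gi"}
                    ),
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=namenode_svc)
client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=datanode_sts)
```

The same shapes translate one-to-one into YAML manifests; the key design points are that the Service decouples the namenode's address from its pod, and the StatefulSet's `volume_claim_templates` binds each datanode ordinal to its own PersistentVolume.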