State space models (SSMs) originate in control theory, where they describe dynamical systems through state variables; they have since been adapted for deep learning. The article examines SSMs in deep learning through the lens of the S4 model, which it treats as an educational vehicle rather than a practical choice, since more efficient successors now exist.

The discussion covers the three equivalent views of an SSM (continuous, recurrent, and convolutional) and the contexts in which each is advantageous, emphasizing the role of discretization in turning the continuous-time formulation into one that can be applied to sampled data.

The article then highlights the versatility of SSMs across text, vision, audio, and time-series tasks, where they often outperform ConvNets and transformers in efficiency and parameter count, particularly on very long sequences. It also addresses the cost of the matrix operations involved, discussing the HiPPO matrix and its role in producing an effective compressed state representation, and concludes with the promise of SSMs as demonstrated by S4's performance across a range of benchmarks.
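To make the recurrent and convolutional views concrete, here is a minimal NumPy sketch (not from the article; the function names and the toy matrices are illustrative). It discretizes a small continuous SSM x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) with the bilinear transform, then computes the output both as a step-by-step recurrence and as a convolution with the kernel K_k = C·Ā^k·B̄, checking that the two views agree.

```python
import numpy as np

def discretize(A, B, dt):
    """Bilinear (Tustin) discretization of a continuous SSM with step size dt."""
    N = A.shape[0]
    I = np.eye(N)
    inv = np.linalg.inv(I - (dt / 2) * A)
    Ab = inv @ (I + (dt / 2) * A)   # discrete state matrix
    Bb = inv @ (dt * B)             # discrete input matrix
    return Ab, Bb

def run_recurrent(Ab, Bb, C, u):
    """Recurrent view: x_k = Ab x_{k-1} + Bb u_k, y_k = C x_k (O(L) sequential)."""
    x = np.zeros(Ab.shape[0])
    ys = []
    for uk in u:
        x = Ab @ x + (Bb * uk).ravel()
        ys.append(float(C @ x))
    return np.array(ys)

def run_convolutional(Ab, Bb, C, u):
    """Convolutional view: y = K * u with kernel K_k = C Ab^k Bb (parallelizable)."""
    L = len(u)
    K = np.array([float(C @ np.linalg.matrix_power(Ab, k) @ Bb) for k in range(L)])
    return np.convolve(u, K)[:L]

rng = np.random.default_rng(0)
A = -np.eye(2) + 0.1 * rng.standard_normal((2, 2))  # toy stable-ish dynamics
B = rng.standard_normal((2, 1))
C = rng.standard_normal((1, 2))
u = rng.standard_normal(16)                          # toy input sequence

Ab, Bb = discretize(A, B, dt=0.1)
y_rec = run_recurrent(Ab, Bb, C, u)
y_conv = run_convolutional(Ab, Bb, C, u)
print(np.allclose(y_rec, y_conv))  # the two views produce the same output
```

Materializing Ā^k naively, as above, is exactly the expensive matrix computation the article alludes to; S4's contribution is structuring A (via HiPPO) so this kernel can be computed efficiently.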