
Deep Learning 101: Building Blocks of Machine Intelligence

What's this blog post about?

This article provides an overview of deep learning, its history, and key terminology. Deep learning is a subset of machine learning built on neural networks with multiple layers of interconnected neurons. It is particularly good at tasks involving complex, unstructured data such as images, audio, and text. Working with neural nets typically involves two modes: a learning mode (training) and a running mode (inference). Deep learning models are often viewed as black boxes, making it difficult to explain how they arrive at their outputs.

The article also covers the history of deep learning, tracing its roots back to the 1940s and 50s with the proposal of the first artificial neuron model (the McCulloch-Pitts neuron). It highlights key advancements in the field, such as the introduction of error backpropagation by Paul Werbos in 1974 and Geoffrey Hinton's popularization of the term "deep learning" in 2006.

The article delves into various types of deep learning models, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformers. It also discusses the role of GPUs in enabling faster training of neural networks and the importance of large amounts of data for deep learning research. Finally, the article mentions popular deep learning code frameworks like PyTorch and TensorFlow, as well as Deepgram's use of Rust in production to bypass Python's Global Interpreter Lock (GIL), which limits multithreaded computation.
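As a minimal sketch of the training-versus-inference distinction the summary describes, the snippet below builds a tiny multi-layer network in PyTorch (one of the frameworks the article mentions), runs one training step with error backpropagation, and then makes predictions in inference mode. The layer sizes and random data are hypothetical placeholders, not taken from the original article.

```python
import torch
import torch.nn as nn

# A tiny "deep" network: multiple layers of interconnected neurons.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Learning mode (training): forward pass, loss, backpropagation, weight update.
model.train()
inputs = torch.randn(16, 10)           # placeholder batch of 16 examples
targets = torch.randint(0, 2, (16,))   # placeholder class labels
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()    # backpropagate the error through all layers
optimizer.step()   # adjust the weights

# Running mode (inference): no gradient tracking, just predictions.
model.eval()
with torch.no_grad():
    predictions = model(torch.randn(4, 10)).argmax(dim=1)
```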

Company
Deepgram

Date published
July 10, 2023

Author(s)
Sam McKennoch

Word count
3820

Hacker News points
None found.

Language
English


By Matt Makai. 2021-2024.