Deep Learning

A few weeks ago, we discussed representation learning, which lets an algorithm automatically discover and learn the most useful features from the data itself (unlike classic machine learning, which relies on human-defined features). However, representation learning hits a limit on intricate problems like voice or image recognition, where the complexity makes it hard to craft a representation directly. In speech recognition, for instance, accents are difficult to capture as a mere set of features.

This is where deep learning steps in. Deep learning expresses a complex representation in terms of other, simpler representations. For example, if we need to identify whether an image depicts a car or an animal, the first input to the model can be just the colours. These colours then serve as input to another representation of edges, which in turn feeds into contours, and so on, until after multiple layers we can tell what kind of image it is.
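To make the "layers of simpler representations" idea concrete, here is a minimal sketch in NumPy. The layer names (colours, edges, contours) and all the dimensions and weights are made up for illustration; a real vision model learns these transforms from data rather than using random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # One "layer": a linear map followed by a simple nonlinearity (ReLU).
    # Stacking such layers is what lets each representation build on the last.
    return np.maximum(0.0, weights @ x)

# Hypothetical sizes: 12 raw colour values -> 8 edge features
# -> 4 contour features -> 2 class scores (say, car vs animal).
w_edges    = rng.normal(size=(8, 12))
w_contours = rng.normal(size=(4, 8))
w_classes  = rng.normal(size=(2, 4))

pixels   = rng.normal(size=12)          # raw input: "colours"
edges    = layer(pixels, w_edges)       # a simple representation
contours = layer(edges, w_contours)     # built from the edges
scores   = layer(contours, w_classes)   # the most abstract level

print(scores.shape)
```

The point of the sketch is only the shape of the computation: each stage consumes the previous, simpler representation and produces a more abstract one.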

I realise that this description of deep learning is a bit high level. We will try to extend our understanding of Perceptrons to Multi-layer Perceptrons (MLPs) in upcoming updates (the MLP is a common deep learning model). For now, what we can summarise is that deep learning represents the complexities of the world as a nested hierarchy of constructs. Each construct is a collection of simpler constructs. And to model real-world scenarios, we need to go deep into this hierarchy, adding up simple things to understand more abstract things.
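As a tiny preview of where we are headed, here is a single Perceptron, the unit that MLPs stack into the deep hierarchy described above. The hand-picked weights below make it compute logical AND; this is purely an illustration, not something learned from data.

```python
def perceptron(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    # A Perceptron: weighted sum of inputs plus a bias, passed through
    # a hard threshold. With these hand-chosen weights it fires (outputs 1)
    # only when both inputs are 1, i.e. it computes logical AND.
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron(a, b))
```

A single Perceptron can only draw one straight decision boundary; connecting many of them in layers is what gives an MLP its depth.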



