
Logical and Physical Architecture

When you have defined the user stories (requirements) for a new system, the next step is to design the system architecture. There are two main parts to this:

1. Logical Architecture: This is a high-level view of the system, focusing on the different components or building blocks needed to meet the requirements. It's like a blueprint for the system, showing how the different pieces should work together.

For example, if you're building a system to report bugs, the logical components could be:
* Ticket Creation
* Ticket Assignment
* Notifications (to the requester and resolver)
* Communication (adding notes to the ticket)
* Ticket Closure

The logical architecture doesn't go into the technical details of how each component will be built. It just maps out what components are needed and how they should interact.

2. Physical Architecture: This is the more technical view of the system, specifying the actual technologies and infrastructure that will be used to build the com...
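One way to see the logical/physical split in code is to express a logical component as an interface and a physical choice as a concrete implementation. The sketch below uses the Notifications component from the bug-report example; the class and method names are illustrative, not from the original post.

```python
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Logical component: the system needs notifications.
    The logical view says WHAT must exist, not HOW it is built."""
    @abstractmethod
    def notify(self, user: str, ticket_id: int) -> str: ...

class EmailNotifier(Notifier):
    """Physical-architecture decision: bind the component to a
    concrete technology (here, a stand-in for an email service)."""
    def __init__(self):
        self.sent = []  # record of notifications, for illustration

    def notify(self, user: str, ticket_id: int) -> str:
        self.sent.append((user, ticket_id))
        return f"emailed {user} about ticket {ticket_id}"

notifier = EmailNotifier()
print(notifier.notify("requester", 42))  # emailed requester about ticket 42
```

Swapping `EmailNotifier` for, say, an SMS-based implementation changes the physical architecture without touching the logical one.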

Deep Learning

A few weeks ago, we discussed representation learning, which allows an algorithm to automatically discover and learn the most useful features from the data itself (unlike classic machine learning, which relies on human-defined features). However, representation learning has a limitation when it comes to intricate problems like voice or image recognition, where the complexity makes it hard to hand-craft a single representation. For instance, in speech recognition, accents are difficult to capture as a mere set of features. This is where deep learning steps in. Deep learning expresses the representation in terms of other, simpler representations. For example, if we need to identify whether an image depicts a car or an animal, the first input to the model can be just the colors. These colors then serve as input to another representation of edges, which further feeds into contours, and so on, until after multiple layers, we can understand the type...
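The "simpler representations feeding into richer ones" idea is just function composition: each layer turns its input into a slightly more abstract feature vector. Here is a minimal sketch; the weights and layer names (pixels, edges, contours) are invented for illustration, not a trained model.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each output is a weighted sum of the inputs
    passed through a nonlinearity (sigmoid here)."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# pixels -> "edge" features -> "contour" features (layer by layer)
pixels = [0.2, 0.9, 0.4]
edges = layer(pixels, [[1.0, -1.0, 0.5], [0.3, 0.8, -0.2]], [0.0, 0.1])
contours = layer(edges, [[0.7, -0.4]], [0.2])
print(len(edges), len(contours))  # 2 1
```

Each call to `layer` is one level of representation; stacking many such calls (with learned weights) is what makes the network "deep".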

Perceptron Model

The perceptron is a simple machine learning model that can classify data into two categories: positive or negative, spam or not spam, and so on. It's one of the oldest and most fundamental models, and it laid the foundation for more advanced neural networks and deep learning techniques.

Imagine you have a collection of customer reviews, and you want to classify them as either positive or negative based on the words used in the review. The perceptron model looks at the number of times certain words appear in the review and uses that information to decide whether the review is positive or negative.

Here's how it works: First, we take a set of customer reviews that have already been labelled. This is our training data. To keep it simple in this example, we count the number of times the word "happy" appears and the number of times the word "sad" appears. See table 1 below. We plot these counts on a graph, with the "happy" count on one axis and the ...
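The word-count example above can be sketched with the classic perceptron learning rule: start with zero weights and nudge them whenever a training example is misclassified. The review counts and labels below are made up for illustration (they are not table 1 from the post).

```python
def predict(weights, bias, x):
    """Return +1 (positive review) if the weighted sum is
    non-negative, else -1 (negative review)."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else -1

def train(data, epochs=20, lr=1.0):
    """Perceptron learning rule: on each mistake, shift the
    weights toward the correct side of the decision line."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            if predict(weights, bias, x) != label:
                weights = [w + lr * label * xi for w, xi in zip(weights, x)]
                bias += lr * label
    return weights, bias

# (happy_count, sad_count) -> +1 positive / -1 negative  (invented data)
reviews = [((3, 0), 1), ((2, 1), 1), ((0, 2), -1), ((1, 3), -1)]
w, b = train(reviews)
print(predict(w, b, (4, 1)))  # 1  (more "happy" than "sad" -> positive)
```

Because the toy data is linearly separable, the rule converges to a line that splits the two classes, which is exactly the line the post describes plotting on the happy/sad graph.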

Non-Functional Requirements

Imagine you're house hunting. You have the blueprints, which show the different rooms, hallways, and the overall layout of the house. These blueprints represent the logical components of your house, just like the components that make up a software system.

However, there's more to a house than just the layout. You also need to consider other characteristics that determine how well the house functions and meets your needs. Is it in a good neighbourhood? Are there schools nearby? Does it get 24-hour water (important if you are in Bangalore!!)? Does it offer good sunlight?

These characteristics are like non-functional requirements, or NFRs. NFRs are the system's capabilities that define how well it performs its intended functions. Just like a house needs to be sturdy, energy-efficient, and comfortable, a software system needs to meet certain requirements beyond its basic functionality.

We can think of NFRs as the "system's capabilities" that...

What is Inference?

Any model has two phases. First, there's the training phase, where the model is trained on a dataset. It produces output, which is validated, and the updates are fed back for improvement. There can be multiple passes over the training data.

Once the model is ready, it is deployed into production, where it encounters real-world data. This process of putting the model into production and getting actual output is called inference.

Think of it in terms of food. Given some ingredients, a chef experiments with them. She creates a recipe, tries the food, and if she doesn't like it, she tweaks the recipe, makes another dish, and iterates multiple times before finalizing her recipe (or model). This is all training.

Now, she takes this recipe to her restaurant and puts it into production in the actual kitchen so that customers can enjoy it. This is inference.

To summarize, machine learning inference is the process of running real-world or pr...
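The two phases can be made concrete with a toy model: training loops over the data many times and adjusts a parameter from feedback, while inference is a single cheap forward pass with no feedback loop. The data and learning rate here are invented for illustration.

```python
def train(data, epochs=100, lr=0.01):
    """Training phase: many passes over the data, with the error
    fed back to update the parameter (gradient step on squared error)."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def infer(w, x):
    """Inference phase: the deployed model just produces output;
    there is no validation or feedback loop."""
    return w * x

w = train([(1, 2), (2, 4), (3, 6)])  # toy data: y = 2x
print(round(infer(w, 5), 1))         # ≈ 10.0
```

In the chef analogy, `train` is the recipe experimentation and `infer` is serving the finished dish to a customer.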

Kafka

As we develop applications, there are scenarios where data from one application or system needs to be shared with another. There are multiple ways to achieve this data sharing and enable communication between systems. One common approach is to expose the data as a service by opening an interface. Imagine a shop with a window: if someone or another system wants something available in that shop, they can obtain it through that window.

If there are only two systems involved, it's a simple two-way interaction. However, when three or more systems need to communicate with each other, the complexity increases: the number of connections grows, and the scalability and latency of the entire system become harder to manage (see Image 1). This is an example of tightly coupled systems.

So, what's the solution? The answer is to introduce a broker that can listen to different systems and then distribute that information to other systems (Image 2). This "decouples" th...
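To see why a broker decouples systems, here is a toy in-memory broker in the spirit of Kafka's topics, logs, and consumer offsets. This is not Kafka or its API, just a minimal sketch of the idea: producers append to a topic, and each consumer group reads the same log at its own pace.

```python
from collections import defaultdict

class Broker:
    """Toy broker: producers append to a per-topic log; each
    consumer group tracks its own read offset into that log."""
    def __init__(self):
        self.topics = defaultdict(list)  # topic -> ordered message log
        self.offsets = defaultdict(int)  # (group, topic) -> next index

    def publish(self, topic, message):
        self.topics[topic].append(message)

    def poll(self, group, topic):
        """Return the next unread message for this group, or None."""
        log = self.topics[topic]
        i = self.offsets[(group, topic)]
        if i >= len(log):
            return None
        self.offsets[(group, topic)] = i + 1
        return log[i]

broker = Broker()
broker.publish("orders", {"id": 1})
broker.publish("orders", {"id": 2})
# Two independent downstream systems each see the full stream,
# and the producer never needs to know they exist.
print(broker.poll("billing", "orders"))   # {'id': 1}
print(broker.poll("shipping", "orders"))  # {'id': 1}
```

With the broker in the middle, adding a fourth or fifth system means adding one connection to the broker, not a connection to every other system.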

Data Models

When you think of system design, Data Models is a common term that you will hear. So what are data models? Very simply, a data model is the way data for a product or tech system is represented, and this representation ideally depends on the customer or the use case.

Take the example of a car. One way to define a car is to use attributes like model, color, price, and year of manufacture. Another way is to use attributes like license plate, owner name, and history of accidents. If you are building a tech product for a car buyer, then Data Model 1 is more relevant; if you are building a tech system for the police, then Data Model 2 is more relevant. (One more reason to think customer backwards!!)

Broadly, there are three types of data models: 1) Relational; 2) Document; 3) Graph. The most well-known is the relational data model. You can see this data typically in a tabular format, where each row represents a record. (SQL is a popula...
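The two car data models can be written down as two relational schemas. The sketch below uses Python's built-in `sqlite3`; the table names and sample values are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Data Model 1: attributes a car buyer cares about
conn.execute(
    "CREATE TABLE cars_for_buyers (model TEXT, color TEXT, price INTEGER, year INTEGER)"
)
# Data Model 2: attributes the police care about, for the same physical car
conn.execute(
    "CREATE TABLE cars_for_police (plate TEXT PRIMARY KEY, owner TEXT, accidents INTEGER)"
)

conn.execute("INSERT INTO cars_for_buyers VALUES ('Swift', 'red', 800000, 2021)")
conn.execute("INSERT INTO cars_for_police VALUES ('KA-01-1234', 'Asha', 0)")

# The buyer-facing system queries by the attributes its customer cares about.
row = conn.execute(
    "SELECT model, price FROM cars_for_buyers WHERE year >= 2020"
).fetchone()
print(row)  # ('Swift', 800000)
```

Same real-world entity, two different data models, and each one is "right" only relative to its customer.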