In this post, we will go through some of the concepts and vocabulary frequently used in System Design interview preparation.
This is the most basic term of all. An individual machine or server can be called a node, and so can a single container. Think of it as a single unit that runs your code or microservice.
Consistency means that whatever data you have committed to your database stays there: any future read request must return the last committed data. In a clustered or master/slave environment, the data must be replicated to every slave node before any read request is served.
i.e. You can read the last committed data from any node in your cluster.
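A minimal sketch of this idea in Python (the class and method names here are illustrative, not a real database API): a write is acknowledged only after every replica has applied it synchronously, so a read from any replica returns the last committed value.

```python
# Strong consistency via synchronous replication (illustrative sketch).
class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)

class StronglyConsistentStore:
    def __init__(self, replicas):
        self.replicas = replicas

    def write(self, key, value):
        # The commit succeeds only once ALL replicas have the value.
        for replica in self.replicas:
            replica.apply(key, value)
        return "committed"

    def read(self, key, replica_index=0):
        # Because writes are synchronous, any replica is safe to read from.
        return self.replicas[replica_index].read(key)

store = StronglyConsistentStore([Replica() for _ in range(3)])
store.write("user:1", "Alice")
# Every replica already has the last committed data.
assert all(r.read("user:1") == "Alice" for r in store.replicas)
```

The cost of this guarantee is write latency: the client waits for the slowest replica on every commit.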
Eventual consistency relaxes the Consistency property. It says that all nodes in the cluster will eventually have the same data. There may be a time window during which some nodes return old data, but all of them will have the latest data after some time.
That is what this term means: the nodes are eventually consistent, and they will all converge to a state where every node has the same data.
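The same contrast can be sketched in Python (again, the names are illustrative): writes land on the primary immediately, replicas catch up later via a replication log, and a read in between can return stale data.

```python
# Eventual consistency via asynchronous replication (illustrative sketch).
class EventuallyConsistentStore:
    def __init__(self, replica_count=2):
        self.primary = {}
        self.replicas = [{} for _ in range(replica_count)]
        self.pending = []  # replication log not yet applied to replicas

    def write(self, key, value):
        # Acknowledge immediately; replicas are updated later.
        self.primary[key] = value
        self.pending.append((key, value))

    def read_from_replica(self, index, key):
        # May return stale (or missing) data during the lag window.
        return self.replicas[index].get(key)

    def sync(self):
        # Background replication: eventually every replica converges.
        for key, value in self.pending:
            for replica in self.replicas:
                replica[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.write("counter", 42)
print(store.read_from_replica(0, "counter"))  # None -> stale read
store.sync()
print(store.read_from_replica(0, "counter"))  # 42 -> converged
```

Here `sync()` stands in for whatever background replication mechanism the real system uses; the point is only that there is a window where a replica read misses the latest write.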
Availability, in terms of data, means the data is available irrespective of some nodes going down: the system always responds to read and write requests.
In a distributed environment, where multiple nodes are working and data is replicated across them, the system continues to work even when some nodes fail or cannot communicate with each other.
Replication means the same data is copied across multiple nodes. It acts as a kind of backup: when one or more nodes die, those copies can take over as the primary data partition. The more the redundancy, the safer the data.
But more redundancy also means extra storage, so it all depends on the business requirements.
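A small Python sketch of the failover idea (the node names and promotion rule are made up for illustration): every write is copied to all live nodes, and when the primary dies, a replica holding the same data is promoted so serving continues.

```python
# Replication with simple failover (illustrative sketch).
class Node:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.alive = True

cluster = [Node("primary"), Node("replica-1"), Node("replica-2")]

def write(key, value):
    # Replicate the write to every live node.
    for node in cluster:
        if node.alive:
            node.data[key] = value

def current_primary():
    # Naive promotion rule: the first live node acts as primary.
    return next(node for node in cluster if node.alive)

write("order:7", "shipped")
cluster[0].alive = False            # the primary dies
promoted = current_primary()        # a replica takes over
print(promoted.name, promoted.data["order:7"])  # replica-1 shipped
```

Real systems use leader election rather than "first live node", but the trade-off in the text is visible here: three copies of every record buy the ability to lose two nodes without losing data.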
Every machine or node has a limited capacity to serve user requests: a single node has a finite amount of hardware and memory. Scaling is all about designing your system so that your application can handle much larger traffic.
Scaling is of two types:
The first is vertical scaling, the simplest of all: you increase the CPU/RAM/storage of the server.
The second is horizontal scaling. In this scheme, each server has a finite computational capacity, so you increase the number of such nodes and place them behind a load balancer.
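The second scheme can be sketched in a few lines of Python (round-robin is just one balancing strategy, used here for illustration): identical nodes sit behind a load balancer that spreads requests across them, so adding a node adds capacity.

```python
# Horizontal scaling behind a round-robin load balancer (illustrative sketch).
from itertools import cycle

class AppNode:
    def __init__(self, name):
        self.name = name
        self.handled = 0

    def handle(self, request):
        self.handled += 1
        return f"{self.name} served {request}"

class RoundRobinBalancer:
    def __init__(self, nodes):
        self.nodes = nodes
        self._next = cycle(nodes)   # rotate through nodes forever

    def route(self, request):
        return next(self._next).handle(request)

lb = RoundRobinBalancer([AppNode(f"node-{i}") for i in range(3)])
for i in range(6):
    lb.route(f"req-{i}")
# Traffic is spread evenly: each of the 3 nodes handled 2 requests.
assert all(node.handled == 2 for node in lb.nodes)
```

Production load balancers also track node health and weight; the sketch only shows why adding a fourth node immediately reduces the per-node load.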
Sharding is very specific to how you save your data. With fairly large applications, you cannot save all the data on a single machine; you need to split the data across multiple machines. There are various sharding schemes depending on the nature of the data and the database.
Each of those individual machines holding part of the data is called a shard.
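One common scheme is hash-based sharding, sketched below in Python (the class names are illustrative): the key's hash decides which shard holds the record, so both writes and reads go straight to the right machine.

```python
# Hash-based sharding (illustrative sketch).
class Shard:
    def __init__(self, name):
        self.name = name
        self.data = {}

class ShardedStore:
    def __init__(self, shard_count=4):
        self.shards = [Shard(f"shard-{i}") for i in range(shard_count)]

    def _shard_for(self, key):
        # hash() spreads keys roughly evenly; modulo picks one shard.
        return self.shards[hash(key) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key).data[key] = value

    def get(self, key):
        return self._shard_for(key).data.get(key)

store = ShardedStore()
for user_id in range(100):
    store.put(f"user:{user_id}", {"id": user_id})
# A lookup is routed to exactly one shard, not a scan of all machines.
assert store.get("user:42") == {"id": 42}
```

Note that plain modulo hashing reshuffles most keys when the shard count changes, which is why real systems often use consistent hashing instead; range-based and directory-based sharding are other schemes, chosen by the nature of the data as the text says.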