Section 5:44. Overview of Block Size and Replication Factor

In HDFS (Hadoop Distributed File System), block size and replication factor are two key settings that determine how a file is physically stored in the cluster and how available its data remains when nodes fail.

  1. Block size: The block size in HDFS is the size of each block into which a file is split when it is stored in the cluster. By default, the block size is 128 MB, but it can be configured (cluster-wide via the `dfs.blocksize` property, or per file when the file is created) to suit the file sizes and workloads of the cluster. A larger block size reduces NameNode metadata overhead and the number of seeks during sequential reads, but it can reduce parallelism, since a file is split into fewer blocks for tasks to process concurrently. Note that a block only occupies as much disk space as the data it actually holds, so a large block size does not by itself waste storage. Setting a per-file block size is shown in the sketch after this list.
  2. Replication factor: The replication factor determines how many copies of each block of a file are stored across different nodes in the cluster. It is set when a file is written to HDFS (and can be changed afterwards), and it is the primary mechanism for keeping data available in the event of node failures. By default, the replication factor is 3, but it can be configured (via the `dfs.replication` property, or per file) to match the data availability requirements and storage budget of the cluster; it is exercised in the same sketch below.
