Hadoop & MapReduce Tutorial | Data Backup

Data Backup

There is no classic backup and recovery functionality in Hadoop. There are several reasons for this:

  • HDFS uses block-level replication to protect data through redundancy.
  • HDFS scales out massively in size, and it is becoming more economical to back up to disk rather than to tape.
  • The size of “Big Data” doesn’t lend itself to being easily backed up.

Instead of backups, Hadoop uses data replication. Internally, it creates multiple copies of each block of data (three copies by default). It also ships with a tool called ‘distcp’, which lets you replicate data between clusters; this is what most Hadoop operators typically do for “backups”.
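
For example, a minimal distcp run copying a directory from one cluster to another might look like the following (the namenode hostnames, ports, and paths are placeholders, not values from this tutorial):

% hadoop distcp hdfs://nn1:8020/data/sales hdfs://nn2:8020/backups/sales

Adding the -update flag makes distcp copy only files that differ from the target, which keeps periodic “backup” runs cheap.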

Some vendors, such as Cloudera, have built this distcp-based replication into a ‘backup’ or ‘replication’ service for their Hadoop distributions. Such a service operates against a specific directory in HDFS and replicates it to another cluster.

If you really want a traditional backup service for Hadoop, you can build one yourself. You need some mechanism for accessing the data (the NFS gateway, WebHDFS, etc.), and can then use tape libraries, VTLs, and so on to create the backups.
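
For instance, WebHDFS exposes files over a REST interface, so a backup agent could pull them with nothing more than curl. A sketch, assuming a Hadoop 3 namenode on its default HTTP port (9870) and a placeholder file path; the -L is needed because WebHDFS redirects reads to a datanode:

% curl -L "http://namenode:9870/webhdfs/v1/data/sales/part-00000?op=OPEN" -o part-00000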

Namenode Backup

If the namenode’s persistent metadata is lost or damaged, the entire filesystem is rendered unusable, so it is critical that backups are made of these files. You should keep multiple copies of different ages (one hour, one day, one week, and one month, say) to protect against corruption, either in the copies themselves or in the live files running on the namenode. To make a backup, use the dfsadmin command to download a copy of the namenode’s most recent fsimage:

% hdfs dfsadmin -fetchImage fsimage.backup
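
To keep the multiple copies of different ages mentioned above, the fetch can be wrapped in a small script run periodically from cron. This is only a sketch; the backup directory and the one-month retention period are assumptions:

#!/bin/sh
# Hypothetical fsimage rotation script (run hourly from cron).
BACKUP_ROOT=/var/backups/namenode            # assumed local path
DEST="$BACKUP_ROOT/$(date +%Y%m%d-%H%M)"
mkdir -p "$DEST"
hdfs dfsadmin -fetchImage "$DEST"            # saves the latest fsimage into $DEST
# Prune copies older than roughly a month.
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +31 -exec rm -r {} +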

The distcp tool is ideal for making backups to other HDFS clusters (preferably running on a different version of the software, to guard against loss due to bugs in HDFS) or other Hadoop filesystems (such as S3) because it can copy files in parallel.
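
For example, a backup to S3 using the s3a connector might look like this (the bucket name is a placeholder, and the S3 credentials are assumed to be configured in Hadoop already):

% hadoop distcp -update hdfs://nn1:8020/data/sales s3a://backup-bucket/sales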

HDFS Snapshots

HDFS Snapshots are read-only point-in-time copies of the file system. Snapshots can be taken on a subtree of the file system or the entire file system. Some common use cases of snapshots are data backup, protection against user errors and disaster recovery.

The implementation of HDFS Snapshots is efficient:

  • Snapshot creation is instantaneous: the cost is O(1) excluding the inode lookup time.
  • Additional memory is used only when modifications are made relative to a snapshot: memory usage is O(M), where M is the number of modified files/directories.
  • Blocks in datanodes are not copied: the snapshot files record the block list and the file size. There is no data copying.
  • Snapshots do not adversely affect regular HDFS operations: modifications are recorded in reverse chronological order so that the current data can be accessed directly. The snapshot data is computed by subtracting the modifications from the current data.
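
Snapshots are managed with a few shell commands. In the sketch below, /data/sales is a placeholder path and the snapshot name is arbitrary; an administrator must first allow snapshots on a directory before users can create them:

% hdfs dfsadmin -allowSnapshot /data/sales
% hdfs dfs -createSnapshot /data/sales before-cleanup
% hdfs dfs -ls /data/sales/.snapshot
% hdfs dfs -deleteSnapshot /data/sales before-cleanup

Once created, a snapshot is readable under the directory’s .snapshot path, so files can be restored simply by copying them back out.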
