
Hadoop datanode issue and resolution - ‘Incompatible clusterIDs’

Raymond Tang


Issue

After installing Hadoop 3.0.0 on my Windows machine (see Install Hadoop 3.0.0 in Windows (Single Node)), I ran into problems after formatting the name node several times.

The following error was thrown when I tried to start Hadoop HDFS.

2018-02-19 22:02:06,848 WARN common.Storage: Failed to add storage directory [DISK]file:/F:/DataAnalytics/dfs/data
java.io.IOException: Incompatible clusterIDs in F:\DataAnalytics\dfs\data: namenode clusterID = CID-ef46d03c-27ff-45f3-88ae-a2b6f207b001; datanode clusterID = CID-242a04f8-6b63-4325-b56e-14b608af786c
         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:722)
         at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:286)
         at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:399)
         at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:379)
         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:544)
         at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1690)
         at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1650)
         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:376)
         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
         at java.lang.Thread.run(Thread.java:748)
2018-02-19 22:02:06,851 ERROR datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 94e366db-2ed5-40a2-bc36-7335bfcdec05) service to /0.0.0.0:19000. Exiting.
java.io.IOException: All specified directories have failed to load.

Root cause

This issue happened because the clusterID values in my Hadoop name node and data node VERSION files were not consistent.

(Screenshot: clusterID values in the name node and data node VERSION files.)

The root cause is: I formatted the name node (with hdfs namenode -format) without deleting the files in the data node directory, and then started HDFS again with the following command:

%HADOOP_HOME%\sbin\start-dfs.cmd
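
To confirm the mismatch, compare the clusterID values in the two VERSION files. The data node path below is taken from the error log above; the name node path is an assumption based on my dfs.namenode.name.dir setting, so adjust it to whatever your hdfs-site.xml points to.

rem Data node storage VERSION file (path from the error log above)
type F:\DataAnalytics\dfs\data\current\VERSION

rem Name node storage VERSION file (assumed path; check dfs.namenode.name.dir in hdfs-site.xml)
type F:\DataAnalytics\dfs\namespace_logs\current\VERSION

Each file contains a clusterID line; the data node can only register with the name node when the two values match.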

Resolution

Before formatting the name node, ensure the files under the <dfs.data.dir> directories are deleted on all data nodes.
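
For my single node setup, the cleanup and re-format sequence looks roughly like the following sketch. The data directory path is the one from the error log; substitute your own dfs.data.dir (dfs.datanode.data.dir in Hadoop 3) value.

rem Stop HDFS if it is still running
%HADOOP_HOME%\sbin\stop-dfs.cmd

rem Delete the old data node storage so it no longer carries the stale clusterID
rmdir /s /q F:\DataAnalytics\dfs\data

rem Re-format the name node, then start HDFS again
%HADOOP_HOME%\bin\hdfs namenode -format
%HADOOP_HOME%\sbin\start-dfs.cmd

On the next startup the data node recreates its storage directory and picks up the name node's new clusterID, so the 'Incompatible clusterIDs' error no longer appears.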

Related pages

  • Password Security Solution for Sqoop
  • Install Hadoop 3.0.0 in Windows (Single Node)
  • Resolve Hadoop RemoteException - Name node is in safe mode
  • Configure Sqoop in a Edge Node of Hadoop Cluster
  • Configure YARN and MapReduce Resources in Hadoop Cluster
  • Configure Hadoop 3.1.0 in a Multi Node Cluster

