Install Hadoop 3.2.1 on Windows 10 Step by Step Guide

Comments
Tim Reynolds #310 (4 years ago)

Since I have no way to edit my last comment, I'm adding a new one regarding the winutils.exe possibly being the wrong version.

At the command line I have issued the command 

>winutils.exe systeminfo

23994769408,17015447552,3802951680,4812754944,8,2112000,149723796

This shows that it is indeed only returning 7 fields, but something in this Hadoop installation expects 11. I think that is proof that the winutils.exe I'm using is not the correct one. Although mine is compatible with 64-bit Windows, unlike some others I downloaded and tried, it seems I need a version that returns 11 fields.
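As a sanity check (a sketch, not Hadoop code; the sample string below is the output pasted above), you can count the comma-separated fields winutils prints. Hadoop 3.2.x's SysInfoWindows expects 11 fields, and 7 indicates an older winutils build:

```shell
# Count the comma-separated fields from `winutils.exe systeminfo`.
# Hadoop 3.2.x expects 11 fields; 7 points to an older winutils build.
out="23994769408,17015447552,3802951680,4812754944,8,2112000,149723796"
fields=$(echo "$out" | awk -F',' '{print NF}')
echo "$fields"
```

If this prints 7 against a Hadoop 3.2.x install, the winutils build predates the extra fields and should be replaced.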

Tim Reynolds #309 (4 years ago)

Hello,

I have been able to work around my need for admin rights so far by changing my config so the tmp-nm folder is in my Documents folder rather than directly under C:\tmp.

However, it seems I still have some issues. Two of them seem to point to a wrong version of winutils.exe. I am running Windows 10 64-bit and am trying to get Hadoop 3.2.1 running. One symptom of the wrong version is this warning repeated over and over in the YARN node manager window:

WARN util.SysInfoWindows: Expected split length of sysInfo to be 11. Got 7

Another was the failure code of a job I submitted from the Hive prompt to insert data into a table. The job details were found in the Hadoop cluster local UI:

Application application_1589548856723_0001 failed 2 times due to AM Container for appattempt_1589548856723_0001_000002 exited with exitCode: 1639
Failing this attempt.Diagnostics: [2020-05-15 09:53:23.804]Exception from container-launch.
Container id: container_1589548856723_0001_02_000001
Exit code: 1639
Exception message: Incorrect command line arguments.
Shell output: Usage: task create [TASKNAME] [COMMAND_LINE] |
      task isAlive [TASKNAME] |
      task kill [TASKNAME]
      task processList [TASKNAME]
Creates a new task jobobject with taskname
Checks if task jobobject is alive
Kills task jobobject
Prints to stdout a list of processes in the task
along with their resource usage. One process per line
and comma separated info per process
ProcessId,VirtualMemoryCommitted(bytes),
WorkingSetSize(bytes),CpuTime(Millisec,Kernel+User)
[2020-05-15 09:53:23.831]Container exited with a non-zero exit code 1639.


Some sites have said these two issues are symptoms of having the wrong winutils.exe.

I have some other issues I'll wait to post after I can get these fixed.

I used the link in this article to get winutils.exe. I have also tried other winutils.exe builds I found elsewhere. However, with those, when trying to start YARN, the node manager window fills with errors like:

2020-05-15 10:12:16,444 ERROR util.SysInfoWindows: java.io.IOException: Cannot run program "C:\Users\XXX\Documents\Big-Data\Hadoop\hadoop-3.2.1\bin\winutils.exe": CreateProcess error=216, This version of %1 is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher

So those builds are worse: I can't even get YARN started due to that error.

With the version I am using now, YARN starts, although I get the warning "WARN util.SysInfoWindows: Expected split length of sysInfo to be 11. Got 7", and the actual Hive insert still fails anyway.

I appreciate the help. How do I find out whether a winutils.exe is meant for Windows 10 64-bit and Hadoop 3.2.1?


Raymond #308 (4 years ago)

Hi Tim,

I did similar changes as you did:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
	<property>
		<name>yarn.nodemanager.local-dirs</name>
		<value>F:/big-data/data/tmp</value>
	</property>
</configuration>

And I could not start the YARN NodeManager service because of the following error:

Caused by: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Permissions incorrectly set for dir F:/big-data/data/tmp/filecache, should be rwxr-xr-x, actual value = rwxrwxr-x

This issue is recorded here:

I cannot resolve this problem without running the commands as Administrator. 

Based on the JIRA links, these issues should have been fixed. However, it may not work because my Windows account is neither a local account nor a domain account.

I will find some time to try a plain local account (without a Microsoft account) to see if it works.

It seems you didn't get any issue when changing the local tmp folder, is that correct?
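For reference, the NodeManager health check requires each local dir to be exactly rwxr-xr-x (755), not group-writable. A minimal POSIX sketch of what the check verifies (the directory path is a made-up stand-in for yarn.nodemanager.local-dirs):

```shell
# Stand-in for the yarn.nodemanager.local-dirs filecache directory.
mkdir -p /tmp/nm-local-dir/filecache
# The health check wants exactly rwxr-xr-x (755); rwxrwxr-x (775) fails it.
chmod 755 /tmp/nm-local-dir/filecache
stat -c '%a' /tmp/nm-local-dir/filecache   # prints 755
```

On Windows, the analogous fix would be something like `winutils.exe chmod -R 755 F:/big-data/data/tmp`, though as noted in this thread that may still require an Administrator shell.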


Raymond #307 (4 years ago)

Hi Tim,

Have you checked the YARN web portal to see whether the Spark application was submitted successfully? You should be able to find more details there too (assuming you run Spark with master set to yarn).

I'm working today and will try to replicate what you did on my machine after work.




Raymond #298 (4 years ago)

Hi Saad,

Refer to the References section on this page: Default Ports Used by Hadoop Services (HDFS, MapReduce, YARN). It has links to the official documentation for all the parameters you can configure in HDFS and YARN, along with the default value for each configuration.

For different versions of Hadoop, the default values might be different. 
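For example (using the Hadoop 3.x property names; the values shown are the defaults you would override), the NameNode web UI port lives in hdfs-site.xml and the ResourceManager web UI port in yarn-site.xml:

```xml
<!-- hdfs-site.xml: NameNode web UI, default 9870 in Hadoop 3.x -->
<property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:9870</value>
</property>

<!-- yarn-site.xml: ResourceManager web UI, default 8088 -->
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>0.0.0.0:8088</value>
</property>
```

Change the port portion of the value and restart the corresponding daemon for it to take effect.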


Saad United #297 (4 years ago)

Hi,

http://localhost:9870/dfshealth.html#tab-overview

http://localhost:9864/datanode.html

These two links were not opening once I reached the end of the guide, so I started changing values in hdfs-site.xml to other path locations on the E drive, and then I think I got lost.

Today when I run start-dfs.cmd, the DataNode and NameNode start without any error, and I can open the above two URLs without any error.

Thanks for the quick reply.

Can you also guide me on where I can find and change port values like 8088, 9870, etc.?

Thanks again for this tutorial.

Regards,

Saad


Raymond #296 (4 years ago)

Hi Saad,

I don't see any error message in the log you pasted.

Can you please be more specific about the errors you encountered?

For formatting the NameNode, it is correct to expect the NameNode daemon to shut down after the format is done. We will start all the HDFS and YARN daemons at the end.


Saad United #295 (4 years ago)

Hello Raymond,

I am learning about Hadoop and was following your detailed instructions for the Windows installation. This is the output I get when formatting the NameNode:




2020-04-27 22:32:04,347 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1d0c51aa-5dde-446b-99c1-3997255160fa
2020-04-27 22:32:05,369 INFO namenode.FSEditLog: Edit logging is async:true
2020-04-27 22:32:05,385 INFO namenode.FSNamesystem: KeyProvider: null
2020-04-27 22:32:05,387 INFO namenode.FSNamesystem: fsLock is fair: true
2020-04-27 22:32:05,388 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2020-04-27 22:32:05,428 INFO namenode.FSNamesystem: fsOwner             = saad (auth:SIMPLE)
2020-04-27 22:32:05,431 INFO namenode.FSNamesystem: supergroup          = supergroup
2020-04-27 22:32:05,431 INFO namenode.FSNamesystem: isPermissionEnabled = true
2020-04-27 22:32:05,432 INFO namenode.FSNamesystem: HA Enabled: false
2020-04-27 22:32:05,535 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-04-27 22:32:05,554 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2020-04-27 22:32:05,554 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-04-27 22:32:05,562 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-04-27 22:32:05,563 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Apr 27 22:32:05
2020-04-27 22:32:05,566 INFO util.GSet: Computing capacity for map BlocksMap
2020-04-27 22:32:05,566 INFO util.GSet: VM type       = 64-bit
2020-04-27 22:32:05,568 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
2020-04-27 22:32:05,568 INFO util.GSet: capacity      = 2^21 = 2097152 entries
2020-04-27 22:32:05,579 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2020-04-27 22:32:05,580 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2020-04-27 22:32:05,588 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2020-04-27 22:32:05,589 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-04-27 22:32:05,589 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2020-04-27 22:32:05,589 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2020-04-27 22:32:05,591 INFO blockmanagement.BlockManager: defaultReplication         = 1
2020-04-27 22:32:05,591 INFO blockmanagement.BlockManager: maxReplication             = 512
2020-04-27 22:32:05,591 INFO blockmanagement.BlockManager: minReplication             = 1
2020-04-27 22:32:05,592 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2020-04-27 22:32:05,592 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2020-04-27 22:32:05,592 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2020-04-27 22:32:05,593 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2020-04-27 22:32:05,646 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2020-04-27 22:32:05,646 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2020-04-27 22:32:05,647 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2020-04-27 22:32:05,647 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2020-04-27 22:32:05,664 INFO util.GSet: Computing capacity for map INodeMap
2020-04-27 22:32:05,664 INFO util.GSet: VM type       = 64-bit
2020-04-27 22:32:05,664 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
2020-04-27 22:32:05,665 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2020-04-27 22:32:05,666 INFO namenode.FSDirectory: ACLs enabled? false
2020-04-27 22:32:05,666 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2020-04-27 22:32:05,667 INFO namenode.FSDirectory: XAttrs enabled? true
2020-04-27 22:32:05,667 INFO namenode.NameNode: Caching file names occurring more than 10 times
2020-04-27 22:32:05,674 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2020-04-27 22:32:05,677 INFO snapshot.SnapshotManager: SkipList is disabled
2020-04-27 22:32:05,681 INFO util.GSet: Computing capacity for map cachedBlocks
2020-04-27 22:32:05,681 INFO util.GSet: VM type       = 64-bit
2020-04-27 22:32:05,682 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
2020-04-27 22:32:05,683 INFO util.GSet: capacity      = 2^18 = 262144 entries
2020-04-27 22:32:05,713 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-04-27 22:32:05,714 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-04-27 22:32:05,714 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-04-27 22:32:05,720 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2020-04-27 22:32:05,721 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-04-27 22:32:05,723 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2020-04-27 22:32:05,723 INFO util.GSet: VM type       = 64-bit
2020-04-27 22:32:05,724 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2020-04-27 22:32:05,724 INFO util.GSet: capacity      = 2^15 = 32768 entries
2020-04-27 22:32:05,765 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1264791665-192.168.10.2-1588008725757
2020-04-27 22:32:05,810 INFO common.Storage: Storage directory E:\big-data\data\dfs\namespace_logs has been successfully formatted.
2020-04-27 22:32:05,841 INFO namenode.FSImageFormatProtobuf: Saving image file E:\big-data\data\dfs\namespace_logs\current\fsimage.ckpt_0000000000000000000 using no compression
2020-04-27 22:32:05,939 INFO namenode.FSImageFormatProtobuf: Image file E:\big-data\data\dfs\namespace_logs\current\fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
2020-04-27 22:32:05,957 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2020-04-27 22:32:05,963 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2020-04-27 22:32:05,963 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at DESKTOP-ROC4R5P/192.168.10.2
************************************************************/




I have also downloaded the jar and put it in the folder:

https://github.com/FahaoTang/big-data/blob/master/hadoop-hdfs-3.2.1.jar

Can you help me figure out what I am setting up wrong? It would be a great help.

Regards,

Saad

Raymond #273 (4 years ago)

Did you follow Step 3?

Step 3 - Install Hadoop native IO binary

If you've done that, you should be able to see the exe file in the %HADOOP_HOME%/bin folder.

Also make sure the HADOOP_HOME environment variable is configured correctly and that the PATH environment variable includes the Hadoop bin folder.

You also need to restart PowerShell to pick up the latest environment variables if you configured them manually.

Please let me know if the issue still exists.
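A quick way to confirm both settings. On Windows you would check `$env:HADOOP_HOME` and `$env:Path` in PowerShell; the POSIX sketch below illustrates the same check, and the install path is a placeholder for wherever you extracted Hadoop:

```shell
# Placeholder install location, for illustration only.
export HADOOP_HOME=/opt/hadoop-3.2.1
export PATH="$HADOOP_HOME/bin:$PATH"

echo "$HADOOP_HOME"
# winutils.exe only resolves by name if the bin folder is on PATH.
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin is on PATH" ;;
  *) echo "hadoop bin is MISSING from PATH" ;;
esac
```

If the bin folder is missing from PATH, PowerShell reports exactly the CommandNotFoundException shown in the comment below.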



J Macklin Navamani #272 (4 years ago)

When I type winutils.exe and run it, I get this error:

winutils.exe : The term 'winutils.exe' is not recognized as the name of a cmdlet, function, script file, or operable

program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

At line:1 char:1

+ winutils.exe

+ ~~~~~~~~~~~~

    + CategoryInfo          : ObjectNotFound: (winutils.exe:String) [], CommandNotFoundException

    + FullyQualifiedErrorId : CommandNotFoundException

What should I do?

