Apache Hive 3.0.0 Installation on Windows 10 Step by Step Guide

Comments
Raymond #284 · 4 years ago

Thanks for pointing this out. When I first created this article, it was based on Hadoop 3.0.0. If you install Hadoop 3.0.0, you won't get this error.

I followed the steps again with the following combination:

  • Hadoop 3.2.1 on Windows
  • Hive 3.0.0 on Windows

I could reproduce the error you encountered.

To fix this issue, we just need to ensure the libraries MapReduce requires are included in the Java classpath.

So we can change the mapred-site.xml file to ensure the following config exists:

	<property> 
		<name>mapreduce.application.classpath</name>
		<value>%HADOOP_HOME%/share/hadoop/mapreduce/*,%HADOOP_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_HOME%/share/hadoop/common/*,%HADOOP_HOME%/share/hadoop/common/lib/*,%HADOOP_HOME%/share/hadoop/yarn/*,%HADOOP_HOME%/share/hadoop/yarn/lib/*,%HADOOP_HOME%/share/hadoop/hdfs/*,%HADOOP_HOME%/share/hadoop/hdfs/lib/*</value>
	</property>

The INSERT statement should now complete successfully.
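After updating the file, restart the YARN services so new containers pick up the classpath change. As a quick smoke test (a sketch only: test_table is a hypothetical example table, and this assumes Hive is run from Cygwin as elsewhere in this guide), an INSERT like the following should now launch its MapReduce job end to end:

    # re-run a simple INSERT to confirm the MapReduce classpath fix
    $HIVE_HOME/bin/hive -e "CREATE TABLE IF NOT EXISTS test_table (id INT, val STRING);"
    $HIVE_HOME/bin/hive -e "INSERT INTO test_table VALUES (1, 'classpath check');"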


Please let me know if you still encounter any errors.


Saikat Sengupta #282 · 4 years ago

Thanks a lot for your response. I actually resolved the issue yesterday by adding the user to the "Create symbolic links" policy in the Local Group Policy Editor.
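(For anyone else hitting CreateSymbolicLink error 1314: a quick way to confirm the privilege has taken effect is to try creating a symlink from a fresh, non-elevated Command Prompt. The link and target names below are just examples.)

    REM succeeds only if the account holds the "Create symbolic links" right
    REM (or the prompt is elevated); remove the test link afterwards
    mklink test_link C:\Windows\System32\notepad.exe
    del test_link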

But now I have run into a new issue where it says it is not able to find or load MapReduce while I am trying to insert new data into the test table. Relevant screenshots are below for your reference. I think this has something to do with mapred-site.xml, but I have configured it as per your steps while installing Hadoop 3.2.1.


I tried adding the additional parameters to mapred-site.xml like below, but still no luck. Do we need to configure the mapred-site.xml file with additional parameters to make Hive work with it?



Raymond #281 · 5 years ago

Can you try starting your Hadoop daemons (the HDFS and YARN services) and also the Hive services from a Command Prompt run as Administrator?
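For example (a sketch assuming the standard Hadoop sbin scripts and that HADOOP_HOME is set):

    REM from a Command Prompt opened with "Run as administrator"
    %HADOOP_HOME%\sbin\start-dfs.cmd
    %HADOOP_HOME%\sbin\start-yarn.cmd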

Saikat Sengupta #280 · 5 years ago

I have completed all the steps and was able to run the Hive server as well.

But when I create a new table like test_table and try to insert data, I get the error below. The error is due to the symlink for sure, but I don't know why I am getting it. I have followed the exact steps mentioned above.

Application application_1587233378296_0003 failed 2 times due to AM Container for appattempt_1587233378296_0003_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2020-04-18 22:29:36.493]Exception from container-launch.
Container id: container_1587233378296_0003_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.
Shell output: 1 file(s) moved.
"Setting up env variables"
"Setting up job resources"
Raymond #279 · 5 years ago

Apologies for the late response. I've been very busy recently. 

Just to double-check:

Did you follow all the exact steps in my post?

The following step is quite important too, to make sure Java can also resolve the paths correctly, since Hive and Hadoop are mostly Java based (except for the native HDFS libs):

The symbolic link needs to be based on your folder structure, i.e. it is not exactly what I provided on the page.

Can you also ensure you added those environment variable setups to your .bashrc file?
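For reference, the entries look something like this (the paths here are assumptions; they must match your own folder structure):

    # example ~/.bashrc entries; adjust paths to your actual install locations
    export HADOOP_HOME=/cygdrive/c/hadoop/hadoop-3.2.1
    export HIVE_HOME=/cygdrive/c/hive/apache-hive-3.0.0-bin
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin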

If you followed the above two steps exactly and still get the error, we can try using collaboration tools (ping me on LinkedIn with details) so that you can share your screen with me on the weekend and I can have a quick look for you.


Muhammad Salman Ahsan #278 · 5 years ago

I did not follow the prerequisites because I already have a Hadoop setup on my Windows 10 machine, which is working fine.


Raymond #277 · 5 years ago

BTW, if you find the instructions hard to follow, try this series via Windows Subsystem for Linux:

https://kontext.tech/column/apache-sqoop/313/big-data-tools-on-windows-via-windows-subsystem-for-linux-wsl


Raymond #276 · 5 years ago

Did you follow the Hadoop installation link in the prerequisites section to install Hadoop?
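One thing worth checking: typing $HADOOP_HOME at the bash prompt makes the shell try to execute the directory, so the "Is a directory" message only proves the variable expands in that shell; it may not be exported to child processes, or may not point at a directory that contains bin/hadoop. A quick diagnostic in Cygwin (the install path shown is just an example):

    # verify the variable is exported and the hadoop launcher is reachable
    echo $HADOOP_HOME                       # should print e.g. /cygdrive/c/hadoop
    ls "$HADOOP_HOME"/bin/hadoop            # the script schematool ultimately needs
    export HADOOP_HOME=/cygdrive/c/hadoop   # example path; use your install dir
    export PATH=$PATH:$HADOOP_HOME/bin
    hadoop version                          # should now resolve and print the version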


Muhammad Salman Ahsan #274 · 5 years ago

When I try to run $HIVE_HOME/bin/schematool -help in Cygwin, it gives me this error:

"Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path".

But when I type $HADOOP_HOME in Cygwin to verify the path, it gives "-bash: /cygdrive/c/hadoop/: Is a directory".

Please help.

Raymond #228 · 5 years ago

Hello, yes, you are right. You may also need to install a metastore database, depending on which database you want to use, as detailed in the installation guide above.
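For example, once a metastore database is chosen, its schema is initialized with schematool; the Derby dbType below is just one option (use mysql, postgres, etc. as appropriate for your backend):

    # initialize the Hive metastore schema (dbType derby is an example choice)
    $HIVE_HOME/bin/schematool -dbType derby -initSchema
    # verify the schema version afterwards
    $HIVE_HOME/bin/schematool -dbType derby -info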

BTW, if you are using Windows 10, I would recommend using WSL for the installation.

Refer to this page for more details:

https://kontext.tech/docs/DataAndBusinessIntelligence/p/big-data-tools-on-windows-via-windows-subsystem-for-linux-wsl

You can find my LinkedIn link on the About page of this site.

Cheers,

Raymond

Quoting Swati Agarwal · 5 years ago:

Hi Team, 

Yes, it was an installation issue. Thanks for the help.

I am new to Hadoop 3 and would appreciate your guidance.

For installing and working in Hadoop 3, we have to follow:

1) Hadoop 3 installation process 

https://kontext.tech/docs/DataAndBusinessIntelligence/p/install-hadoop-300-in-windows-single-node

2) Hive process

https://kontext.tech/docs/DataAndBusinessIntelligence/p/apache-hive-300-installation-on-windows-10-step-by-step-guide

Please correct me if I am wrong.

Is there anything else that is required to be installed or set up? Please suggest and guide me.

Also, can we connect over LinkedIn? If I get stuck somewhere I would need your expert advice.

My LinkedIn ID is: https://www.linkedin.com/in/swati0303/

It will be really helpful.

Regards,

Swati

