Apache Hive 3.0.0 Installation on Windows 10 Step by Step Guide

Raymond · 2019-03-25

In this article, I’m going to demonstrate how to install Apache Hive 3.0.0 on Windows 10.

warning Alert - Apache Hive is impacted by Log4j vulnerabilities; refer to page Apache Log4j Security Vulnerabilities to find out the fixes.

Prerequisites

Before installation of Apache Hive, please ensure you have Hadoop available on your Windows environment. We cannot run Hive without Hadoop. 

Install Hadoop (mandatory)

I recommend installing Hadoop 3.x to work with Hive 3.0.0.

I've published two articles so far, and you can follow either of them to install Hadoop:

The Hadoop 3.2.1 guide is recommended, as it provides very detailed steps that are easy to follow.

Tools and Environment

  • Windows 10
  • Cygwin
  • Command Prompt

Install Cygwin

Please install Cygwin so that we can run Linux shell scripts on Windows. Since Hive 2.3.0, the binary release no longer includes any CMD files, so you have to use Cygwin or another bash/sh-compatible tool to run the scripts.

You can install Cygwin from this site: https://www.cygwin.com/.

Download Binary Package

Download the latest binary from the official website:

https://hive.apache.org/downloads.html

Save the downloaded package to a local drive. In my case, I am saving it to ‘F:\DataAnalytics’.

If you cannot find the package there, you can also download it from the archive site: https://archive.apache.org/dist/hive/hive-3.0.0/.

Unzip binary package

Open a Cygwin terminal, change directory (cd) to the folder where you saved the binary package, and then unzip it:

$ cd /cygdrive/f/DataAnalytics
fahao@Raymond-Alienware /cygdrive/f/DataAnalytics $ tar -xvzf apache-hive-3.0.0-bin.tar.gz

Setup environment variables

Run the following commands in Cygwin to set up the environment variables:

export HADOOP_HOME='/cygdrive/f/DataAnalytics/hadoop-3.0.0'
export PATH=$PATH:$HADOOP_HOME/bin
export HIVE_HOME='/cygdrive/f/DataAnalytics/apache-hive-3.0.0-bin'
export PATH=$PATH:$HIVE_HOME/bin
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*.jar

You can add these exports to the file .bashrc so that you don’t need to run these commands manually each time you launch Cygwin:

vi ~/.bashrc

* Add the above lines into this file.
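For example, you can append them in one go and reload the profile (a minimal sketch; adjust the paths to your own install locations):

cat >> ~/.bashrc <<'EOF'
export HADOOP_HOME='/cygdrive/f/DataAnalytics/hadoop-3.0.0'
export PATH=$PATH:$HADOOP_HOME/bin
export HIVE_HOME='/cygdrive/f/DataAnalytics/apache-hive-3.0.0-bin'
export PATH=$PATH:$HIVE_HOME/bin
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*.jar
EOF
source ~/.bashrc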

Setup Hive HDFS folders

Open Command Prompt (not Cygwin) and then run the following commands:

hadoop fs -mkdir /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w   /tmp
hadoop fs -chmod g+w   /user/hive/warehouse
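You can verify the folders and their permissions afterwards (assuming your HDFS daemons are running):

hadoop fs -ls /
hadoop fs -ls /user/hive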

As Java doesn’t understand Cygwin paths properly, you may encounter errors like the following:

JAR does not exist or is not a normal file: F:\cygdrive\f\DataAnalytics\apache-hive-3.0.0-bin\lib\hive-beeline-3.0.0.jar

In my system, Hive is installed in F:\DataAnalytics\ folder. To make it work, follow these steps:

  • Create a folder named cygdrive in the F: drive
  • Open Command Prompt (Run as Administrator) and then run the following command:
C:\WINDOWS\system32>mklink /J  F:\cygdrive\f\ F:\
Junction created for F:\cygdrive\f\ <<===>> F:\

In this way, ‘F:\cygdrive\f’ will be equal to ‘F:\’. You need to change the drive letter to the appropriate drive where you are installing Hive. For example, if you are installing Hive in the C: drive, the command line will be:

C:\WINDOWS\system32>mklink /J  C:\cygdrive\c\ C:\
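You can quickly verify the junction in Command Prompt: listing the junction should show the contents of the drive root. If you ever need to undo it, rmdir removes only the junction, not the target drive:

dir F:\cygdrive\f
rem To remove the junction later: rmdir F:\cygdrive\f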

Initialize metastore

Now we need to initialize the metastore schema.

$HIVE_HOME/bin/schematool -dbType <db type> -initSchema

Type the following command to view all the options:

$HIVE_HOME/bin/schematool -help

For argument dbType, the value can be one of the following databases:

derby|mysql|postgres|oracle|mssql

For this article, I am going to use derby as it is purely Java-based and bundled with the Hive release:

$HIVE_HOME/bin/schematool -dbType derby -initSchema

The output looks similar to the following:

Metastore connection URL:        jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :    org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:       APP
Starting metastore schema initialization to 3.0.0
Initialization script hive-schema-3.0.0.derby.sql
Initialization script completed
schemaTool completed

A folder named metastore_db will be created under your current working directory (pwd).
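You can verify the initialized schema with the -info option of schematool; run it from the same directory, since the embedded Derby database path is resolved relative to the working directory:

$HIVE_HOME/bin/schematool -dbType derby -info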

Configure a remote database as metastore

This step is optional for this article. The embedded Derby metastore only supports a single session at a time, so you can configure a remote database to support multiple sessions.

Please refer to this post about configuring SQL Server database as metastore.

Configure a SQL Server Database as Remote Hive Metastore

* I would recommend using a remote database as the metastore for Hive in any proper environment.

Configure API authentication

Add the following configuration into the hive-site.xml file:

<property>
    <name>hive.metastore.event.db.notification.api.auth</name>
    <value>false</value>
    <description>
      Should metastore do authorization against database notification related APIs such as get_next_notification.
      If set to true, then only the superusers in proxy settings have the permission.
    </description>
</property>
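If $HIVE_HOME/conf/hive-site.xml doesn’t exist yet (the release only ships hive-default.xml.template), you can create it with a minimal skeleton like the following sketch and place the property above inside the configuration element:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- add Hive properties, e.g. hive.metastore.event.db.notification.api.auth, here -->
</configuration>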

Alternatively, you can configure a proxy user in the Hadoop core-site.xml file. Refer to the following post for more details:

HiveServer2 Cannot Connect to Hive Metastore Resolutions/Workarounds

Start HiveServer2 service

Run the following command in Cygwin to start HiveServer2 service:

$HIVE_HOME/bin/hive --service hiveserver2 start

Start HiveServer2 service and run Beeline CLI

Now you can run the following command to start HiveServer2 service and Beeline in the same process:

$HIVE_HOME/bin/beeline -u jdbc:hive2://

Run CLI directly

You can also run the CLI directly via either the hive or beeline command.

$HIVE_HOME/bin/beeline
$HIVE_HOME/bin/hive


You can also point Beeline at the JDBC URL of a HiveServer2 instance. In the following command, replace $HS2_HOST with the HiveServer2 address and $HS2_PORT with the HiveServer2 port.

$HIVE_HOME/bin/beeline -u jdbc:hive2://$HS2_HOST:$HS2_PORT

By default the URL is: jdbc:hive2://localhost:10000.
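For example, to connect to a local HiveServer2 on the default port (the -n flag is optional; it passes a username if your setup requires one):

$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n $USER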

At this point, we have installed Hive and its clients successfully.

DDL practices

Now that Hive is installed successfully, we can run some commands to test it.

For more details about the commands, refer to the official website:

https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients

Create a new Hive database

Run the following command in Beeline to create a database named test_db:

create database if not exists test_db;

Output of the command looks similar to the following:

0: jdbc:hive2://> create database if not exists test_db;
19/03/26 21:44:09 [HiveServer2-Background-Pool: Thread-115]: WARN metastore.ObjectStore: Failed to get database hive.test_db, returning NoSuchObjectException
OK
No rows affected (0.312 seconds)

As I didn’t specify the database location, it will be created under the default HDFS location: /user/hive/warehouse.
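You can confirm the location from Beeline; the output includes the HDFS location of the database:

describe database test_db;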

In the HDFS NameNode UI, we can see a new folder is created.


Create a new Hive table

Run the following commands to create a table named test_table:

use test_db;
create table test_table (id bigint not null, value varchar(100));
show tables;

Insert data into Hive table

Run the following command to insert some sample data:

insert into test_table (id,value) values (1,'ABC'),(2,'DEF');

Two records will be created by the above command.

The command will submit a MapReduce job to YARN. You can also configure Hive to use Spark as the execution engine instead of MapReduce, as sketched below.
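The engine is selected by the hive.execution.engine property (valid values are mr, tez and spark). As a sketch, in hive-site.xml (note that using Spark or Tez also requires their jars and additional configuration not covered in this article):

<property>
  <name>hive.execution.engine</name>
  <value>mr</value>
</property>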

You can track the job status through the tracking URL printed in the console output.

In the YARN web UI, you can also view the job status:


Wait until the job is completed.

Select data from Hive table

Now, you can display the data by running the following command in Beeline:

select * from test_table;

The output looks similar to the following:

0: jdbc:hive2://> select * from test_table;
19/03/26 23:23:18 [93fd08aa-09f6-488a-aa43-28b37d69a504 main]: WARN metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
OK
+----------------+-------------------+
| test_table.id  | test_table.value  |
+----------------+-------------------+
| 1              | ABC               |
| 2              | DEF               |
+----------------+-------------------+
2 rows selected (0.476 seconds)
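You can also try an aggregate query, which launches another MapReduce job (unlike the simple select above, which Hive can answer with a plain fetch task):

select count(*) from test_table;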

In the Hadoop NameNode web UI, you can also see that new files have been created:


Now you can enjoy working with Hive on Windows!

Fix some errors

hadoop-3.0.0/bin/hadoop: line 2: $'\r': command not found

I got this error in my environment because the line endings are in Windows (CRLF) format instead of UNIX (LF) format.

 /cygdrive/f/DataAnalytics/apache-hive-3.1.1-bin
$ $HIVE_HOME/bin/beeline
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 2: $'\r': command not found
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 17: $'\r': command not found
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 20: $'\r': command not found
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 26: syntax error near unexpected token `$'{\r''
':\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 26: `{
Unable to determine Hadoop version information.
'hadoop version' returned:
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 2: $'\r': command not found
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 17: $'\r': command not found
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 20: $'\r': command not found
F:\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 26: syntax error near unexpected token `$'{\r''
':\DataAnalytics\hadoop-3.0.0/bin/hadoop: line 26: `{


This can be confirmed by opening the hadoop file in Notepad++.

To fix this, click Edit -> EOL Conversion -> Unix (LF).

Do the same for all the other shell scripts if similar errors occur.

I applied the same fix to the following scripts in folder $HADOOP_HOME/bin; a command-line alternative is sketched after this list:

  • hdfs
  • mapred
  • yarn
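If you prefer the command line, you can strip the carriage returns from Cygwin instead (a sketch; GNU sed is part of the Cygwin base packages):

for f in hadoop hdfs mapred yarn; do
    sed -i 's/\r$//' "$HADOOP_HOME/bin/$f"
done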

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V

If you use Hadoop 3.2.1 and Hive 3.0.0, you may encounter this error because the two releases ship different guava versions.

Follow this link to fix it: Hive: Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V.

Other issues

If you encounter any other issues, feel free to post a comment here. I will try to help as much as I can. Before you ask a question, please ensure you have followed all the above steps exactly.

Comments
NA · 10 months ago

I got the error "Could not find or load main class org.apache.hadoop.util.RunJar" at the Initialize metastore step when I tried to run $HIVE_HOME/bin/schematool -help.

My paths are OK. This is my configuration (I have checked it carefully):

export HADOOP_HOME='D:/Dowload/hadoop/hadoop-3.3.1'
export PATH=$PATH:$HADOOP_HOME/bin
export HIVE_HOME='D:/Dowload/hadoop/apache-hive-3.1.2-bin'
export PATH=$PATH:$HIVE_HOME/bin
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*.jar


My Hadoop works OK.



Thank you!

Raymond · 10 months ago

It seems you have installed Hadoop correctly. Can you print out $HADOOP_CLASSPATH? BTW, the guide was tested with Hive 3.0.0 and Hadoop 3.0.0. I noticed you are using later versions and the Hadoop jar files may have changed in those versions.


I would suggest using Docker to run Hive now since it is available: QuickStarted (apache.org)

Shiv Kumar Mahato · 5 years ago

Can you please let us know how to uninstall Hive using Cygwin?

Raymond · 5 years ago

Hi Shiv,

If you want to uninstall, you can follow these steps:

  • Use the vi command to edit the ~/.bashrc file and remove the Hive-related environment variables.
  • Remove the Hive metastore database if you are using an external database.
  • And then remove the Hive home folder from disk (see the sketch below).
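For example, roughly (a sketch based on the paths used in this article; adjust them to your environment and double-check before running rm -rf):

# remove the Hive export lines added earlier
vi ~/.bashrc
# remove the embedded Derby metastore folder (if you used Derby)
rm -rf /cygdrive/f/DataAnalytics/metastore_db
# remove the Hive home folder
rm -rf /cygdrive/f/DataAnalytics/apache-hive-3.0.0-bin
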
Praveen Kumar · 5 years ago

When I try to execute the below command in Cygwin:

$HIVE_HOME/bin/schematool -dbType derby -initSchema

I am facing issues... I have installed Hadoop 3.1.0.

Can anyone help me with this?

Raymond · 5 years ago

Hello Praveen,

If you close the Cygwin window, reopen it, and then type the following:

echo $HIVE_HOME
echo $HADOOP_HOME

Does that still list all the values?

Also, I noticed you are using MySQL as the Hive metastore; have you configured all the values correctly? If not, I would recommend using Derby if you are just installing Hive for learning. Derby is built in; however, it only supports one session concurrently.

Praveen Kumar · 5 years ago

Still facing the same issue...

I tried with the derby command. I have shown the echoed paths of Hive and Hadoop in the screenshot below...

Can you help me with this?


Raymond · 5 years ago

Can you add the environment variables into the bash profile?

vi ~/.bashrc

And then insert the following lines (replace the values with your paths as shown in your screenshot):

export HADOOP_HOME='/cygdrive/f/DataAnalytics/hadoop-3.0.0'
export PATH=$PATH:$HADOOP_HOME/bin
export HIVE_HOME='/cygdrive/f/DataAnalytics/apache-hive-3.0.0-bin'
export PATH=$PATH:$HIVE_HOME/bin
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*.jar

Save the file after inserting them.

It's very hard to debug without access to your environment. 

Saikat Sengupta · 5 years ago

I have completed all the steps and was able to run the Hive server as well.

But when I create a new table like test_table and try to insert data, I get the below error. The error is due to the symlink for sure, but I don't know why I am getting this. I have followed the exact steps as mentioned above.

Application application_1587233378296_0003 failed 2 times due to AM Container for appattempt_1587233378296_0003_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2020-04-18 22:29:36.493]Exception from container-launch.
Container id: container_1587233378296_0003_02_000001
Exit code: 1
Exception message: CreateSymbolicLink error (1314): A required privilege is not held by the client.
Shell output: 1 file(s) moved.
"Setting up env variables"
"Setting up job resources"
Raymond · 5 years ago

Can you try starting your Hadoop daemons (HDFS and YARN services) and also Hive services using Command Prompt (Run As Administrator)?

Saikat Sengupta · 5 years ago

Thanks a lot for your response. I actually resolved the issue yesterday by adding the user in the local group policies --> create symbolic links.

But now I have run into a new issue where it says it is not able to find or load MapReduce while I am trying to insert new data into the test table. Relevant screenshots below for your reference. I think this has something to do with mapred-site.xml, but I have actually configured it as per your steps while installing Hadoop 3.2.1.

I tried adding the additional parameters in mapred-site.xml like below, but still no luck. Do we need to configure the mapred-site.xml file with additional parameters to make Hive work with it?


Raymond · 5 years ago

Thanks for pointing this out. When I first created this article, it was based on Hadoop 3.0.0. If you install Hadoop 3.0.0, you won't get this error.

I followed the steps again with the following combination:

  • Hadoop 3.2.1 on Windows
  • Hive 3.0.0 on Windows

I could reproduce the error you encountered.

To fix this issue, we just need to ensure the MapReduce libraries are included in the Java classpath.

So we can change the mapred-site.xml file to ensure the following config exists:

<property>
  <name>mapreduce.application.classpath</name>
  <value>%HADOOP_HOME%/share/hadoop/mapreduce/*,%HADOOP_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_HOME%/share/hadoop/common/*,%HADOOP_HOME%/share/hadoop/common/lib/*,%HADOOP_HOME%/share/hadoop/yarn/*,%HADOOP_HOME%/share/hadoop/yarn/lib/*,%HADOOP_HOME%/share/hadoop/hdfs/*,%HADOOP_HOME%/share/hadoop/hdfs/lib/*</value>
</property>

The INSERT statement should now complete successfully.


Please let me know if you still encounter any errors.

Saikat Sengupta · 5 years ago

Hey Raymond,

Thanks a lot brother for your prompt response.

This worked like a charm after amending mapred-site.xml with %HADOOP_HOME%. I made a mistake using the UNIX convention for the variable and was trying the same thing with $HADOOP_HOME$.

Successfully inserted data. Cheers!

Raymond · 5 years ago
I’m glad it is working. Have fun with your big data journey!
Muhammad Salman Ahsan · 5 years ago

When I try to run $HIVE_HOME/bin/schematool -help in Cygwin, it gives me this error:

"Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path".

But when I type $HADOOP_HOME in Cygwin to verify the path, it gives "-bash: /cygdrive/c/hadoop/: Is a directory".



Please help.

Raymond · 5 years ago

BTW, if you find it not easy to follow the instructions, try this series via Windows Subsystem for Linux:

https://kontext.tech/column/apache-sqoop/313/big-data-tools-on-windows-via-windows-subsystem-for-linux-wsl

Raymond · 5 years ago

Did you follow the Hadoop installation link in the prerequisites section to install Hadoop?

Muhammad Salman Ahsan · 5 years ago

I did not follow the prerequisites because I already have Hadoop set up on my Windows 10, which is working fine.

Raymond · 5 years ago

Apologies for the late response. I've been very busy recently. 

Just to double confirm:

Did you follow all the exact steps in my post?

The following step is quite important too, to make sure Java can also understand the paths correctly, since Hive and Hadoop mainly use Java (except for the native HDFS libs):

The symbolic link needs to be based on your folder structure, i.e. not exactly what I provided on the page.

Can you also ensure you added those environment variable setups into the .bashrc file?

If you followed the above two steps exactly and still get the error, we can try using collaboration tools (ping me on LinkedIn with details) so that you can share your screen with me on the weekend and I can have a quick look for you.

Swati Agarwal · 6 years ago

Hi Team, 

Yes, it was an installation issue. Thanks for the help.

I am new to Hadoop 3 and would like to seek your guidance.

For installing and working with Hadoop 3, we have to follow:

1) Hadoop 3 installation process 

https://kontext.tech/docs/DataAndBusinessIntelligence/p/install-hadoop-300-in-windows-single-node

2) Hive process

https://kontext.tech/docs/DataAndBusinessIntelligence/p/apache-hive-300-installation-on-windows-10-step-by-step-guide

Please correct me if I am wrong.

Is there anything else that is required to be installed or set up? Please suggest and guide me.

Also, can we connect over LinkedIn? If I get stuck somewhere, I would need your expert advice.

My LinkedIn ID is: https://www.linkedin.com/in/swati0303/

It will be really really helpful.

Regards,

Swati


Raymond · 6 years ago

Hello, yes you are right. You may also need to install a metastore database, depending on which database you want to use, as detailed in the above installation guide.

BTW, if you are using Windows 10, I would recommend using WSL to install. 

Refer to this page for more details:

https://kontext.tech/docs/DataAndBusinessIntelligence/p/big-data-tools-on-windows-via-windows-subsystem-for-linux-wsl

You can find my LinkedIn link on the About page of this site.

Cheers,

Raymond

Swati Agarwal · 6 years ago

Hi Team,

At the 'Setup Hive HDFS folders' step, while creating the directory using hadoop fs -mkdir /tmp in Command Prompt, the system throws an error:

mkdir: Your endpoint configuration is wrong

Please suggest how to resolve this.

Regards,

Swati

Raymond · 6 years ago

Did you get Hadoop installed successfully first?

This is an HDFS CLI issue and is not related to the Hive installation.

Ba*** · 6 years ago

So, $HIVE_HOME/bin was in the path. So, I just ran schematool -dbType mysql -initSchema. Also, before that I did hive -service metastore.

Raymond · 6 years ago

You mentioned $HIVE_PATH in one of the previous comments, while it should be $HIVE_HOME. Can you please double-check that?

Based on what you have described, and if I understand correctly:

Your issue is that you cannot run the following command successfully:

$HIVE_HOME/bin/schematool -dbType derby -initSchema

And you got the error: no such file or directory.

Usually this issue will happen if:

  • No x (execute) permission for your account on the schematool file, which is why I recommended checking that permission and adding it if missing.
  • Or, as the error message says, the path doesn't exist. For example, your $HIVE_HOME environment variable may not be set up correctly. I would recommend following the steps below to add it into your .bashrc file and then re-run the schema initialization command:

1) Edit the file ~/.bashrc by running the following command:

vi ~/.bashrc

2) Add the following line at the end of the file:

export HIVE_HOME={your hive home folder path}

3) Source the settings:

source ~/.bashrc

And then run the command:

$HIVE_HOME/bin/schematool -dbType derby -initSchema

If the above suggestions still don't work, I'm not sure whether I can help more unless you provide screenshots of your Cygwin window, Hive folder, and detailed error messages here.

You can upload images in the comment section directly.  Or alternatively, write me an email at enquiry[at]kontext.tech

Ba*** · 6 years ago

Yes, I did. When I go to the path from Cygwin and do an ls, I see schematool there. Also, when I print $HIVE_PATH and $HADOOP_PATH, I get the correct locations.

Raymond · 6 years ago

Can you run the following command in Cygwin to see if the script file is executable?

ls -alt $HIVE_HOME/bin

-rwxr-xr-x+ 1 fahao fahao   832 May 16  2018 metatool
-rwxr-xr-x+ 1 fahao fahao   884 May 16  2018 schematool

The output should look like the above. 'x' means execute permission; you need that permission before you can run the scripts.

If no permission, please try the following command to add it:

chmod +x $HIVE_HOME/bin/schematool

Bali · 6 years ago

When I try to run $HIVE_HOME/bin/schematool -dbType derby -initSchema, I get 'no such file or directory'. But when I go to the exact location with cd and run ls, I see the file there. Also, when I echo HIVE_HOME it returns the exact path.

Raymond Raymond

Raymond access_time 6 years ago link more_vert
Did you run the command in the Cygwin terminal? $HIVE_HOME is Linux/UNIX syntax and only works in Cygwin (or another equivalent terminal) on Windows.
