
This post shows you how to connect to HiveServer2 via Hive JDBC driver in Java.

The way to connect to HiveServer1 is very similar, though the driver names are different:

Version       Driver Name
HiveServer1   org.apache.hadoop.hive.jdbc.HiveDriver
HiveServer2   org.apache.hive.jdbc.HiveDriver

The JDBC connection URL will also be different:

Version       Connection URL
HiveServer1   jdbc:hive://…
HiveServer2   jdbc:hive2://…
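
Put together, a minimal sketch looks like the following. The host, port, and credentials are placeholders for your own environment, and the commented-out lines show the legacy HiveServer1 equivalents:

import java.sql.Connection;
import java.sql.DriverManager;

public class DriverUrlPairs {
    public static void main(String[] args) throws Exception {
        // HiveServer2: new driver class paired with a jdbc:hive2:// URL
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000", "hive", "")) {
            System.out.println("Connected to HiveServer2");
        }

        // HiveServer1 (legacy) would use the old driver and URL scheme:
        // Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        // DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    }
}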

Add dependency

First, we need to add the Hive JDBC dependency to the project POM file (if you are using Maven):

<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>kontext.tech</groupId>
    <artifactId>examples</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>3.1.1</version>
        </dependency>
    </dependencies>
</project>

Alternatively, you can reference the Hive JDBC JAR files directly on your classpath (the Hive distribution ships a standalone hive-jdbc JAR for this purpose).

Create a Java class to connect to HiveServer2

Now we can connect to HiveServer2 just as we would connect to any other JDBC-supported database.

The following is a sample class for your reference:

package examples;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcExample {

    /**
     * HiveServer2 JDBC driver name
     */
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        // Register the driver (optional with JDBC 4+ drivers, but harmless)
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }

        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000", "hive", "");
        Statement stmt = conn.createStatement();

        // List all databases
        String sql = "SHOW DATABASES";
        System.out.println("Running: " + sql);
        ResultSet rs = stmt.executeQuery(sql);
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();

        // Query a sample table and print every row with column names
        String sql2 = "SELECT * FROM test_db.test_table";
        System.out.println("Running: " + sql2);
        ResultSet rs2 = stmt.executeQuery(sql2);
        ResultSetMetaData rsmd = rs2.getMetaData();
        int columnsNumber = rsmd.getColumnCount();
        while (rs2.next()) {
            for (int i = 1; i <= columnsNumber; i++) {
                if (i > 1)
                    System.out.print(",  ");
                String columnValue = rs2.getString(i);
                System.out.print(rsmd.getColumnName(i) + " " + columnValue);
            }
            System.out.println();
        }
        rs2.close();

        stmt.close();
        conn.close();
    }
}
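
As a variation, the same query can be issued with try-with-resources so that the connection, statement, and result set are closed automatically even when an exception is thrown. This is a minimal sketch reusing the placeholder URL and credentials from above:

package examples;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcExample2 {
    public static void main(String[] args) throws SQLException {
        // JDBC 4+ drivers are discovered automatically, so Class.forName
        // is not required as long as hive-jdbc is on the classpath.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000", "hive", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}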

For detailed information about connection URLs (as the HiveServer2 service can run in different modes), please refer to the following page published by Apache:

https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
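
As a rough guide, the URL shape changes with the transport and security mode. The examples below are illustrative only; the host names, ports, HTTP path, and Kerberos realm are placeholders, so check the Apache page above for the authoritative syntax:

public class HiveUrlExamples {
    // Binary transport (the default), connecting to the "default" database
    static final String BINARY_URL =
            "jdbc:hive2://hs2-host:10000/default";

    // HTTP transport mode; the port and httpPath depend on your server configuration
    static final String HTTP_URL =
            "jdbc:hive2://hs2-host:10001/default;transportMode=http;httpPath=cliservice";

    // Kerberos-secured cluster, authenticating against the HiveServer2 service principal
    static final String KERBEROS_URL =
            "jdbc:hive2://hs2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM";
}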

Sample output

[Screenshot: console output of the example program]

In the HiveServer2 Web UI, we can also find the query history:

[Screenshot: HiveServer2 Web UI query history page]

By clicking the Drilldown link, you can also view detailed information about the query:

[Screenshot: query drill-down details]

