Create, Drop, and Truncate Table - Hive SQL


This page shows how to create, drop, and truncate Hive tables via Hive SQL (HQL).

External and internal tables

Refer to Differences between Hive External and Internal (Managed) Tables to understand the differences between external and managed tables in Hive.

Create table


CREATE TABLE IF NOT EXISTS hql.customer(cust_id INT, name STRING, created_date DATE)
COMMENT 'A table to store customer records.';
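For comparison, an external table can be created by adding the EXTERNAL keyword, typically with an explicit storage location. The table name and HDFS path below are hypothetical examples for illustration:

```sql
-- Sketch of an external table; the LOCATION path is a hypothetical example
CREATE EXTERNAL TABLE IF NOT EXISTS hql.customer_ext (cust_id INT, name STRING, created_date DATE)
COMMENT 'An external table pointing to existing HDFS data.'
LOCATION '/user/hive/warehouse/customer_ext';
```

Dropping an external table removes only its metadata; the files under LOCATION are left untouched.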



Drop table




DROP TABLE IF EXISTS hql.customer;

When an internal (managed) table is dropped, its underlying data is moved to the HDFS Trash folder (provided Trash is enabled), so the table can still be restored.

With the PURGE option, the underlying data in HDFS is deleted immediately and the table cannot be restored.
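To skip the Trash folder and delete the underlying data permanently, add the PURGE option to the statement shown above:

```sql
-- Data is purged directly from HDFS; the table cannot be restored
DROP TABLE IF EXISTS hql.customer PURGE;
```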

Truncate table


TRUNCATE [TABLE] table_name [PARTITION partition_spec];
  : (partition_column = partition_col_value, partition_column = partition_col_value, ...)

The TRUNCATE statement is used to remove all rows from a table or from selected partitions of a table. It can only be applied to managed (internal) tables; truncating an external table raises an error.


TRUNCATE TABLE hql.customer;
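A single partition can also be truncated using the partition_spec syntax shown above. The partitioned table below is a hypothetical example to illustrate this:

```sql
-- Hypothetical partitioned table for illustration
CREATE TABLE IF NOT EXISTS hql.orders (order_id INT, amount DOUBLE)
PARTITIONED BY (order_date DATE);

-- Remove all rows from one partition only; other partitions are untouched
TRUNCATE TABLE hql.orders PARTITION (order_date = '2020-01-01');
```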

Install Hive database

Follow the article below to install Hive on Windows 10 via WSL if you don't have an available Hive database to practice Hive SQL:

Examples on this page are based on Hive 3.* syntax.

Run query

All of these SQL statements can be run in the Beeline CLI:

$HIVE_HOME/bin/beeline --silent=true

The above command starts Beeline in silent mode. Once Beeline is loaded, type the following command to connect to the default HiveServer2 service:

0: jdbc:hive2://localhost:10000> !connect jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: hive
Enter password for jdbc:hive2://localhost:10000:
1: jdbc:hive2://localhost:10000>
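Once connected, HQL statements can be executed directly at the prompt, for example (the hql database comes from the examples earlier on this page):

```sql
SHOW DATABASES;
USE hql;
SHOW TABLES;
SELECT * FROM customer;
```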

