Create Temporary Table - Hive SQL


This page shows how to create a temporary Hive table via Hive SQL (HQL).

Create temporary table


-- temp_customer is an illustrative table name
CREATE TEMPORARY TABLE temp_customer (`cust_id` int, `name` string, `created_date` date);

Temporary Hive tables are visible only to the session that creates them and are deleted automatically when that session ends. The underlying data is stored in the user's scratch folder (/tmp/hive/$USERID/*).
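As a minimal sketch of session-scoped behavior (the table name temp_customer and the sample row are illustrative, not from the original), a temporary table can be used like any other table within its session:

```sql
-- temp_customer is an illustrative name; any valid table name works
CREATE TEMPORARY TABLE temp_customer (`cust_id` int, `name` string, `created_date` date);

INSERT INTO temp_customer VALUES (1, 'Alice', DATE '2021-01-01');

SELECT * FROM temp_customer;

-- The table is dropped automatically at session end;
-- an explicit DROP TABLE temp_customer; also works to clean up earlier.
```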

Install Hive database

Follow the article below to install Hive on Windows 10 via WSL if you don't have an available Hive database to practice Hive SQL:

Examples on this page are based on Hive 3.* syntax.

Run query

All of these SQL statements can be run using the Beeline CLI:

$HIVE_HOME/bin/beeline --silent=true

The above command starts Beeline in silent mode. Once Beeline is loaded, type the following command to connect to the default HiveServer2 service:

0: jdbc:hive2://localhost:10000> !connect jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: hive
Enter password for jdbc:hive2://localhost:10000:
1: jdbc:hive2://localhost:10000>
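Once connected, you can confirm that a temporary table is visible only in the current session. A hypothetical check (temp_customer is an illustrative table name, not from the original):

```sql
SHOW TABLES;                 -- lists the temporary table in this session
DESCRIBE temp_customer;      -- shows its column definitions

-- Reconnecting in a new session and running SHOW TABLES again
-- will no longer list the temporary table.
```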

The terminal looks like the following screenshot:
