Thanks for pointing this out. When I first created this article, it was based on Hadoop 3.0.0. If you install Hadoop 3.0.0, you won't encounter this error.
I followed the steps again with the following combination:
I could reproduce the error you encountered.
To fix this issue, we just need to ensure the required MapReduce libraries are included in the Java classpath.
We can change the mapred-site.xml file to ensure the following configuration exists:
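As a sketch, the properties that typically need to be present are the MapReduce environment settings (these are standard Hadoop 3 property names; the `${HADOOP_HOME}` value assumes that variable points at your Hadoop installation):

```xml
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
```

These tell YARN where to find the MapReduce jars when it launches the application master and tasks, which is the usual cause of "could not find or load" MapReduce errors.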
The INSERT statement should now complete successfully.
Please let me know if you still encounter any errors.
person Saikat access_time 4 years ago
Thanks a lot for your response. I actually resolved the issue yesterday by adding the user in the local group policies --> create symbolic links.
But now I have run into a new issue: it says it is not able to find or load MapReduce when I try to insert new data into the test table. Relevant screenshots are below for your reference. I think this has something to do with mapred-site.xml, but I have configured it as per your steps while installing Hadoop 3.2.1.
I tried adding the additional parameters to mapred-site.xml as shown below, but still no luck. Do we need to configure mapred-site.xml with additional parameters to make Hive work with it?
person Raymond access_time 4 years ago
Can you try starting your Hadoop daemons (HDFS and YARN services) and also Hive services using Command Prompt (Run As Administrator)?
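For reference, a typical sequence in an elevated Command Prompt looks like this (these are the standard scripts shipped with Hadoop's Windows distribution; the paths assume `HADOOP_HOME` is set):

```cmd
rem Run from Command Prompt opened with "Run As Administrator"
%HADOOP_HOME%\sbin\start-dfs.cmd
%HADOOP_HOME%\sbin\start-yarn.cmd
```

HiveServer2 is then started from the shell you normally run Hive in, e.g. `$HIVE_HOME/bin/hive --service hiveserver2`.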
I have completed all the steps and was able to run the Hive server as well.
But when I create a new table like test_table and try to insert data, I get the error below. The error is due to the symlink for sure, but I don't know why I am getting it. I have followed the exact steps mentioned above.
Apologies for the late response. I've been very busy recently.
Just to double-check:
Did you follow all the exact steps in my post?
The following step is also quite important, to make sure Java can understand the paths correctly, since Hive and Hadoop mainly use Java (except for the native HDFS libraries):
The symbolic link needs to be based on your folder structure, i.e. it may not be exactly what I provided on the page.
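For example, assuming (hypothetically) that Hive was extracted to C:\big-data\apache-hive-3.2.1-bin, the link target must match that actual folder rather than the path shown in the post:

```cmd
rem Run from an elevated Command Prompt; both paths here are illustrative only
mklink /D C:\hive C:\big-data\apache-hive-3.2.1-bin
```

Note that `mklink` needs either an elevated prompt or the "Create symbolic links" privilege in local group policy, which ties back to the earlier fix in this thread.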
Can you also ensure you added those environment variable setups to your .bashrc file?
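As a sketch, the .bashrc entries would look something like this (the /cygdrive/c/... paths are placeholders for wherever Hadoop and Hive actually live on your machine):

```shell
# Illustrative paths - adjust to your actual install locations.
export HADOOP_HOME=/cygdrive/c/hadoop
export HIVE_HOME=/cygdrive/c/hive
# Make the Hadoop and Hive commands available on the PATH.
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin
```

Remember to open a new shell or run `source ~/.bashrc` so the changes take effect.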
If you followed the above two steps exactly and still get the error, we can try a collaboration tool (ping me on LinkedIn with details) so that you can share your screen with me on the weekend and I can have a quick look for you.
person Muhammad Salman access_time 4 years ago
I did not follow the prerequisites because I already have a Hadoop setup on my Windows 10 machine which works fine.
Did you follow the Hadoop installation link in the prerequisites section to install Hadoop?
BTW, if you find it hard to follow the instructions, try this series via Windows Subsystem for Linux:
When I am trying to run $HIVE_HOME/bin/schematool -help in Cygwin:
It gives me this error:
"Cannot find hadoop installation: $HADOOP_HOME or $HADOOP_PREFIX must be set or hadoop must be in the path".
But when I type $HADOOP_HOME in Cygwin to verify the path, it gives "-bash: /cygdrive/c/hadoop/: Is a directory".
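As a side note, the "Is a directory" message appears because typing a variable by itself makes bash try to execute its value as a command. A minimal sketch for inspecting and exporting the variable in Cygwin (the path is illustrative):

```shell
# Illustrative path - replace with your actual Hadoop directory.
export HADOOP_HOME=/cygdrive/c/hadoop
# Use echo to inspect a variable; "$HADOOP_HOME" alone asks bash to run it.
echo "$HADOOP_HOME"
```

Exporting (rather than merely assigning) the variable also matters, because schematool runs as a child process and only sees exported variables.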
Hello, yes, you are right. You may also need to install a metastore database, depending on which database you want to use, as detailed in the installation guide above.
BTW, if you are using Windows 10, I would recommend installing via WSL.
Refer to this page for more details:
You can find my LinkedIn link on the About page of this site.
person Swati Agarwal access_time 5 years ago
Yes it was installation issue. Thanks for the help.
I am new to Hadoop 3 and would appreciate your guidance.
For installing and working with Hadoop 3, we have to follow:
1) the Hadoop 3 installation process
2) the Hive installation process
Please correct me if I am wrong.
Is there anything else that is required to be installed or set up? Please suggest and guide me.
Also, can we connect over LinkedIn? If I get stuck somewhere, I would need your expert advice.
My LinkedIn ID is: https://www.linkedin.com/in/swati0303/
It would be really helpful.