Hive Introduction:-
Hive is a data warehouse infrastructure built on top of Hadoop that provides tools for easy data summarization, ad-hoc querying, and analysis of large datasets stored in Hadoop files.
The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop is a popular open-source map-reduce implementation used by companies such as Yahoo! and Facebook to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs that are hard to maintain and reuse. In this blog, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language, HiveQL, which are compiled into map-reduce jobs that are executed using Hadoop.
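To give a feel for HiveQL before the walkthrough below, here is a small illustrative query; the table and column names (page_views, user_id, view_time) are assumed examples, not objects Hive ships with. Hive compiles a statement like this into one or more map-reduce jobs:

```sql
-- Illustrative only: page_views, user_id and view_time are assumed names.
SELECT user_id, COUNT(*)       -- the aggregation becomes a reduce phase
FROM page_views                -- the table scan becomes the map phase
WHERE view_time >= '2010-11-01'
GROUP BY user_id;
```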
Hive Installation:-
1. Requirement:-
- Java 1.6
- Hadoop 0.17.x to 0.20.x.
2. Unpack the tarball:
$ tar -xzvf hive-x.y.z.tar.gz
3. Set the environment variable HIVE_HOME to point to the installation directory:
$ export HIVE_HOME=<hive-install-dir>
4. Hive uses Hadoop, which means:
- you must have hadoop in your path, OR
- export HADOOP_HOME=<hadoop-install-dir>
5. In addition, you must create /tmp and /user/hive/warehouse in HDFS and set them chmod g+w before a table can be created in Hive:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
Hive Command Line Interface:-
$ $HIVE_HOME/bin/hive
DDL Operation:-
Create Hive Table:-
1. hive> CREATE TABLE info (name INT, add STRING);
2. hive> CREATE TABLE data (name INT, add STRING) PARTITIONED BY (date STRING);
(Creates a table called data with two columns and a partition column called date. The partition column is a virtual column: it is not part of the data itself but is derived from the partition that a particular dataset is loaded into.)
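To make the partition idea concrete: Hive stores each partition in its own subdirectory under the table's warehouse directory, and a filter on the partition column lets Hive read only the matching subdirectories. A sketch, assuming the data table above and an illustrative date value:

```sql
-- Rows loaded with PARTITION (date='20-11-2010') end up under a
-- directory like .../warehouse/data/date=20-11-2010/ on HDFS.
-- A filter on the partition column prunes the other partitions:
SELECT name
FROM data
WHERE date = '20-11-2010';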
Show table:-
1. hive> SHOW TABLES;
2. hive> SHOW TABLES '.*s';
(Lists all tables whose names end with 's'.)
Describe table:-
1. hive> DESCRIBE info;
(Shows the list of columns of the info table.)
Alter table:-
1. hive> ALTER TABLE info ADD COLUMNS (PNO INT);
2. hive> ALTER TABLE info RENAME TO information;
Drop table:-
1. hive> DROP TABLE data;
DML Operation:-
Load data:-
1. hive>LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE info;
(Loads a file that contains two columns separated by ctrl-a into the info table. 'LOCAL' signifies that the input file is on the local file system; if 'LOCAL' is omitted, Hive looks for the file in HDFS.)
2. hive>LOAD DATA LOCAL INPATH './examples/files/kv2.txt' OVERWRITE INTO TABLE data PARTITION (date='20-11-2010');
3. hive>LOAD DATA INPATH 'hdfs://localhost:9000/user/hive/warehouse/kv2.txt' OVERWRITE INTO TABLE data PARTITION (date='21-11-2010');
(The above command loads data from an HDFS file/directory into the table. Note that loading data from HDFS moves the file/directory rather than copying it.)
SQL Operation:-
Select:-
1. hive> SELECT * FROM info;
2. hive> SELECT a.name FROM data a WHERE (a.date='20-11-2010');
(The second query runs against the data table, since the date partition column is defined there, not on info.)
Count:-
1. hive> SELECT COUNT(*) FROM info;
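COUNT also combines with GROUP BY to count rows per group; a sketch against the partitioned data table created earlier:

```sql
-- Count the rows loaded into each partition of the data table.
SELECT a.date, COUNT(*)
FROM data a
GROUP BY a.date;
```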
Creating custom UDFs:-
1. Create a new class that extends UDF, with one or more methods named evaluate.
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class Lower extends UDF {
    public Text evaluate(final Text s) {
        if (s == null) { return null; }
        return new Text(s.toString().toLowerCase());
    }
}
(Note that there is already a built-in function for this; it is just an easy example.)
2. Compile the function and package it into a jar.
3. Add the jar to Hive's classpath:
hive> add jar my_jar.jar;
Added my_jar.jar to class path
4. Once Hive is started with your jar on the classpath, the final step is to register your function:
hive> create temporary function my_lower as 'Lower';
5. Now you can start using it:
hive> select my_lower(title), sum(freq) from titles group by my_lower(title);
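Temporary functions last only for the current session; to remove one explicitly before then, Hive provides a matching drop statement:

```sql
DROP TEMPORARY FUNCTION my_lower;
```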