Installing and Using Hive

1. Install Hive:

sudo tar xvfz apache-hive-2.1.0-bin.tar.gz

sudo cp -R apache-hive-2.1.0-bin /usr/local/hive

sudo chown -R hadoop:hadoop /usr/local/hive

2. Edit /etc/profile to add the HIVE_HOME variable:

export HIVE_HOME=/usr/local/hive

export PATH=$PATH:$HIVE_HOME/bin

export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:/usr/local/hive/lib

$ source /etc/profile

3. Copy the template files under hive/conf to their working names:

cp hive-env.sh.template hive-env.sh

cp hive-default.xml.template hive-site.xml

In hive-env.sh, set HADOOP_HOME:

HADOOP_HOME=/usr/local/hadoop

4. Edit hive-site.xml to set the MySQL JDBC driver, connection URL, username, and password, as shown below:

<property>

  <name>javax.jdo.option.ConnectionURL</name>

<value>jdbc:mysql://192.168.111.16:3306/hive?createDatabaseIfNotExist=true</value>

  <description>JDBC connect string for a JDBC metastore</description>

</property>

<property>

  <name>javax.jdo.option.ConnectionDriverName</name>

  <value>com.mysql.jdbc.Driver</value>

  <description>Driver class name for a JDBC metastore</description>

</property>

<property>

  <name>javax.jdo.option.ConnectionUserName</name>

  <value>hadoop</value>

  <description>username to use against metastore database</description>

</property>

<property>

  <name>javax.jdo.option.ConnectionPassword</name>

  <value>1234</value>

  <description>password to use against metastore database</description>

</property>

Where:

javax.jdo.option.ConnectionURL specifies the JDBC connection string Hive uses to connect to the database;

javax.jdo.option.ConnectionDriverName specifies the JDBC driver class name;

javax.jdo.option.ConnectionUserName specifies the database username;

javax.jdo.option.ConnectionPassword specifies the database password.
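Hive reads these settings as name/value `<property>` pairs in hive-site.xml. A minimal sketch of how such a file can be inspected with Python's standard library, using the values from this guide (the XML fragment below is a trimmed stand-in for a real hive-site.xml, not the full file):

```python
import xml.etree.ElementTree as ET

# A fragment shaped like the hive-site.xml entries above; host, user and
# password are the ones used in this guide -- adjust for your setup.
HIVE_SITE = """
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.111.16:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hadoop</value>
  </property>
</configuration>
"""

def load_properties(xml_text):
    """Return the <property> entries as a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = load_properties(HIVE_SITE)
print(props["javax.jdo.option.ConnectionUserName"])  # hadoop
```

Reading the file back this way is a quick sanity check that the property names are spelled exactly as Hive expects.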

5. Scratch (temporary) directories — Hive will fail at startup if these are not configured:

 <property>

 <name>hive.exec.local.scratchdir</name>

 <value>/home/hadoop/iotmp</value>

 <description>Local scratch space for Hive jobs</description>

 </property>

 <property>

 <name>hive.downloaded.resources.dir</name>

 <value>/home/hadoop/iotmp</value>

 <description>Temporary local directory for added resources in the remote file system.</description>

 </property>

You also need to create the directory and set its permissions:

mkdir -p /home/hadoop/iotmp

chmod -R 775 /home/hadoop/iotmp

6. Edit hive-config.sh under hive/bin to set JAVA_HOME, HADOOP_HOME, and HIVE_HOME:

export JAVA_HOME=/usr/lib/jvm

export HADOOP_HOME=/usr/local/hadoop

export HIVE_HOME=/usr/local/hive

7. Download mysql-connector-java-5.1.27-bin.jar (available from the MySQL website) and put it in the $HIVE_HOME/lib directory.

8. Create /tmp and /user/hive/warehouse in HDFS and set their permissions:

hadoop fs -mkdir /tmp

hadoop fs -mkdir -p /user/hive/warehouse

hadoop fs -chmod g+w /tmp

hadoop fs -chmod g+w /user/hive/warehouse

9. Initialize the metastore database

Before starting Hive for the first time, initialize the metastore:

schematool -initSchema -dbType mysql

 

hadoop@hadoopmaster:/usr/local/hive/lib$ schematool -initSchema -dbType mysql

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Metastore connection URL: jdbc:mysql://192.168.1.166:3306/hive?createDatabaseIfNotExist=true

Metastore Connection Driver : com.mysql.jdbc.Driver

Metastore connection User: hive

Starting metastore schema initialization to 2.1.0

Initialization script hive-schema-2.1.0.mysql.sql

Initialization script completed

schemaTool completed


 

10. Test the Hive shell:

hive

show databases;

show tables;

11. You can inspect the files Hive creates in HDFS:

hadoop fs -ls /user/hive/warehouse

12. Hive shell usage examples

(1) Create the data file (fields separated by tabs):

~ vi /home/cos/demo/t_hive.txt

16      2       3

61      12      13

41      2       31

17      21      3

71      2       31

1       12      34

11      2       34
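The listing above separates fields with literal TAB characters. A small Python sketch that generates the same file with guaranteed TABs (pasting the listing into an editor often turns the tabs into spaces, which leads to the NULL problem described in FAQ 5; the output path here is illustrative):

```python
import os
import tempfile

# The seven sample rows from the listing above.
rows = [(16, 2, 3), (61, 12, 13), (41, 2, 31), (17, 21, 3),
        (71, 2, 31), (1, 12, 34), (11, 2, 34)]

# Write the rows with real TAB separators.
path = os.path.join(tempfile.gettempdir(), "t_hive.txt")
with open(path, "w") as f:
    for r in rows:
        f.write("\t".join(map(str, r)) + "\n")

# Read the file back and confirm each line splits into exactly three fields.
with open(path) as f:
    fields = [line.rstrip("\n").split("\t") for line in f]
print(len(fields), len(fields[0]))  # 7 3
```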

(2) Create a new table:

hive> CREATE TABLE t_hive (a int, b int, c int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

OK

Time taken: 0.121 seconds

(3) Load t_hive.txt into the t_hive table:

hive> LOAD DATA LOCAL INPATH '/tmp/t_hive.txt' OVERWRITE INTO TABLE t_hive;

Loading data to table default.t_hive

OK

Time taken: 0.609 seconds

(4) List tables:

hive> show tables;

OK

t_hive

Time taken: 0.099 seconds

(5) Match table names with a pattern:

hive> show tables '*t*';

OK

t_hive

Time taken: 0.065 seconds

(6) View table data:

hive> select * from t_hive;

OK

16      2       3

61      12      13

41      2       31

17      21      3

71      2       31

1       12      34

11      2       34

Time taken: 0.264 seconds

(7) View the table schema:

hive> desc t_hive;

OK

a       int

b       int

c       int

Time taken: 0.1 seconds

(8) Add a column:

hive> ALTER TABLE t_hive ADD COLUMNS (new_col String);

OK

Time taken: 0.186 seconds

hive> desc t_hive;

OK

a       int

b       int

c       int

new_col string

Time taken: 0.086 seconds

(9) Rename a table:

hive> ALTER TABLE t_hive RENAME TO t_hadoop;

OK

Time taken: 0.45 seconds

hive> show tables;

OK

t_hadoop

Time taken: 0.07 seconds

(10) Drop a table:

hive> DROP TABLE t_hadoop;

OK

Time taken: 0.767 seconds

hive> show tables;

OK

Time taken: 0.064 seconds

FAQ

1. hadoop@hadoopmaster:/usr/local/hive/conf$ hive

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Cause:

schematool -initSchema -dbType mysql had not been run; running it resolved the problem.

 

2. hadoop@hadoopmaster:/usr/local/hive/lib$ hive

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/hive/lib/hive-common-2.1.0.jar!/hive-log4j2.properties Async: true

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D

 

You need to create the directory and set its permissions:

mkdir -p /home/hadoop/iotmp

chmod -R 775 /home/hadoop/iotmp
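This exception means Hive is still trying to resolve the ${system:java.io.tmpdir} placeholders copied from the template configuration, so besides creating the directory, make sure the scratch-directory settings in hive-site.xml point at a real path as in step 5. A minimal sketch of that substitution (the directory and user name follow this guide; adjust for your setup):

```python
def fix_tmpdir(text, tmpdir="/home/hadoop/iotmp", user="hadoop"):
    """Replace the template's ${system:...} placeholders with fixed paths,
    mirroring the hive.exec.local.scratchdir setting from step 5."""
    return (text.replace("${system:java.io.tmpdir}", tmpdir)
                .replace("${system:user.name}", user))

sample = "<value>${system:java.io.tmpdir}/${system:user.name}</value>"
print(fix_tmpdir(sample))  # <value>/home/hadoop/iotmp/hadoop</value>
```

The same replacement can be applied to the whole hive-site.xml file (read it, transform it, write it back) instead of editing each property by hand.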

 

3. A hadoop command reports an error:

(Figure 2: error screenshot)

Solution:

(Figure 3: solution screenshot)

4. Starting hive:

(Figure 4: screenshot)

 

5. Data loaded into Hive shows up as NULL

(Figure 5: screenshot)

(Figure 6: screenshot)

Fixing the NULLs:

The data file had been copied and pasted directly, so the field separators were spaces rather than real tabs; re-entering the separators with the TAB key makes the rows display correctly.
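A small Python illustration of why pasted spaces produce NULLs: the table was declared with FIELDS TERMINATED BY '\t', so each line is cut only on tab characters. A space-separated line yields a single field, and the missing columns come back as NULL:

```python
def split_row(line, delimiter="\t"):
    """Mimic how a delimited-text row is cut into columns."""
    return line.rstrip("\n").split(delimiter)

tab_row = "16\t2\t3"      # real TAB characters: three columns
space_row = "16  2  3"    # pasted spaces: one field, other columns missing

print(split_row(tab_row))    # ['16', '2', '3']
print(split_row(space_row))  # ['16  2  3']
```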

 
