hi, mailing list:
i installed hive-metastore and hive-server2 on one box and set the heap
size to 4096 in hive-env.sh,
but i only see hive-metastore using 4096m of memory; hive-server2 still uses 2000m.
why?
# ps -ef|grep metastore
root 22540 62329 0 16:23 pts/15 00:00:00 grep metastore
hive 569
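One thing worth checking is whether hive-env.sh applies the heap setting per service. A minimal sketch for hive-env.sh, assuming the stock hive launcher script (which exports $SERVICE); packaged init scripts for hive-server2 may override this with their own -Xmx:

# hive-env.sh -- HADOOP_HEAPSIZE is in MB and feeds the JVM -Xmx
if [ "$SERVICE" = "hiveserver2" ]; then
  export HADOOP_HEAPSIZE=4096
fi
if [ "$SERVICE" = "metastore" ]; then
  export HADOOP_HEAPSIZE=4096
fi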
running a query in hive gives this error:
0: jdbc:hive2://localhost:1/default> select count(*) from hive_test;
Error: Error while processing statement: FAILED: Execution Error, return
code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
(state=08S01,code=1)
0: jdbc:hive2://localhost:1/default> select
i had not configured a local hadoop; i was just missing
/etc/hadoop/conf/mapred-site.xml. now it works
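For anyone hitting the same local-mode symptom, a quick check of what the Hive client actually sees (a sketch; these are the standard Hadoop property names):

# if this prints "local" for the framework / job tracker, the client box
# is missing the cluster's mapred-site.xml (and yarn-site.xml for YARN)
hive -e "set mapreduce.framework.name; set mapred.job.tracker;"
ls -l /etc/hadoop/conf/mapred-site.xml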
On Mon, May 12, 2014 at 4:42 PM, Shengjun Xin wrote:
> According to the log, you configured a local hadoop; you need to check
> your configuration
>
>
> On Mon, May 12, 2014 at 3:46
hi, mailing list:
i am trying hive 0.12, but it always runs jobs in local mode. why? i installed
hive on a separate box, installed the hadoop client on it, and configured the
client to connect to my hadoop cluster
hive> select count(*) from media_visit_info;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Numb
hi, mailing list:
i find that dropping a table in hive sometimes takes a long time. why?
Time taken: 0.994 seconds
hive (default)> drop table t2;
OK
Time taken: 60.661 seconds
hive (default)> drop table t3;
OK
Time taken: 0.229 seconds
hi, mailing list:
since the scribe mailing list is not very active, i am asking the question
here; i hope an expert can answer it.
i use scribe to write data into HDFS; the HDFS path is a partition of a hive
table, partitioned by date. my idea is to use the scribe category
name as the hive tabl
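If the aim is to expose each scribe date directory as a partition of a Hive table, one common pattern is to add the directory as a partition location. A sketch only; the table name, partition column, and path are made up for illustration, and the table would normally be created EXTERNAL with a matching schema:

# hypothetical category/table name and path; adjust to the real scribe layout
hive -e "
  ALTER TABLE scribe_logs ADD IF NOT EXISTS
  PARTITION (dt='2014-05-12')
  LOCATION '/scribe/scribe_logs/2014-05-12';
"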
hink we need to
>> support adding encoding parameter as part of jdbc url similar to mysql
>> jdbc's useUnicode/characterEncoding flags.
>>
>> I can take a look at it if nobody else has. For now, I think you can
>> manually encode the result value from jdbc.
>>
>
hi, mailing list:
we use hive to store UTF-8 chinese characters, but when queried through
hive jdbc they come back as unreadable characters; through the hive
shell they look normal. why? is this a bug in hive jdbc? how can i solve it?
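Until the driver grows an encoding flag like the one discussed above, a workaround that sometimes helps is forcing the client JVM's default charset to UTF-8 before starting beeline or the JDBC client. A hedged sketch, not a guaranteed fix; host and port are assumptions (10000 is only the usual HiveServer2 default):

# assumes the mojibake comes from the client JVM's default charset, not the stored data
export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Dfile.encoding=UTF-8"
beeline -u "jdbc:hive2://localhost:10000/default"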
it did not work; when i wrote the script, the first approach i tried was the one you suggested
On Mon, Dec 2, 2013 at 8:07 PM, Nitin Pawar wrote:
> try writing a wrong ddl and capture the command exit status with $?
>
> if that's what you are looking for
>
>
> On Mon, Dec 2, 2013 at 5
the following
> command.
>
>
>
> $HIVE_HOME/bin/hive -e "DDL"
>
>
>
> Thanks & Regards,
>
> Rinku Garg - Global Commercial Services
>
> From: ch huang [mailto:justlo...@gmail.com]
> Sent: 02 December 2013 13:22
> To: user@hive.ap
hi, mailing list:
i am writing a bash script; what i want is to run "hive -e 'DDL'"
in the script and then judge whether the hive DDL succeeded or failed. how can i do
this?
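For reference, a minimal sketch of the exit-status approach being discussed (the original poster reports it did not work in his case); the DDL below is only a placeholder:

#!/bin/bash
# run a DDL statement and branch on the hive command's exit status
hive -e "CREATE TABLE IF NOT EXISTS demo_check (id INT)"
if [ $? -eq 0 ]; then
  echo "DDL succeeded"
else
  echo "DDL failed" >&2
  exit 1
fi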
205006E8A3EB4CA1997D947D89C5FD1B205006E8A3EB4CA1997D947D89C5FD1B
1 NULL2013-11-28
2050054E19B44C4D992D97C1661A26C32050054E19B44C4D992D97C1661A26C3
1 NULL2013-11-28
204fa32a43ef4aefac3b391562c5a25b7149D57B47E74C6F8C22CC30292FADF1
1 NULL2013-11-28
On Thu, Nov 28, 2013 at 10:
hi, mailing list:
i added some new columns to a hive table, but i find that when i use
beeline or write my own java code, i cannot get the values of these new
columns, while through the hive shell it's ok.
0: jdbc:hive2://localhost:1> desc test_alex1;
+---++--+
| col_name | da
Time taken: 0.272 seconds
hive (default)> drop table demo3;
OK
Time taken: 60.524 seconds
hive (default)> create table demo3 like demo;
OK
Time taken: 0.273 seconds
hi, all:
i run the hive client on a separate box, but every job submitted from that client runs as a
local job. why? when i try it from the box running hive-server2, the jobs are submitted
as distributed jobs.
hi, all:
i use flume to collect log data and put it in hdfs. i want to use hive
to do some calculation and query by time range, so i want to use a partitioned table,
but the data in hdfs is one big file. how can i load it into a partitioned
table in hive?
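One approach is to map the big file as an unpartitioned staging table and then repartition it with a dynamic-partition insert. A sketch only; the table names, the delimiter, and the ts column are assumptions about the flume output:

# hypothetical staging table over the flume directory, then a dynamic-partition rewrite
hive -e "
  CREATE EXTERNAL TABLE logs_raw (ts BIGINT, line STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LOCATION '/flume/logs';

  CREATE TABLE logs_part (ts BIGINT, line STRING) PARTITIONED BY (dt STRING);

  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;
  INSERT OVERWRITE TABLE logs_part PARTITION (dt)
  SELECT ts, line, to_date(from_unixtime(ts)) AS dt FROM logs_raw;
"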
hi, all:
i have a question: when my table data is on hdfs, queries are very quick, but
when the table data is in hbase, querying through hive is very slow. why?
hive> select page_url, concat_ws('&',
          map_keys(UNION_MAP(MAP(first_category,'fcategory')))) as fcategorys,
      token, concat_ws('&',
          map_keys(UNION_MAP(MAP(concat(original_category,',',weight),'dummy')))) as
      r from media_visit_info group by page_url, token;
# jstat -gcutil 28409 1000 10
S0 S1
HI, ALL:
i executed a query but got an error; does anyone know what happened? BTW i use the yarn
framework.
2013-08-22 09:47:09,893 Stage-1 map = 28%, reduce = 1%, Cumulative CPU
4140.64 sec
2013-08-22 09:47:10,952 Stage-1 map = 28%, reduce = 1%, Cumulative CPU
4140.72 sec
2013-08-22 09:47:12,008 Stage-1 map = 28
hi, all:
i am not very familiar with HQL, and my problem is that i now have 2
queries:
Q1: select page_url, original_category,token from media_visit_info group by
page_url, original_category,token limit 10
Q2: select original_category as code , weight from media_visit_info where
page_url='X' gr
hi, all:
i have a problem: i have a hive table whose source data is on
HDFS, and i want to fetch query results in parts for processing (the whole data set is
huge, so it must be loaded part by part). any good suggestions? (hive has no paging
function for query results)
the limit clause has no offset option, so how can i page through query results in hive?
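One workaround that is often suggested, assuming Hive 0.11+ windowing support and some column to order by (the table and column below are borrowed from the earlier examples, purely for illustration), is to number the rows and filter by range:

# "page 2" of 1000 rows, simulated with row_number(); note this still scans the table each time
hive -e "
  SELECT page_url, token FROM (
    SELECT page_url, token,
           row_number() OVER (ORDER BY page_url) AS rn
    FROM media_visit_info
  ) t
  WHERE rn > 1000 AND rn <= 2000;
"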
.99 MB/s). Index size is 0.52 KB.
13/07/22 09:39:06 INFO lzo.LzoIndexer: [INDEX] LZO Indexing file
hdfs://CH22:9000/alex/test1.lzo, size 0.00 GB...
13/07/22 09:39:06 INFO lzo.LzoIndexer: Completed LZO Indexing in 0.08
seconds (0.00 MB/s). Index size is 0.01 KB.
On Mon, Jul 22, 2013 at 1:37 PM,
hi, all:
i have already installed and tested lzo in hadoop and hbase, all successfully, but
when i try it in hive it fails. what should i do to make hive recognize lzo?
hive> set mapred.map.output.compression.codec;
mapred.map.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec
hive> set
map
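For comparison, switching a session over to the LZO codecs usually looks something like the sketch below; it assumes the hadoop-lzo jar and native libraries are already on Hive's classpath and that the codec classes are listed in io.compression.codecs (the test table name is borrowed from a later message, purely for illustration):

hive -e "
  SET mapred.map.output.compression.codec=com.hadoop.compression.lzo.LzoCodec;
  SET hive.exec.compress.output=true;
  SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;
  SELECT COUNT(*) FROM alex_test_big_seq;
"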
why did the task fail? can anyone help?
hive> select cookieid,count(url) as visit_num from alex_test_big_seq group
by cookieid order by visit_num desc limit 10;
MapReduce Total cumulative CPU time: 49 minutes 20 seconds 870 msec
Ended Job = job_1374214993631_0037 with errors
Error during job, obta
ATT
the table has more than 12,000 records
On Fri, Jul 19, 2013 at 9:34 AM, Stephen Boesch wrote:
> one mapper. how big is the table?
>
>
> 2013/7/18 ch huang
>
>> i waited a long time with no result. why is hive so slow?
>>
>> hive> select cookie,url,ip,so
i waited a long time with no result. why is hive so slow?
hive> select cookie,url,ip,source,vsid,token,residence,edate from
hb_cookie_history where edate>='1371398400500' and edate<='1371400200500';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce
here is my test output. why does beeline report an error when i use a where
condition?
hive> select foo from demo_hive where bar='value3';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1373509276088_0015, Tracking
what should i do to change the example code, which is based on the 0.92 api, so that it can
run on 0.94?
i exported the env variable HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce in
hive-env.sh, and now it works fine
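For anyone searching the archive later, the fix described above amounts to one line in hive-env.sh (the path is the one given by the original poster):

# hive-env.sh -- point Hive at the YARN/MR2 client libraries
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce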
On Thu, Jul 11, 2013 at 10:42 AM, ch huang wrote:
> i have already tested yarn and it works fine, but i find it does not work
> from hive. why? i use hive 0.94.6; does it still not support yarn?
i have already tested yarn and it works fine, but i find it does not work from hive. why?
i use hive 0.94.6; does it still not support yarn?
i do not know why; i have no mapred.reduce.tasks option defined in my config
file. can anyone help?
hive-metastore.log:2013-07-10 10:10:34,890 WARN conf.Configuration
(Configuration.java:warnOnceIfDeprecated(824)) -
mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
mapreduce.redu
i tried to start hiveserver2, but there are errors in the log file. why? i use hive 0.10
2013-07-10 14:18:31,821 WARN hive.metastore
(HiveMetaStoreClient.java:open(285)) - Failed to connect to the MetaStore
Server...
2013-07-10 14:18:32,830 ERROR service.CompositeService
(CompositeService.java:start(74)) - Error st
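The log above says HiveServer2 failed to connect to the metastore, so one hedged thing to verify is that a standalone metastore is actually running and reachable at whatever hive.metastore.uris points to before HiveServer2 starts. A sketch of the usual startup order (log paths are only examples):

# start the standalone metastore first, then hiveserver2
hive --service metastore > /tmp/hive-metastore.out 2>&1 &
hive --service hiveserver2 > /tmp/hiveserver2.out 2>&1 &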
"jdbc:hive"
> from " jdbc:hive2"
>
>
>
>
> On Wed, Jul 10, 2013 at 12:34 PM, ch huang wrote:
>
>> i use following java code
>>
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.ResultSet;
i use following java code
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
public class DemoHive {
public static void main(String[] args) throws Exception {
Class.forName("org.apache.hadoop.hive.
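The quoted suggestion above is about matching the JDBC URL to the driver class: org.apache.hadoop.hive.jdbc.HiveDriver pairs with jdbc:hive:// (the old HiveServer), while org.apache.hive.jdbc.HiveDriver pairs with jdbc:hive2:// (HiveServer2). A quick way to confirm the HiveServer2 endpoint independently of the Java code is beeline; host and port below are assumptions (10000 is only the usual default):

# verify the hive2 endpoint answers before debugging the Java side
beeline -u "jdbc:hive2://localhost:10000/default" -e "show tables;"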
here is my hive config file; i do not know why this happens. can anyone help?
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.10.118/metastore</value>
  <description>the URL of the MySQL database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
javax.jdo.op
i installed CDH 4.3 and already tried mapreduce v1, which works fine, but when i stop
mapred v1 and start yarn, hive cannot use it. why?
set mapred.reduce.tasks=
java.net.ConnectException: Call From CH22/192.168.10.22 to CH22:9001 failed
on connection exception: java.net.ConnectException: Connection refused
i am testing the hive web interface; can anyone help? thanks
content of hive-site.xml:
<property>
  <name>hive.hwi.war.file</name>
  <value>/usr/lib/hive/lib/hive-hwi-0.7.1-cdh3u4.war</value>
  <description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>
# export ANT_LIB=/usr/share/ant/lib
# hive --service hwi
13/07/09 13:19:40 INFO
)
... 24 more
)
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
On Mon, Jul 8, 2013 at 2:52 PM, Cheng Su wrote:
> Did your hbase cluster start up?
>
> The error message looks more like something is wrong with the classpath,
> so maybe you'd better also
and i have zookeeper on port 2281, not the default port
On Mon, Jul 8, 2013 at 2:52 PM, Cheng Su wrote:
> Did your hbase cluster start up?
>
> The error message looks more like something is wrong with the classpath,
> so maybe you'd better also check that.
>
>
> On Mon,
wrote:
> Did your hbase cluster start up?
>
> The error message looks more like something is wrong with the classpath,
> so maybe you'd better also check that.
>
>
> On Mon, Jul 8, 2013 at 1:54 PM, ch huang wrote:
>
>> i get error when try create table on hbase us
i get an error when trying to create a table on hbase using hive. can anyone help?
hive> CREATE TABLE hive_hbasetable_demo(key int,value string)
> STORED BY 'ora.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
> TBLPROPERTIES ("hbase.t
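One thing that stands out in the DDL above is the storage handler class: it is written as 'ora.apache...', while the actual class is org.apache.hadoop.hive.hbase.HBaseStorageHandler, which would fit a classpath-style error. A hedged sketch of the corrected statement (the hbase.table.name value is an assumption, since the original line is cut off):

hive -e "
  CREATE TABLE hive_hbasetable_demo (key INT, value STRING)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf1:val')
  TBLPROPERTIES ('hbase.table.name' = 'hive_hbasetable_demo');
"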