On Wed, Jan 23, 2013 at 4:46 PM, Ehsan Haq wrote:
> ERROR: invalid escape string
> Hint: Escape string must be empty or one character.)
>
You can set standard_conforming_strings = off in postgresql.conf to avoid
this.
This problem was caused by some environment variables not being set when the
script is run by crontab; it's a common pitfall when writing shell scripts. :)
Add the following line before the command in your crontab entry:
source ~/.bashrc
On Thu, Nov 22, 2012 at 5:59 PM, Chunky Gupta wrote:
> Hi,
> I have a python script :-
>
> ---cron_script.py---
>
> import os
> import sys
> from subprocess import call
> print 'starting'
http://www.mail-archive.com/user@hive.apache.org/msg01293.html
Maybe the same error?
On Wed, Aug 29, 2012 at 12:27 PM, rohithsharma wrote:
> Hi
>
>
>
> I am using PostgreSQL-9.0.7 as the metastore with Hive-0.9.0. I integrated
> postgres with hive. A few queries are working fine. I am using
>
> postg
n it came to dropping 1 partition for an hour for a particular day, it
> seemed to drop all of the hour partitions for the day.
>
> IMO it's a bug. We just moved to using text
>
>
> Malc
>
>
> -----Original Message-----
> From: wd [mailto:w...@wdicc.com]
> Sent:
Is this a bug? Should I report it to Hive?
On Thu, May 31, 2012 at 3:56 PM, wd wrote:
> Still no useful output. But we found the problem.
>
> When the partition col type is int, it can't be dropped. After changing
> it to string, it can be dropped.
>
> On Thu, May 31,
> -hiveconf hive.log.dir="$logDir" \
> -hiveconf hive.log.file="$logFile" \
>
> On Thu, May 31, 2012 at 12:41 AM, wd wrote:
>>
>> Nothing is output in the hive history file. Is there another log file or an
>> option to output a more detailed log?
>>
>> On Thu, May 31, 2012 at 3:34 P
Nothing is output in the hive history file. Is there another log file or an
option to output a more detailed log?
On Thu, May 31, 2012 at 3:34 PM, Aniket Mokashi wrote:
> You should look at the Hive log and find the exact exception. That will give
> you a hint.
>
>
> On Thu, May 31, 2012 at 12:3
saw that.
> Really Sorry.
>
> On Thu, May 31, 2012 at 12:53 PM, Bhavesh Shah
> wrote:
>>
>> Hello wd,
>> Try this one... I am not sure about this
>> ALTER TABLE t1 DROP PARTITION(dt = '111')
>>
>>
>> --
>> Regards,
>> Bhavesh
hi,
We set up a new Hive 0.9 client and found some SQL did not work. For example:
hive> create table t1(a int) partitioned by ( dt int );
OK
Time taken: 0.097 seconds
hive> load data local inpath '/tmp/t' into table t1 partition (dt=111);
Copying data from file:/tmp/t
Copying file: file:/tmp/t
Loading
count from gt group by category) a,
>
Maybe you should delete this blank line?
> (select count(*) as totalCount from gt) b ;
>
> On Mon, May 28, 2012 at 1:55 PM, wd wrote:
>>
>> group by category
>>
>> On Mon, May 28, 2012 at 2:20 PM, shan s wrote:
>> > (select category, count(*) as count from gt group by cat) a,
>
>
group by category
On Mon, May 28, 2012 at 2:20 PM, shan s wrote:
> (select category, count(*) as count from gt group by cat) a,
select a.id, a.count, a.count/b.val
from (select id, count(*) as count from data group by id) a,
(select val from tableY ) b
You don't have a join condition, so this is a cross join and may produce a very large output.
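To illustrate (table names follow the thread; this is an untested sketch): since b is a single-row subquery here, the condition-less join merely attaches b.val to every row of a, but if b had many rows the result would grow to |a| * |b| rows.

```sql
-- b yields one row, so this cross join is effectively a scalar lookup;
-- a multi-row b would multiply the output size.
select a.id, a.count, a.count / b.val
from (select id, count(*) as count from data group by id) a
join (select val from tableY) b;
```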
On Mon, May 28, 2012 at 11:45 AM, shan s wrote:
> Thanks Edward. But I didn't get the trick yet.
>
Hive can auto-create the metastore tables; maybe you can try that out.
On Sat, May 12, 2012 at 2:28 PM, Xiaobo Gu wrote:
> I can't find it in the release package.
>
>
> Xiaobo Gu
Maybe someone could set up a site to accept user-uploaded UDF or UDAF jars,
like CPAN for Perl or AUR for Arch Linux. :D
On Sun, May 6, 2012 at 1:28 AM, Edward Capriolo wrote:
> Hey all,
>
> We (m6d.com) have released an implementation of the rank feature to github:
>
> https://github.com/edwardcapriolo/h
Hive does not 'join' your data itself; the join is all done by Hadoop.
On Sat, Mar 17, 2012 at 7:27 AM, Dani Rayan wrote:
> Can Hive be configured to work with multiple namenodes(clusters)? I
> understand we can use command 'SET' to set any hadoop (or hive)
> configuration variable. But is it possible to handl
A UDF can accept parameters; could you pass your configuration as one of them?
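For example, a minimal sketch (my_udf, the key=value string, and the table name are hypothetical, not from the original thread): configuration can be passed as an extra constant argument that the UDF reads on each call.

```sql
-- Hypothetical UDF invocation: the first argument carries the configuration.
SELECT my_udf('threshold=10', col1) FROM some_table;
```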
On Tue, Feb 14, 2012 at 10:02 AM, Parimi, Nagender wrote:
> Hi,
>
>
>
> Is there a way to pass in some custom arguments to a UDF when initializing
> it? I’d like to pass in some configuration parameters to a UDF, but did
Because 'select *' does not run a MapReduce job, maybe you should
check whether your Hadoop cluster is working.
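A quick way to check, sketched with the thread's table name m (hedged; exact behavior depends on the Hive version):

```sql
-- Served by a simple file read; no MapReduce job is launched.
select * from m;

-- Forces a MapReduce job, so it actually exercises the cluster.
select count(1) from m;
```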
On Mon, Jan 2, 2012 at 10:37 AM, Aditya Kumar wrote:
>
> Hi,
> I am able to install hive, and create a table (external) and map it to my
> Hbasetable.
>
> I am able to do
> hive>select * from m
the problem you've met?
On Thu, Nov 3, 2011 at 1:56 AM, Weishung Chung wrote:
> Hi,
> I am trying to integrate Hive and HBase using Cloudera cdh3u1, but still
> can't get it to work. Has anyone had any success?
> Thank you,
> Wei Shung
Use UTF-8.
Or have you tried using INSERT OVERWRITE to a local directory and checking the
output file?
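A hedged sketch of that suggestion (the table name is hypothetical): write the rows out to a local directory and inspect the raw bytes, which sidesteps any terminal/display encoding issues.

```sql
-- Dump the table to local files, then examine them with e.g. a hex viewer
-- to see whether the Chinese text survived loading.
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive_check'
SELECT * FROM your_table;
```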
On Wed, Nov 2, 2011 at 9:39 PM, Bing Li wrote:
> Hi, guys
> I want to load some data files which include Chinese words.
> Currently, I found Hive can't display them well.
> Is there some setting/propert
http://www.apache.org/dyn/closer.cgi/hive/
On Fri, Oct 28, 2011 at 9:02 AM, trang van anh wrote:
> Dear all,
>
> Can anybody show me how to get the latest hive source code?
>
> Thanks in advance.
>
> Trang.
>
>
>
You can look at your Hadoop JobTracker log for the details of the error.
On Tue, Oct 11, 2011 at 5:04 PM, trang van anh wrote:
> Dear Experts,
>
> I have a problem with building an index table in hive, step by step:
>
> 1. i created table named pv_users (pageid int, age int ) that has around 31
> mi
Hive supports more than one partition column; have you tried that? Maybe you
can create two partition columns, named date and user.
Hive 0.7 also supports indexes; maybe you can give that a try.
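A rough sketch of both suggestions (table and column names are made up; the index syntax is the Hive 0.7-era form and may differ in other versions):

```sql
-- Two partition columns, as suggested above.
CREATE TABLE logs (line string)
PARTITIONED BY (dt string, user string);

-- A compact index on a table column (WITH DEFERRED REBUILD requires a
-- later ALTER INDEX ... REBUILD to populate it).
CREATE INDEX logs_idx ON TABLE logs (line)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;
```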
On Sat, Sep 3, 2011 at 1:18 AM, Mark Grover wrote:
> Hello folks,
> I am fairly new to Hive and am wondering if you could shar
What about your total Map Task Capacity?
You can check it at http://your_jobtracker:50030/jobtracker.jsp
2011/8/24 Daniel,Wu :
> I checked my settings; all have the default value. So per the book
> "Hadoop: The Definitive Guide", the split size should be 64M. And the file
> size is about 500
You can try Hive 0.5; after creating the metastore schema, use the upgrade SQL
file shipped with Hive 0.7.1 to upgrade it to 0.7.1.
On Sat, Aug 20, 2011 at 2:20 PM, Xiaobo Gu wrote:
> Hi,
> I have just set up a PostgreSQL 9.0.2 server for hive 0.7.1 metastore,
> and I am using the postgresql-9.0-801.jdbc4.jar jdbc driver, w
ysql jar in the class path, why can't the stats publisher find it. I looked
> at the stats source and everything looks fine.
>
> My conn string is :
> jdbc:mysql://:3306/TempStatsStore&user=&password=.
>
> Am I missing something?
>
> Thanks
The error in the log is 'java.lang.ClassNotFoundException:
org.postgresql.Driver': a missing driver class, not a connection failure or a
username/password error.
On Wed, Aug 17, 2011 at 3:53 PM, Jander g wrote:
> Hi,wd
>
> You should configure "hive.stats.dbconnectionstring" as follows.
>
    } catch (Exception e) {
      return null;
    }
  }
  //public static void main(String args[]) {
  //  String t = "%E5%A4%AA%E5%8E%9F-%E4%B8%89%E4%BA%9A";
  //  System.out.println( getString(t) );
  //}
}
On Tue, Aug 16, 2011 at 10:47 AM, wd wrote:
> Thanks for all your advise, I
e.
>> On Mon, Aug 15, 2011 at 1:49 AM, wd wrote:
>>>
>>> hi,
>>>
>>> I created a UDF to decode URL-encoded things, but found the mapred time is
>>> 3x what it was before (73 sec -> 213 sec). How can I optimize it?
>>>
>>> package co
hi,
I created a UDF to decode URL-encoded things, but found the mapred time is
3x what it was before (73 sec -> 213 sec). How can I optimize it?
package com.test.hive.udf;
import org.apache.hadoop.hive.ql.exec.UDF;
import java.net.URLDecoder;
public final class urldecode extends UDF {
public Str
HBase Publisher/Aggregator classes cannot be loaded.
I'd need to configure a publisher/aggregator for HBase... so there is only one
way left, which is to use MySQL.
Will the stats database optimize Hive queries? I'm considering whether or not
to set up a MySQL instance for this.
On Mon, Aug 15, 2011 at 3:17 PM, wd wrote:
>
Oh, I found Hive only supports MySQL and HBase for this. I'll try HBase.
On Mon, Aug 15, 2011 at 3:09 PM, wd wrote:
> hi,
>
> I'm trying to use Postgres as the stats database, and made the following
> settings in hive-site.xml
>
>
>
> hive.stats.dbclass
> jdbc:postgresq
hi,
I'm trying to use Postgres as the stats database, and made the following
settings in hive-site.xml:
<property>
  <name>hive.stats.dbclass</name>
  <value>jdbc:postgresql</value>
  <description>The default database that stores temporary hive
statistics.</description>
</property>
<property>
  <name>hive.stats.autogather</name>
  <value>true</value>
  <description>A flag to gather statistics automatically during the
INSERT OVERWRITE command.</description>
</property>
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML
2011/8/12 Daniel,Wu
> suppose the table is partitioned by period_key, and the csv file also has
> a column named period_key. The csv file contains multiple days of data;
> how can we load it into the table?
>
> I think of
Hi,
Can I use a special char like '^B' as the split pattern?
I've tried '\002', '^B', '0x02'; all failed.
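Two hedged guesses, depending on where the delimiter is used (both untested): in DDL an octal escape is the usual form, while the split() UDF takes a Java regex, where the backslash may need doubling inside the SQL string literal.

```sql
-- As a table field delimiter (octal escape in the DDL):
CREATE TABLE t (a string, b string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\002';

-- As a split() pattern (Java regex; the doubled backslash is an assumption):
SELECT split(line, '\\002') FROM some_table;
```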
aw anything useful in that file. I delete it whenever I see
> it.
>
> What’s in TempStatsStore?
>
> Pat
>
> *From:* wd [mailto:w...@wdicc.com]
> *Sent:* Tuesday, May 10, 2011 3:07 AM
> *To:* hive-u...@hadoop.apache.org
> *Subject:* What does 'TempStatsStore
hi,
After upgrading to Hive 0.7, I found that a file named 'derby.log' and a
directory named 'TempStatsStore' are created every time I run a Hive job.
What do these files do? Is there a way to prevent them from being created?
raw_logs table into the new table (e.g., raw_logs_rcfile) that you
> have defined in the different format.
>
> So, this is the only way I can put data into a table defined as a sequence
file? Can I generate the RCFile using a unix command or some tool?
>
> On Apr 27, 2011, at 9:33 PM, wd wrot
769+08 |
2011-04-28 13:54:16.404204+08 | 2011-04-28 13:54:01.784839+08 | 127.0.0.1 | 37438
But no table named "IDXS" exists. So I think this problem is caused by these
missing tables. Then I ported an upgrade SQL file from MySQL, upgraded my
Postgres, and the problem s
hi,
I've tried to load gzip files into hive to save disk space, but failed.
hive> load data local inpath 'tmp_b.20110426.gz' into table raw_logs
partition ( dt=20110426 );
Copying data from file:/home/wd/t/tmp_b.20110426.gz
Copying file: file:/home/wd/t/tmp_b.20110426.gz
Loa
where date_day='20110202') u group
>> by u.eser_sid
>>
>> date_day is a partition
>>
>> and this produced the results I wanted, but as you can see it is a
>> double query. I don't know if there is a single-query way of doing it.
>>
>> b
Maybe
select item_sid, count(distinct ip_number, session_id) from item_raw group
by item_sid, ip_number, session_id
(I haven't tested it; maybe it should be concat(ip_number, session_id) instead
of ip_number, session_id) is what you want.
2011/2/21 Cam Bazz
> Hello,
>
> So I have table of item vi
Finally I found that with hive-0.5.0-bin, 'drop table' hangs the first time;
after killing the client with Ctrl-C and running hive again, it can
successfully drop the table. With hive-0.6.0-bin, it always hangs there.
2011/1/6 wd
> hi,
>
> I've setup a single node
Oh, WTF.
It works now, but I've done nothing!
On Jan 17, 2011 at 3:14 PM, wd wrote:
> I've tried it on postgresql-8.1.22-1.el5_5.1, and tried hive-0.5-bin; the
> problem is still there...
> Also tried postgresql-8.4-702.jdbc4.jar. Has anyone else had this problem?
>
> 2011/1/6 wd
&
I've tried it on postgresql-8.1.22-1.el5_5.1, and tried hive-0.5-bin; the
problem is still there...
Also tried postgresql-8.4-702.jdbc4.jar. Has anyone else had this problem?
2011/1/6 wd
> 11/01/06 18:20:14 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=t1
> 11/01/06
hi,
I have a file like:
1000^A1,2,3,4,5^B4,5,6,7,8^B4,5,6,9,7
Expect to create a row like:
col1    col2
1000    [[1,2,3,4,5],[4,5,6,7,8],[4,5,6,9,7]]
So we can select it like "select col2[2][1] from t1", and the result should
be "4".
The table can be created by sql:
create table t1 (
col1 int,
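The DDL above is cut off; a possible completion, as an untested sketch. Note the sample data separates inner elements with ',' while the outer groups use ^B, and Hive's default delimiter assignment for nested collections may not match that, so custom delimiters or a SerDe could be needed.

```sql
CREATE TABLE t1 (
  col1 int,
  col2 array<array<int>>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\001'            -- ^A between col1 and col2
  COLLECTION ITEMS TERMINATED BY '\002'; -- ^B between the inner arrays
```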
want even more logging info try
>
> hive -hiveconf hive.root.logger=DEBUG,console
>
> Thanks.
>
> Carl
>
> On Thu, Jan 6, 2011 at 1:29 AM, wd wrote:
>
>> hi,
>>
>> I've setup a single node hadoop and hive. And can create table in hive,
>> but c
hi,
I've set up a single-node Hadoop and Hive. I can create tables in Hive, but
can't drop them; the hive CLI just hangs there, with no more info.
hive-0.6.0-bin
hadoop-0.20.2
jre1.6.0_23
postgresql-9.0-801.jdbc4.jar (have tried postgresql-8.4-701.jdbc4.jar)
pgsql 9.0.2
How can I find out what went wrong?
join_table:
    table_reference JOIN table_factor [join_condition]
  | table_reference {LEFT|RIGHT|FULL} [OUTER] JOIN table_reference join_condition
  | table_reference LEFT SEMI JOIN table_reference join_condition

table_reference:
    table_factor
  | join_table

table_factor:
    tbl_name [alias