+1
On Wed, Apr 20, 2016 at 1:24 AM, Jimmy Xiang wrote:
> +1
>
> On Tue, Apr 19, 2016 at 2:58 PM, Alpesh Patel
> wrote:
> > +1
> >
> > On Tue, Apr 19, 2016 at 1:29 PM, Lars Francke
> > wrote:
> >>
> >> Thanks everyone! Vote runs for at least one more day. I'd appreciate it
> if
> >> you could p
Hi folks,
I am trying to create HFiles from a Hive table to bulk load into HBase and
am following the HWX [1] tutorial.
It creates the HFiles correctly but then fails when closing the
RecordWriter with the following stack trace.
Error: java.lang.RuntimeException: Hive Runtime Error while closing
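For reference, the HFile-generation step in that tutorial is roughly the following (the output path and column family here are hypothetical; `HiveHFileOutputFormat` and `hfile.family.path` come from Hive's hbase-handler module):

```sql
-- Emit HFiles instead of writing through the HBase API
SET hive.hbase.generatehfiles = true;
SET hfile.family.path = /tmp/hfiles/cf;  -- one output dir per column family (hypothetical path)

CREATE TABLE hbase_hfiles(rowkey STRING, val STRING)
STORED AS
  INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.hbase.HiveHFileOutputFormat';

-- HFile output requires rows sorted by key, hence CLUSTER BY
INSERT OVERWRITE TABLE hbase_hfiles
SELECT key, value FROM source_table CLUSTER BY key;
```

The generated files are then loaded with HBase's `completebulkload` tool.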
the argument is void, so that
> all the invocations would be "having the same value", then I tried to pass
> in a param to prevent this possibility.
>
>
> On Mon, Sep 30, 2013 at 1:55 PM, Tim Robertson
> wrote:
>
>> It's been ages since I wrote one, bu
That class is:
https://code.google.com/p/gbif-occurrencestore/source/browse/trunk/occurrence-store/src/main/java/org/gbif/occurrencestore/hive/udf/UDFRowSequence.java
Cheers,
Tim
On Mon, Sep 30, 2013 at 10:55 PM, Tim Robertson
wrote:
> It's been ages since I wrote one, but the differ
It's been ages since I wrote one, but the differences to mine:
a) I use LongWritable: public LongWritable evaluate(LongWritable startAt) {
b) I have annotations on the class (but I think they are just for docs)
@Description(name = "row_sequence",
value = "_FUNC_() - Returns a generated row sequ
works out. Thanks.
>
>
>
> On Sun, Sep 16, 2012 at 10:51 PM, Tim Robertson wrote:
>
>> Note: I am a newbie to Hive.
>>>
>>> Can someone please answer the following questions?
>>>
>>> 1) Does Hive provide APIs (like HBase does) that
>
> Note: I am a newbie to Hive.
>
> Can someone please answer the following questions?
>
> 1) Does Hive provide APIs (like HBase does) that can be used to retrieve
> data from the tables in Hive from a Java program? I heard somewhere that
> the data can be accessed with JDBC (style) APIs. True
Hi all,
I have a 6 node cluster, and on a simple query created with a table from a
CSV, I was seeing a lot of mappers reporting that they were not using data
locality.
I changed the replication factor to 6 but still MR is showing only about
60% data locality in the data-local map tasks.
How can t
It sounds like you have run Sqoop without specifying a durable metastore
for Hive, e.g. you haven't told Hive to use MySQL, Postgres, etc. to store
its metadata. It probably used Derby DB, which either put it all in
memory, or put it all in the /tmp directory, which was destroyed on restart.
I wou
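The durable-metastore setup described above normally goes in hive-site.xml; a sketch, with placeholder connection details (the `javax.jdo.option.*` property names are Hive's standard metastore settings):

```xml
<!-- hive-site.xml: point the metastore at a durable database
     (host, database, user, and password below are placeholders) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://dbhost:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>secret</value>
</property>
```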
-a105-0200ac1d1c3d
>
> "127.0.0.13"~"644c1c9a-8820-11e1-aaa8-00219b8a879e"~"2012-04-17T00:00:01Z"~"476825ea-8820-11e1-a105-0200ac1d1c3d
>
>
> --
> *From:* Tim Robertson
> *To:* user@hive.apache.org; Gopi Kodumur
>
I believe so. From the tutorial [1] :
CREATE EXTERNAL TABLE page_view_stg(viewTime INT, userid BIGINT,
page_url STRING, referrer_url STRING,
ip STRING COMMENT 'IP Address of the User',
country STRING COMMENT 'country of origination')
Apologies, it does indeed work when you add the correct JARs in Hive.
Tim
On Tue, Apr 17, 2012 at 3:33 PM, Tim Robertson wrote:
> Hi all,
>
> I am *really* interested in Hive-1634 (
> https://issues.apache.org/jira/browse/HIVE-1634). I have just built from
> Hive trunk using
Hi all,
I am *really* interested in Hive-1634 (
https://issues.apache.org/jira/browse/HIVE-1634). I have just built from
Hive trunk using HBase 0.90.4 as the version (e.g. we run cdh3u2).
We have an HBase table populated with Bytes, so I create the Hive table
like so:
CREATE EXTERNAL TABLE tim_
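A HIVE-1634-style binary mapping looks roughly like this (table, family, and qualifier names are made up; the `#b` suffix marks the HBase cell as binary-encoded rather than string-encoded):

```sql
CREATE EXTERNAL TABLE hbase_ints(rowkey STRING, count INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:count#b")
TBLPROPERTIES ("hbase.table.name" = "my_hbase_table");
```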
Hi Jason,
I work for an international organization involved in the mobilization of
biodiversity data (specifically we are dealing a lot with observations of
species) so think of it as a lot of point based information with metadata
tags. We have built an Oozie workflow that uses Sqoop to suck in a
Hi all,
I need to perform a lot of "point in polygon" checks and want to use Hive
(currently I mix Hive, Sqoop and PostGIS in an Oozie workflow to do this).
In an ideal world, I would like to create a Hive table from a Shapefile
containing polygons, and then do the likes of the following:
SELECT p.
Hi all,
(cross posted to a few Hadoop mailing lists - apologies for the SPAM)
Are there any users around the Copenhagen area that would like a HUG meetup?
Just reply with +1 and I'll gauge interest. We could probably host a
1/2 or full day if people were coming from Sweden...
We are using Hadoo
Hi all
Can someone please tell me how to achieve the following in a single hive script?
set original_value = mapred.reduce.tasks;
set mapred.reduce.tasks=1;
... do stuff
set mapred.reduce.tasks=original_value;
It is the first and last lines that don't work - is it possible?
Thanks,
Tim
Hi all,
I am using UDFRowSequence as follows:
CREATE TEMPORARY FUNCTION rowSequence AS
'org.apache.hadoop.hive.contrib.udf.UDFRowSequence';
set mapred.reduce.tasks=1;
CREATE TABLE temp_tc1_test
as
SELECT
rowSequence() AS id,
data_resource_id,
local_id,
local_parent_id,
name,
author
FROM n
Hi all,
Sorry if I am missing something obvious but is there an inverse of an explode?
E.g. given t1
ID Name
1 Tim
2 Tim
3 Tom
4 Frank
5 Tim
Can you create t2:
Name   ID
Tim    1,2,5
Tom    3
Frank  4
In Oracle it would be a
select name,collect(id) from t1 group by name
I suspect in Hive
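In Hive, the usual equivalent of Oracle's collect() is the collect_set() UDAF (collect_list(), in later releases, keeps duplicates); concat_ws() turns the resulting array into the comma-separated form shown above. A sketch against the t1 table from the example:

```sql
-- collect_set() returns an array<string> per group;
-- concat_ws() joins it into "1,2,5"-style strings
SELECT name,
       concat_ws(',', collect_set(CAST(id AS STRING))) AS ids
FROM t1
GROUP BY name;
```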
What about the count or max?
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCount.java
http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFMax.java
I've not used UDAFs, but I only got
Does it need to be a sequential INT? If not, then a UUID works very well.
Cheers,
Tim
On Tue, Nov 16, 2010 at 8:55 AM, afancy wrote:
> Hi, Zhang,
> How to integrate this snowflake with Hive? Thanks!
> Regards,
> afancy
>
> On Mon, Nov 15, 2010 at 10:35 AM, Jeff Zhang wrote:
>>
>> Please refer
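For the UUID approach mentioned above, one way (in Hive versions that ship the reflect() UDF) is to call into java.util.UUID directly:

```sql
-- one random UUID per row; not sequential, but globally unique
SELECT reflect('java.util.UUID', 'randomUUID') AS id, name
FROM t1;
```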
Is there a maximum limit of 10 fields in a UDTF?
The following is always giving issues:
public void process(Object[] args) throws HiveException {
...
((LazyInteger) args[10]).getWritableObject().get();
I am trying to do:
create table density_cluster_ungrouped
as select taxonDensityUDTF(king
_id) as (p,k) ...
>
> -----Original Message-----
> From: Tim Robertson [mailto:timrobertson...@gmail.com]
> Sent: Monday, November 08, 2010 5:53 AM
> To: user@hive.apache.org
> Subject: Re: Only a single expression in the SELECT clause is supported with
> UDTF's
>
> Thank you onc
taxonId,tileX,tileY,zoom,clusterX,clusterY,count
group by taxonId,tileX,tileY,zoom,clusterX,clusterY;
Thanks again for the pointers Sonal and Namit, and also on the other thread,
Tim
On Mon, Nov 8, 2010 at 9:17 AM, Tim Robertson wrote:
> I am writing a GenericUDTF now, but notice on
>
i.apache.org/hadoop/Hive/LanguageManual/UDF#UDTF. I think you
> should be able to use lateral view in your query.
>
> Thanks and Regards,
> Sonal
>
> Sonal Goyal | Founder and CEO | Nube Technologies LLP
> http://www.nubetech.co
> http://code.google.com/p/hiho/
>
>
>
Hi all,
I am trying my first UDTF, but can't seem to get it to run. Can
anyone spot anything wrong with this please:
hive> select taxonDensityUDTF(kingdom_concept_id, phylum_concept_id)
as p,k from temp_kingdom_phylum;
FAILED: Error in semantic analysis: Only a single expression in the
SELECT cl
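The usual workarounds for that error are to make the UDTF the only expression in the SELECT list (with the multiple aliases in parentheses), or to use LATERAL VIEW, roughly:

```sql
-- UDTF as the sole SELECT expression; note the parenthesized aliases
SELECT taxonDensityUDTF(kingdom_concept_id, phylum_concept_id) AS (p, k)
FROM temp_kingdom_phylum;

-- or via LATERAL VIEW, which also allows other columns alongside
SELECT t.p, t.k
FROM temp_kingdom_phylum
LATERAL VIEW taxonDensityUDTF(kingdom_concept_id, phylum_concept_id) t AS p, k;
```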
ht be worth
fixing that page.
Cheers,
Tim
On Mon, Nov 8, 2010 at 7:35 AM, Tim Robertson wrote:
> Thank you both,
>
> A quick glance looks like that is what I am looking for. When I get
> it working, I'll post the solution.
>
> Cheers,
> Tim
>
> On Mon, Nov
input?
>
> http://wiki.apache.org/hadoop/Hive/DeveloperGuide/UDTF
>
> Thanks and Regards,
> Sonal
>
> Sonal Goyal | Founder and CEO | Nube Technologies LLP
> http://www.nubetech.co | http://in.linkedin.com/in/sonalgoyal
>
>
>
>
>
> On Mon, Nov 8, 2010 at 2:31 A
Hi all,
I am porting custom MR code to Hive and have written working UDFs
where I need them. Is there a work around to having to do this in
Hive:
select * from
(
select name_id, toTileX(longitude,0) as x, toTileY(latitude,0) as
y, 0 as zoom, funct2(longitude, 0) as f2_x, funct2(latitude,0)
Please try this in Hive:
select distinct a.id from tableA a LEFT OUTER join tableB b on
a.id=b.id where b.id is null
Cheers,
Tim
On Wed, Nov 3, 2010 at 1:19 PM, Tim Robertson wrote:
> In SQL you use a left join:
>
> # so in mysql:
> select distinct a.id from tableA a left join tabl
In SQL you use a left join:
# so in mysql:
select distinct a.id from tableA a left join tableB b on a.id=b.id
where b.id is null
Not sure exactly how that ports to Hive, but it should be something
along those lines.
HTH,
Tim
On Wed, Nov 3, 2010 at 1:13 PM, איל (Eyal) wrote:
> Hi,
>
> I have a
Thanks Edward. I'll poke around there.
On Tue, Nov 2, 2010 at 6:40 PM, Edward Capriolo wrote:
> On Tue, Nov 2, 2010 at 12:47 PM, Tim Robertson
> wrote:
>> Hi all,
>>
>> Is the following a valid UDF please?
>>
>> When I run it I get the fo
Hi all,
Is the following a valid UDF please?
When I run it I get the following so I presume not:
hive> select toGoogleCoords(latitude,longitude,1) from
raw_occurrence_record limit 100;
FAILED: Error in semantic analysis:
java.lang.IllegalArgumentException: Error: name expected at the
position 7 o
That's right. Hive can use an HBase table as an input format to the
Hive query regardless of output format, and can also write the output
to an HBase table regardless of the input format. You can also
supposedly do a join in Hive that uses 1 side of the join from an
HBase table, and the other sid
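As a sketch of such a mixed-storage join (both table names hypothetical; the HBase-backed table is one created with HBaseStorageHandler):

```sql
-- one side HBase-backed, one side a native Hive table
SELECT h.rowkey, n.label
FROM hbase_backed_table h
JOIN native_hive_table n ON (h.rowkey = n.id);
```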