Hello,
I am using older versions of the following stack components:
hadoop-3.3.6
hive-3.1.3
spark-3.4.4
Should I upgrade them to the latest releases, including:
hadoop-3.4
hive-4.0
spark-3.5
Thanks for your suggestions.
On Tue, Apr 28, 2020, 1:23 AM Deepak Krishna wrote:
> Hi team,
>
> We came across a bug related to the count function. We are using hive
> 3.0.0.3.1 with Tez 0.9.0.3.1. PFA the queries to replicate the issue.
>
> Please register this as a bug and let us know if we can support in any way.
>
+1
On Wed, Apr 20, 2016 at 1:24 AM, Jimmy Xiang wrote:
> +1
>
> On Tue, Apr 19, 2016 at 2:58 PM, Alpesh Patel
> wrote:
> > +1
> >
> > On Tue, Apr 19, 2016 at 1:29 PM, Lars Francke
> > wrote:
> >>
> >> Thanks everyone! Vote runs for at least one more day. I'd appreciate it
> if
> >> you could p
logged it as a bug [2], which also details exactly my procedure, but I
wonder if someone could confirm that they also see this, or if perhaps I am
just doing something wrong and it works for them?
Thanks all,
Tim
[1]
https://community.hortonworks.com/articles/2745/creating-hbase-hfiles-from-an
Hi Lefty,
I came across those documents as well. They gave me some good hints, but
were in some places too specific to Hortonworks. For the record, I did
get Hive 0.13.1 and Tez 0.4.1 working together (and I immediately saw a
200% speed-up on my corpus of queries). For future travelers, here a
ocuments/HDP2/HDP-2.1.2/bk_installing_manually_book/content/rpm-chap-tez_configure_tez.html
Tim
From: Alexander Alten-Lorenz <wget.n...@gmail.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>, Alexander Alten-Lorenz
<wget
Hi all,
Is there a wiki page somewhere that shows how to turn on Tez for Hive?
I found "hive.execution.engine" in hive-default.xml.template. But I'm sure
there must be more. Do I have to install Tez separately?
Thanks,
Tim
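If it helps, the basic switch is a single property; this sketch assumes Tez and its dependencies are already installed and visible to Hive:

```sql
-- Switch the current session to Tez (hive-site.xml can set this globally)
SET hive.execution.engine=tez;

-- Switch back to classic MapReduce
SET hive.execution.engine=mr;
```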
run? I'm not even sure these
parameters are getting picked up correctly, other than when I change them it
may make the job fail in a different way than it did with the prior setting. Yuck!
Thanks,
Tim
From: Bala Krishna Gangisetty <b...@altiscale.com>
Reply-To: "us
s, Fetched: 278 row(s)
From: Hari Subramaniyan <hsubramani...@hortonworks.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>
Date: Tuesday, July 8, 2014 2:12 PM
To: "user@hive.apache.org
Hi,
I asked a question on Stack Overflow
(http://stackoverflow.com/questions/24621002/hive-job-stuck-at-map-100-reduce-0)
which hasn't seemed to get much traction, so I'd like to ask it here
as well.
I'm running hive-0.12.0 on hadoop-2.2.0. After submitting the query:
select i_item_desc
,i_ca
Or setting reducers to 1 and doing a GROUP BY all columns forces a single file
too.
Tim,
Sent from my iPhone (which makes terrible auto-correct spelling mistakes)
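A minimal sketch of that approach, with illustrative table and column names:

```sql
-- One reducer means one output file
SET mapred.reduce.tasks=1;

INSERT OVERWRITE DIRECTORY '/tmp/single_file_out'
SELECT col_a, col_b
FROM source_table
GROUP BY col_a, col_b;  -- grouping by all selected columns forces a reduce phase
```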
> On 21 Nov 2013, at 18:27, Eric Chu wrote:
>
> Hi,
>
> We often have map-only queries that result in a large
Hey Eric
I know this isn't the fix you're looking for but in the spirit of pragmatic
workarounds... What happens if you CREATE TABLE copy AS SELECT * FROM orig?
I used to use that with very early Hue versions.
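In sketch form (orig and copy are the names from the suggestion above):

```sql
-- Rewrites the data through a fresh job, consolidating the map-only output
CREATE TABLE copy AS SELECT * FROM orig;
```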
Cheers,
Tim,
Sent from my iPhone (which makes terrible auto-correct spelling mis
Hi Jie,
Can you compile Hive successfully now? You need to modify some settings
according to your error messages.
Maybe you can use a release version to avoid the error.
Tim
2013/11/5 金杰
> I got it.
>
> I need to run "mvn install -DskipTests" before I run "mvn in
Dear Hivers,
I am a user of Hive (on the Hive-Tez branch). I want to run Hive on
Hadoop-3.0.0-SNAPSHOT.
I changed the Hadoop version when I compiled Hive, but it still doesn't
work with Hadoop-3.0.0-SNAPSHOT.
I don't know what to do in this situation. Can someone help me?
Thanks,
Tim
Here is an example of a no arg that will return a different value for each
row:
https://code.google.com/p/gbif-occurrencestore/source/browse/trunk/occurrence-store/src/main/java/org/gbif/occurrencestore/hive/udf/UuidUDF.java
Hope this helps,
Tim
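For reference, registering and calling such a UDF looks roughly like this; the jar path is illustrative, and the class name is the one from the link above:

```sql
ADD JAR /path/to/occurrence-store.jar;
CREATE TEMPORARY FUNCTION uuid_gen AS 'org.gbif.occurrencestore.hive.udf.UuidUDF';

-- Because the UDF is non-deterministic, each row gets a fresh value
SELECT uuid_gen(), name FROM some_table;
```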
On Mon, Sep 30, 2013 at 10:59 PM, Yang wrote
That class is:
https://code.google.com/p/gbif-occurrencestore/source/browse/trunk/occurrence-store/src/main/java/org/gbif/occurrencestore/hive/udf/UDFRowSequence.java
Cheers,
Tim
On Mon, Sep 30, 2013 at 10:55 PM, Tim Robertson
wrote:
> It's been ages since I wrote one, but the differ
urns a generated row sequence number starting from
1")
@UDFType(deterministic = false)
public class UDFRowSequence extends UDF {
Hope this helps!
Tim
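Hooking it up would look something like this (jar path and table name are illustrative; the class is the one linked earlier):

```sql
ADD JAR /path/to/occurrence-store.jar;
CREATE TEMPORARY FUNCTION row_sequence AS 'org.gbif.occurrencestore.hive.udf.UDFRowSequence';

-- Numbers rows 1, 2, 3, ... within each task; note this is not a global
-- sequence when the query runs with more than one mapper/reducer
SELECT row_sequence(), name FROM some_table;
```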
On Mon, Sep 30, 2013 at 10:47 PM, Yang wrote:
> I wrote a super simple UDF, but got some errors:
>
> UDF:
>
> package yy;
>
Executing "hive -e 'select * from tablename'" gives me back all my sample rows
without skipping one.
Thanks
Tim
From: Sanjay Subramanian [mailto:sanjay.subraman...@wizecommerce.com]
Sent: Wednesday, May 22, 2013 7:16 PM
To: user@hive.apache.org
Subject: Re: Hive skipping first line
4.2.0) with the
"hiveClient.fetchN(rowncount)" command, it seems like it always skips the
first line of data (perhaps it's expecting a header row?).
How can I avoid this?
Greetings,
Tim Bittersohl
Hi,
the problem is solved with your solution to set the Hive input format.
After some cleaning and debugging of the class execution it suddenly worked.
There must have been a problem in the build process...
Thanks a lot
Tim
From: shrikanth shankar [mailto:sshan...@qubole.com
org.apache.hadoop.hive.ql.exec.MapRedTask
From: shrikanth shankar [mailto:sshan...@qubole.com]
Sent: Thursday, April 18, 2013 5:32 PM
To: user@hive.apache.org
Subject: Re: Hive query problem on S3 table
Tim,
Could you try doing
set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat
sure what's happening here, but one suggestion; use s3n://...
instead of s3://... The "new" version is supposed to provide better
performance.
dean
On Thu, Apr 18, 2013 at 8:43 AM, Tim Bittersohl wrote:
Hi,
I just found out, that I don't have to change the defa
]
Sent: Thursday, April 18, 2013 4:18 PM
To: user@hive.apache.org
Subject: Re: Hive query problem on S3 table
This means.. it is still not looking in S3...
On Apr 18, 2013 3:44 PM, "Tim Bittersohl" wrote:
Hi,
I just found out, that I don't have to change the default
ppose this is because the server tries to use the S3 space to perform
the job or create job information.
How can I solve this?
Thanks
Tim
4:58 PM, Tim Bittersohl wrote:
Hi,
I have the following problem monitoring my Hive queries in Hadoop.
I create a server using the Hive library which connects to a Hadoop cluster
(file system, job tracker and Hive metastore are set up on this cluster).
The needed parameters for the Hive
t this property mapred.job.name and this should set the name for
the job
On Thu, Feb 28, 2013 at 8:26 PM, Tim Bittersohl wrote:
Thanks for the response,
I also found no way to access the job id via java thrift client, all I can
get is a query ID by the query planner.
How to set the name of a j
Thanks for the response,
I also found no way to access the job id via java thrift client, all I can
get is a query ID by the query planner.
How do I set the name of the job a Hive query is fired with, so I can find
it in the job tracker later?
Tim Bittersohl
Software Engineer
I use Java and the HiveClient of the Hive library (version 0.10.0).
Tim Bittersohl
Software Engineer
http://www.innoplexia.de/ci/logo/inno_logo_links%20200x80.png
Innoplexia GmbH
Mannheimer Str. 175
69123 Heidelberg
Tel.: +49 (0) 6221 7198033
Fax: +49 (0) 6221 7198034
Web
I'm trying to get the job id of the job created with a Hive query.
At the moment I can get the cluster status from the HiveClient, but I don't
find any job id in there...
Tim Bittersohl
Software Engineer
http://www.innoplexia.de/ci/logo/inno_logo_links%20200x80.png
Innoplexia GmbH
Hi,
does the Hive client have the possibility to return the job id of the job
created when running a query? I need that for tracking.
Greetings
Tim Bittersohl
Software Engineer
http://www.innoplexia.de/ci/logo/inno_logo_links%20200x80.png
Innoplexia GmbH
Mannheimer Str. 175
69123
What version of Hive are you using? IIRC we saw this on 0.7 and that prompted
us to move to 0.9. How are you inserting? Are you sure they are serialized ints
and not strings? Check using the HBase shell.
Cheers,
Tim,
Sent from my iPhone (which makes terrible auto-correct spelling mistakes)
On 26 Dec
it.d/hiveserver2. Can someone please tell me what's the
> right way to do this? I mean create a table and then insert values into it!
> The Hive QL statements I use are very similar to the ones in the tutorials
> about loading data.
>
> Cheers!
> -- Younos
>
>
>
>
--
"The whole world is you. Yet you keep thinking there is something else." -
Xuefeng Yicun 822-902 A.D.
Tim R. Havens
Google Phone: 573.454.1232
ICQ: 495992798
ICBM: 37°51'34.79"N 90°35'24.35"W
ham radio callsign: NW0W
anage your own indexes again).
Cheers,
Tim
On Mon, Sep 17, 2012 at 8:07 AM, Something Something <
mailinglist...@gmail.com> wrote:
> Thank you both for the answers. We are trying to find out if Hive can be
> used as a replacement of Netezza, but if there are no indexes then I don
>
> Note: I am a newbie to Hive.
>
> Can someone please answer the following questions?
>
> 1) Does Hive provide APIs (like HBase does) that can be used to retrieve
> data from the tables in Hive from a Java program? I heard somewhere that
> the data can be accessed with JDBC (style) APIs. True
racker log gives a thread dump at that time but no exception.
>
> *2012-08-23 20:05:49,319 INFO org.apache.hadoop.mapred.TaskTracker:
> Process Thread Dump: lost task*
> *69 active threads*
>
> -------
> Thanks & Regards
> Himanish
>
t
search for a table type or some other descriptor.
I've not really poked around in the metastore much ... but that's probably
where Hive would have to look anyway.
Not sure if there are any built in commands for selecting data from the
metastore directly like this...other than things li
What are you trying to accomplish that a method like this won't work for?
On Mon, Jul 2, 2012 at 10:25 PM, Abhishek wrote:
> Hi Tim,
>
> Is this the only way, or if we have any other ways.
>
> Sent from my iPhone
>
> On Jul 2, 2012, at 8:49 PM, Tim Havens wrote:
>
d-site.xml in Hive join
> query.
> Suppose for map reduce job we override using -D , how
> to do it with in hive query.
>
>
are running into the small files problem, there are other ways to get
> around like bucketing.
>
> Good luck!
> Mark
>
> ----- Original Message -----
> From: "Edward Capriolo"
> To: user@hive.apache.org
> Sent: Tuesday, June 19, 2012 11:12:48 AM
> Subject:
So... I have a table that has thousands of files, and billions of rows
related to it.
Let's make this a simple table:
CREATE TABLE test_table (
ts BIGINT,
exec_time DOUBLE,
domain_id BIGINT,
domain_name STRING
)
PARTITIONED BY (logdate STRING, source STRING, datacenter STRING,
hostnam
Oozie for workflow pipelines, manually triggered but would use a cron. Maven
for everything, including packaging oozie stuff. All open source so can point
you at it if you want to poke around?
Tim,
Sent from my iPhone (which makes terrible auto-correct spelling mistakes)
On 26 May 2012, at 16
I frequently sort by partitioned columns without issues. Post your table
schema and the query that's failing; let's see what's going on.
Tim
On Mon, May 14, 2012 at 1:28 AM, Shin Chan wrote:
> Hi All
>
> Just curious if its possible to Order by or Sort by partitioned colu
I'm running Hive 0.7 and
0.9 trunk built about 2 weeks ago.
Many thanks!
Tim
estart.
I would imagine you need to sqoop in again, after you correct this.
HTH,
Tim
On Wed, Apr 18, 2012 at 5:29 AM, Đỗ Hoàng Khiêm wrote:
> HI, I have some problems with Hive, looks like Hive cannot read some of my
> tables which was imported before by Sqoop. After importing from Sqoop i
ic queries should work ok I
would think.
HTH,
Tim
On Wed, Apr 18, 2012 at 9:20 PM, Gopi Kodumur wrote:
> Thanks Tim, Sorry for not explaining the problem clearly...
>
> I have data in this format , I wanted to store the data in Ext-Hive table
> without the Double Quote
>
&
of origination')
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '44' LINES TERMINATED BY '12'
HTH,
Tim
[1] https://cwiki.apache.org/confluence/display/Hive/Tutorial
On Tue, Apr 17, 2012 at 11:20 PM, Gopi Kodumu
Apologies, it does indeed work when you add the correct JARs in Hive.
Tim
On Tue, Apr 17, 2012 at 3:33 PM, Tim Robertson wrote:
> Hi all,
>
> I am *really* interested in Hive-1634 (
> https://issues.apache.org/jira/browse/HIVE-1634). I have just built from
> Hive trunk using
tim_hbase_occurrence WHERE data_resource_id=1081;
...
0 (no records)
Can anyone provide any guidance on this please?
Thanks!
Tim
is of interest,
Tim
On Wed, Apr 11, 2012 at 7:48 PM, Jason Rutherglen <
jason.rutherg...@gmail.com> wrote:
> Dear Hive User,
>
> We want your interesting case study for our upcoming book titled
> 'Programming Hive' from O'Reilly.
>
> How you use Hive, either h
g by political
boundaries etc)
I'd love to hear from anyone who's investigated this or could provide any
advice.
Thanks!
Tim
using Hadoop, Oozie, Hive, Pig, Sqoop in production, and
getting into HBase now, and would like to chat with like minded folks.
Cheers,
Tim
lang.ClassLoader.loadClass(ClassLoader.java:266)
... 18 more
FAILED: Execution Error, return code -101 from
org.apache.hadoop.hive.ql.exec.FunctionTask
On Sun, Jan 22, 2012 at 3:43 PM, Tim Havens wrote:
> Unfortunately the issue appears to be something with the Jar, or my UDF.
>
> What
Unfortunately the issue appears to be something with the Jar, or my UDF.
What I can't seem to resolve is what is causing the -101 Error Code.
Tim
On Sun, Jan 22, 2012 at 3:26 PM, Aniket Mokashi wrote:
> A simplest way would be to put the jar in auxlib directory. That does the
> bot
I have a similar UDF to this one which creates just fine.
I can't seem to resolve what 'return code -101' means with this one, however.
Can anyone tell me what 'return code -101' means?
My StemTermsUDF.jar has the proper classpath for the JWNL jars
already; I'm trying to ensure they're REALLY avai
It should be here:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
For everyone's benefit, the old wiki page you linked to has a link to a page
directory on the new wiki here:
https://cwiki.apache.org/confluence/pages/listpages-dirview.action?key=Hive
Tim
On Tue, J
successive
lines of your data.
Tim
On Sat, Jun 11, 2011 at 12:32 PM, Praveen wrote:
> Do you mean that my UDF would store the timestamp of the current row in a
> static field in the UDF's implementation, and when processing the next row,
> use that field to get the previous row's val
Praveen,
This would be best accomplished with a UDF because Hive does not support
cursors.
Best of luck,
Tim
On Fri, Jun 10, 2011 at 10:29 PM, Praveen Kumar wrote:
> If I have table timestamps:
>
> hive> desc timestamps;
>
> OK
> ts bigint
>
>
> hive> s
Is this functionality handled by ALTER TABLE [name] RECOVER PARTITIONS?
Take a look at this presentation for context:
http://www.slideshare.net/AmazonWebServices/aws-office-hours-amazon-elastic-mapreduce
Best of luck,
Tim
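For anyone not on Elastic MapReduce: RECOVER PARTITIONS is EMR-specific syntax, and stock Hive's rough equivalent is MSCK REPAIR TABLE (table name illustrative):

```sql
-- EMR-flavoured Hive:
ALTER TABLE logs RECOVER PARTITIONS;

-- Stock Hive: scan the table location and add partitions missing from the metastore
MSCK REPAIR TABLE logs;
```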
On Thu, May 19, 2011 at 2:25 AM, Jasper Knulst wrote:
> Hi,
&g
master.
I'm trying now to import each day's server logs one at a time (instead of
importing all logs in one Hive command) to see if that solves my issue with
inconsistent results after mass loading of server logs. I'll post an update
if I find anything useful.
Tim
On Wed, May 1
chever way; all I want is confidence that all of my data has
been properly imported.
Thanks,
Tim
ing
would be so welcome right now.
Tim
On Wed, May 4, 2011 at 2:10 PM, Jonathan Bender
wrote:
> Hey all,
>
> Just wondering if there is native support for input arguments on Hive
> scripts.
>
> eg. $ bin/hive -f script.q
>
> Any documentation I could reference to look into this further?
>
> Cheers,
> Jon
>
2011 05:02 PM
Subject:RE: Number of map reduce jobs generated
In this case would 2 M/R job run faster than one?
From: Tim Kaldewey [mailto:tkal...@us.ibm.com]
Sent: Thursday, March 24, 2011 2:30 PM
To: user@hive.apache.org
Subject: Number of map reduce jobs generated
Hello,
I notice
queries I am looking at.
Thanks
Tim
select /*+ MAPJOIN(Table2) */ sum(t1_10 * t1_12)
from Table1 join Table2 on (Table1.t1_6 = Table2.t2_1)
where Table2.t2_5 = 1234
and Table1.t1_12 between 8 and 10
and Table1.t1_9 < 42;
to explain:
- table 2 is small, thus I choose a map-side (
Hi all
Can someone please tell me how to achieve the following in a single hive script?
set original_value = mapred.reduce.tasks;
set mapred.reduce.tasks=1;
... do stuff
set mapred.reduce.tasks=original_value;
It is the first and last lines that don't work - is it possible?
Thanks,
Tim
Cytospora elaeagni Allesch.
8915168 7 6 Achromadora inflata Abebe & Coomans, 1996
Is there any way to enforce the UDF is called in the reduce?
Thanks,
Tim
Hi all,
Sorry if I am missing something obvious but is there an inverse of an explode?
E.g. given t1
ID Name
1 Tim
2 Tim
3 Tom
4 Frank
5 Tim
Can you create t2:
Name ID
Tim1,2,5
Tom 3
Frank 4
In Oracle it would be a
select name,collect(id) from t1 group by name
I suspect in Hive
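A sketch of the inverse using Hive's built-in aggregate; collect_set drops duplicate values (later Hive versions add collect_list, which keeps them):

```sql
-- t1(id, name) as in the example above
SELECT name, collect_set(id) AS ids
FROM t1
GROUP BY name;
-- e.g. Tim -> [1,2,5] (an array; element order is not guaranteed)
```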
only got my UDTFs working by looking at the
examples in the Hive SVN itself.
HTH,
Tim
On Wed, Dec 8, 2010 at 3:27 PM, Leo Alekseyev wrote:
> I am trying to write a very simple aggregation function which seems
> like an overkill for using GenericUDAF as described on the wiki.
> However, I
Does it need to be a sequential INT? If not, then a UUID works very well.
Cheers,
Tim
On Tue, Nov 16, 2010 at 8:55 AM, afancy wrote:
> Hi, Zhang,
> How to integrate this snowflake with Hive? Thanks!
> Regards,
> afancy
>
> On Mon, Nov 15, 2010 at 10:35 AM, Jeff Zhang w
(kingdom_concept_id, phylum_concept_id,
class_concept_id, order_concept_id,family_concept_id,
genus_concept_id, species_concept_id,nub_concept_id, latitude,
longitude, count, 23) as
(taxonId,tileX,tileY,zoom,clusterX,clusterY,count)
from density_occurrence_grouped;
Thanks,
Tim
_id) as (p,k) ...
>
> -Original Message-
> From: Tim Robertson [mailto:timrobertson...@gmail.com]
> Sent: Monday, November 08, 2010 5:53 AM
> To: user@hive.apache.org
> Subject: Re: Only a single expression in the SELECT clause is supported with
> UDTF's
>
> Thank you onc
taxonId,tileX,tileY,zoom,clusterX,clusterY,count
group by taxonId,tileX,tileY,zoom,clusterX,clusterY;
Thanks again for the pointers Sonal and Namit, and also on the other thread,
Tim
On Mon, Nov 8, 2010 at 9:17 AM, Tim Robertson wrote:
> I am writing a GenericUDTF now, but notice on
>
and not in the close()
which is against the example shipped with hive, but in accord with the
docs which say one must not do this]
Cheers,
Tim
On Mon, Nov 8, 2010 at 2:18 PM, Sonal Goyal wrote:
> Hi Tim,
>
> I guess you are running into limitations while using UDTFs. Check
> http://wik
e
SELECT clause is supported with UDTF's
hive>
Below is my code. Thanks for any pointers,
Tim
@description(
name = "taxonDensityUDTF",
value = "_FUNC_(kingdom_concept_id, phylum_concept_id)"
)
public class
ht be worth
fixing that page.
Cheers,
Tim
On Mon, Nov 8, 2010 at 7:35 AM, Tim Robertson wrote:
> Thank you both,
>
> A quick glance looks like that is what I am looking for. When I get
> it working, I'll post the solution.
>
> Cheers,
> Tim
>
> On Mon, Nov
Thank you both,
A quick glance looks like that is what I am looking for. When I get
it working, I'll post the solution.
Cheers,
Tim
On Mon, Nov 8, 2010 at 6:55 AM, Namit Jain wrote:
> Other option would be to create a wrapper script (not use either UDF or
> UDTF)
> That
this well enough to make sense.
Thanks in advance,
Tim
Please try this in Hive:
select distinct a.id from tableA a LEFT OUTER join tableB b on
a.id=b.id where b.id is null
Cheers,
Tim
On Wed, Nov 3, 2010 at 1:19 PM, Tim Robertson wrote:
> In SQL you use a left join:
>
> # so in mysql:
> select distinct a.id from tableA a left join tabl
In SQL you use a left join:
# so in mysql:
select distinct a.id from tableA a left join tableB b on a.id=b.id
where b.id is null
Not sure exactly how that ports to Hive, but it should be something
along those lines.
HTH,
Tim
On Wed, Nov 3, 2010 at 1:13 PM, איל (Eyal) wrote:
> Hi,
>
&g
Thanks Edward. I'll poke around there.
On Tue, Nov 2, 2010 at 6:40 PM, Edward Capriolo wrote:
> On Tue, Nov 2, 2010 at 12:47 PM, Tim Robertson
> wrote:
>> Hi all,
>>
>> Is the following a valid UDF please?
>>
>> When I run it I get the fo
n 7 of 'struct<>' but '>' is found.
Is it possible to return an Array from a UDF?
Thanks for any pointers,
Tim
public class GoogleTileCoordsUDF extends UDF {
public IntWritable[] evaluate(Text latitude, Text longitude,
IntWritable zoomL
other side a text file, which is very powerful.
I haven't done it myself, but intend to shortly.
HTH,
Tim
On Wed, Oct 13, 2010 at 10:07 PM, Otis Gospodnetic
wrote:
> Hi,
>
> I was wondering how I can query data stored in HBase and remembered Hive's
> HBase
> integration:
&