It might be best to use the forum for Impala. I am not sure what replies
you are expecting from the Hive and HBase mailing lists.
https://groups.google.com/a/cloudera.org/forum/#!forum/impala-user
Regards
Bertrand
On Tue, Oct 22, 2013 at 10:13 AM, Garg, Rinku wrote:
> Hi All,
>
>
--
Bertrand Dechoux
que and alone file (given the file name) like (INSERT OVERWRITE LOCAL
>> DIRECTORY '/directory_path_name/')?
>> Thanks for your answers
>>
>>
>
>
> --
> Nitin Pawar
>
--
Bertrand Dechoux
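For reference on the question above: `INSERT OVERWRITE LOCAL DIRECTORY` writes one file per final map or reduce task, and the file names (e.g. `000000_0`) cannot be chosen in HiveQL. A sketch of one common workaround, assuming the goal is a single output file (the table and column names are hypothetical):

```sql
-- Forcing the query through one reducer yields a single output file
-- in the target directory (named 000000_0 by Hive, not user-chosen).
-- ORDER BY is one way to force a single reducer.
INSERT OVERWRITE LOCAL DIRECTORY '/directory_path_name/'
SELECT *
FROM some_table
ORDER BY some_column;
```

Renaming the resulting file to a specific name has to happen outside HiveQL, e.g. with a shell `mv`.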
files and instruct the enclosing query to execute these files.
> This way these subqueries can potentially be reused by other questions or
> just run by themselves.
>
> Thanks,
> Sha Liu
>
--
Bertrand Dechoux
--
Bertrand Dechoux
a per-file basis is something I'd
>>> like to avoid if at all possible.
>>>
>>> All the Hive settings that we tried only got me as far as raising the
>>> number of mappers from 5 to 6 (yay!), whereas I would have needed at least
>>> ten times more.
>>>
>>> Thanks!
>>>
>>> D.Morel
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
--
Bertrand Dechoux
that were added to
>>> a table. How would Hive tell which row to append by each value of the
>>> newly
>>> added columns? Does it do column-name matching?
>>>
>>> Sincerely,
>>> Younos
>>>
>>
>
>
> Best regards,
> Younos Aboulnaga
>
> Masters candidate
> David Cheriton school of computer science
> University of Waterloo
> http://cs.uwaterloo.ca
>
> E-Mail: younos.abouln...@uwaterloo.ca
> Mobile: +1 (519) 497-5669
>
--
Bertrand Dechoux
Bertrand
On Sat, Nov 24, 2012 at 10:20 PM, Dalia Sobhy wrote:
> Dear all,
>
> I want to run Java code on top of Hadoop on an Ubuntu server. Does anyone
> know the commands?
>
>
--
Bertrand Dechoux
configuration parameter to do the same?
>
> PS: The data is in S3 and running HIVE on AWS EMR infrastructure in
> interactive mode.
>
> Thank You,
> Chunky.
>
>
--
Bertrand Dechoux
e does is not embedded. I have described how I accessed hive
> from jython at
> http://csgrad.blogspot.in/2010/04/to-use-language-other-than-java-say.html.
> The example there may be relevant for your use case.
>
> Dilip
>
>
> On Sun, Sep 30, 2012 at 8:05 AM, Bertrand Dechoux
local standalone server?
Regards
Bertrand Dechoux
va:102)
>> >>at org.apache.hadoop.hive.hwi.HWIServer.main(HWIServer.java:132)
>> >>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>at java.lang.reflect.Method.invoke(Method.java:597)
>> >>at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> >> 12/09/27 11:05:02 INFO mortbay.log: Started
>> SocketConnector@0.0.0.0:
>> >>
>> >> Does someone have an idea where it comes from, and how to fix it?
>> >>
>> >> Thanks for your help,
>> >>
>> >> Regards,
>> >>
>> >> Germain Tanguy.
>>
>>
>
>
> --
> Bertrand Dechoux
>
>
>
--
Bertrand Dechoux
be the reason for this.
>
> Regards
> Abhi
>
>
> Sent from my iPhone
--
Bertrand Dechoux
<property>
>   <name>javax.jdo.option.ConnectionDriverName</name>
>   <value>org.apache.derby.jdbc.ClientDriver</value>
>   <description>Driver class name for a JDBC metastore</description>
> </property>
>
>
> 2) I tried hadoop-0.22.0 with hive-0.8.0. It always throws a shims
> issue... Can you please tell me how to overcome this?
>
> Thanks,
> B Anil Kumar.
>
--
Bertrand Dechoux
On Sep 25, 2012, at 1:15 PM, Haijia Zhou <leons...@gmail.com> wrote:
>> >
>> > https://cwiki.apache.org/Hive/hiveclient.html#HiveClient-JDBC
>> >
>> >
>> > On Tue, Sep 25, 2012 at 1:13 PM, Abhishek < abhishek.dod...@gmail.com>
>> > wrote:
>> >
>> >
>> > Hi all,
>> >
>> > Is there any way to connect to Hive server through an API?
>> >
>> > Regards
>> > Abhi
>> >
>> >
>> >
>> > Sent from my iPhone
>> >
>>
>
>
>
> --
> _
> Dilip Antony Joseph
> http://csgrad.blogspot.com
> http://www.marydilip.info
>
--
Bertrand Dechoux
TaskLogProcessor.java:120)
>> ... 3 more
>> FAILED: Execution Error, return code 2 from
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>> MapReduce Jobs Launched:
>> Job 0: Map: 1 HDFS Read: 0 HDFS Write: 0 FAIL
>> Total MapReduce CPU Time Spent: 0 msec
>>
>> Thanks for your help in advance :)
>>
>> Thanks & Regards,
>>
>> Manu
>>
>
>
>
> --
> *Thanks & Regards,*
> *Tamil*
>
>
--
Bertrand Dechoux
.8.1.jar
> -rw-rw-r-- 1 root root 1765743 Jan 26 2012 hive-metastore-0.8.1.jar
> -rw-rw-r-- 1 root root 14081 Jan 26 2012 hive-pdk-0.8.1.jar
> -rw-rw-r-- 1 root root 509488 Jan 26 2012 hive-serde-0.8.1.jar
> -rw-rw-r-- 1 root root 174445 Jan 26 2012 hive-service-0.8.1.jar
> -rw-rw-r-- 1 root root 110154 Jan 26 2012 hive-shims-0.8.1.jar
> -rw-rw-r-- 1 root root 15260 Jan 24 2012 javaewah-0.3.jar
> -rw-rw-r-- 1 root root 198552 Dec 24 2009 jdo2-api-2.3-ec.jar
>
>
> Please suggest regarding
>
> Thanks & regards
> Yogesh Kumar
>
> --
> Swarnim
>
>
>
>
> --
> Swarnim
>
--
Bertrand Dechoux
Sent from my iPhone
>> >
>> > On Sep 14, 2012, at 10:34 AM, Bharath Mundlapudi
>> wrote:
>> >
>> >> Hello Community,
>> >>
>> >> Is there any document/blog comparing different features offered by Pig
>> 0.8 (0.9, 0.10) or greater and Hive 0.8 (0.9)?
>> >>
>> >> -Bharath
>>
>
>
>
> --
> "...:::Aniket:::... Quetzalco@tl"
>
>
--
Bertrand Dechoux
is there a way to do it
> in one query?
> I know this query works in MySQL, but not Hive.
> select
> userType
> , count(1)/(select count(1) from some_table)
> from
> some_table
> group by
> userType
>
--
Bertrand Dechoux
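For reference on the question above: Hive (unlike MySQL) does not allow a subquery in the SELECT list, but the same per-group fraction can be computed by joining against a one-row subquery in the FROM clause. A sketch reusing the names from the question; note that the explicit CROSS JOIN syntax requires Hive 0.10 or later:

```sql
SELECT t.userType,
       t.cnt / total.n AS fraction
FROM (SELECT userType, COUNT(1) AS cnt
      FROM some_table
      GROUP BY userType) t
CROSS JOIN (SELECT COUNT(1) AS n
            FROM some_table) total;
```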
--
Bertrand Dechoux
ry on Hive on top of 90 million records that took 12 minutes to
> > execute, and the same query on SQL Server took 8 minutes. My question is
> > how can I make Hadoop's performance better? What configurations will
> > improve the latency?
> >
> > Thanks & Regards
> > Prabhjot
>
--
Bertrand Dechoux
python from hive. Now looking for Java
> methods from within a Hive query. Is a UDF the only option to achieve it?
> If any other option is available, then please guide me with links or
> examples.
>
>
> Thanks & Regards,
> Tamil
>
>
--
Bertrand Dechoux
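For reference on the question above: besides writing a custom UDF, Hive ships a built-in `reflect()` UDF (available since roughly Hive 0.7) that invokes a static Java method by reflection. A minimal sketch, with a hypothetical table name:

```sql
-- Calls java.lang.Math.max(3, 7) via reflection for each row
SELECT reflect('java.lang.Math', 'max', 3, 7)
FROM some_table
LIMIT 1;
```

This only covers methods reachable by reflection with simple argument types; anything more involved still needs a UDF.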
>> users to a production environment which is tightly firewalled? SSH is
>> not a viable solution in my context and the Hive web interface does not
>> seem mature enough.
>>
>
> I recommend taking a look at the Beeswax web interface for Hive. More
> details (including screenshots) are available here:
> https://ccp.cloudera.com/display/CDHDOC/Beeswax
>
> Thanks.
>
> Carl
>
>
--
Bertrand Dechoux
t of box) as there are
>> no userid and password restrictions. On the concurrency part, it is
>> single-threaded: one query gets executed after the other.
>>
>>
>> Thanks,
>>
>> Ranjith
>>
>>
>> *Fr
Hi,
I would like to have more information about this specific sentence from the
documentation.
"HiveServer can not handle concurrent requests from more than one client."
https://cwiki.apache.org/Hive/hiveserver.html
Does it mean it is not possible with this server to provide a JDBC access
to an '
>>> 1,694,531,584 Total committed heap usage (bytes)990,052,352
>>>
>>> The tasktracker log gives a thread dump at that time but no exception.
>>>
>>> *2012-08-23 20:05:49,319 INFO org.apache.hadoop.mapred.TaskTracker:
>>> Process Thread Dump: lost task*
>>> *69 active threads*
>>>
>>> ---
>>> Thanks & Regards
>>> Himanish
>>>
>>
>>
>>
>> --
>> "The whole world is you. Yet you keep thinking there is something else."
>> - Xuefeng Yicun 822-902 A.D.
>>
>> Tim R. Havens
>> Google Phone: 573.454.1232
>> ICQ: 495992798
>> ICBM: 37°51'34.79"N 90°35'24.35"W
>> ham radio callsign: NW0W
>>
>
>
--
Bertrand Dechoux
there is no update.
> >
> > What's the best way to do this using Hive? Looking forward to hearing the
> suggestions.
> >
> > Thanks
>
>
--
Bertrand Dechoux
12-07-19 12:09:56 | 111026
>> >> | 505 | 2012-07-17 12:06:40 | 111024
>> >> | 505 | 2012-07-17 12:06:40 | 111024
>> >> | 505 | 2012-07-17 12:09:15 | 111024
>> >> | 504 | 2012-07-18 00:03:18 | 101020
>> >> | 504 | 2012-07-18 00:15:41 | 101020
>> >>
>> >>
>> >> As we cannot use >=, <= in Hive joins, the BETWEEN logic cannot be
>> >> implemented in joins. Is there any way to accomplish this, or do we
>> >> need to write custom M/R code for it? Looking forward to any
>> >> suggestions to accomplish this.
>> >>
>> >> --
>> >> Thanks & Regards
>> >> Himanish
>> >
>> >
>>
>>
>>
>> --
>> Thanks & Regards
>> Himanish
>>
>
>
--
Bertrand Dechoux
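For reference on the question above: Hive at the time only accepted equality conditions in the JOIN ... ON clause, but for an inner join the range predicate can simply move into WHERE. A sketch with hypothetical table and column names standing in for the thread's data:

```sql
-- Equality part stays in ON; the BETWEEN-style range moves to WHERE.
-- This is equivalent only for INNER joins; for OUTER joins, filtering
-- in WHERE changes the semantics.
SELECT a.id, a.event_ts, b.code
FROM events a
JOIN ranges b ON (a.id = b.id)
WHERE a.event_ts >= b.start_ts
  AND a.event_ts <= b.end_ts;
```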
ich
> one should we use in general and why?
> Given that I am on Hive 0.6 version.
>
>
>
> *Raihan Jamal*
>
--
Bertrand Dechoux
MapReduce job goes through serialization and deserialization of objects.
> Isn't it an overhead?
>
> Store data in a smarter way? Can you please elaborate on this?
>
> Regards
> Sudeep
>
> On Tue, Aug 14, 2012 at 11:39 AM, Bertrand Dechoux wrote:
>
>> You may want to
>> Hi all,
>>
>> How to avoid serialization and deserialization overhead in a Hive join
>> query? Will this optimize my query performance?
>>
>> Regards
>> sudeep
>>
>
>
--
Bertrand Dechoux
--
Bertrand Dechoux
On Thu, Aug 9, 2012 at 11:08 PM, wrote:
> Thanks Guys, it worked.
>
>
> *From:* ext Bertrand Dechoux [mailto:decho...@gmail.com]
> *Sent:* Thursday, August 09, 2012 5:03 PM
> *To:* user@hive.apache.org
> *Subject:* Re: Nested Select Statements
>
ws a parse error as the variable gets substituted by
> variable. So I have three questions.
> 1. What is wrong with the above queries?
> 2. Is there another way to find number of rows in a table?
> 3. Is there a better way for what I am trying to do?
> Thanks,
> Richin
>
>
>
--
Bertrand Dechoux
e two rounds of
> mapreduce to do the job.
>
> I just tried the buffer-in-mapper approach; the number of map output
> records matches with Hive. Thank you
>
> On 08/01/2012 11:40 AM, Bertrand Dechoux wrote:
>
> I am not sure about Hive but if you look at Cascading they u
p side aggregation. Hive does
> use writables, sometimes it uses ones from hadoop, sometimes it uses
> its own custom writables for things like timestamps.
>
> On Wed, Aug 1, 2012 at 11:40 AM, Bertrand Dechoux
> wrote:
> > I am not sure about Hive but if you look at Cascading th
String[] sLine = StringUtils.split(value.toString(),
> > StringUtils.ESCAPE_CHAR, HIVE_FIELD_DELIMITER_CHAR);
> > context.write(new MyKey(Integer.parseInt(sLine[0]),
> sLine[1]), new DoubleWritable(Double.parseDouble(sLine[2])));
> > }
> >
> > }
> >
> > I assume hive is doing something similar. Is there any trick in hive to
> speed this thing up? Thank you!
> >
> > Best,
> >
>
--
Bertrand Dechoux
, new DoubleWritable(Double.parseDouble(sLine[2])));
> }
>
> }
>
> I assume hive is doing something similar. Is there any trick in hive to
> speed this thing up? Thank you!
>
> Best,
>
>
--
Bertrand Dechoux
y to implement it.
>
>
> On Thu, Jul 26, 2012 at 11:19 PM, Bertrand Dechoux wrote:
>
>> That's a problem which is hadoop related and not really hive related.
>> The solution is to use only equal (as you know it). For that, you should
>> first extract your real i
;
> But Hive doesn't support "OR" in a left join.
> Table a is huge, and table b has 4 rows now (will increase).
> Is there any other solution to achieve this?
>
> Thanks very much.
>
> --
>
>
--
Bertrand Dechoux
15 PM, Bertrand Dechoux wrote:
> Great answer. Thanks a lot.
>
> 1) I understand the concern with branches but I quickly reviewed the
> change for 0.9.1 and not everything seemed to be a bug patch.
> So I thought: why not ask about HIVE-2910.
>
> 2) I wasn't sure about that, it
better magically, but they
> are again easy low-hanging fruit that almost any coder can take care
> of.
>
> On Wed, Jul 25, 2012 at 11:25 AM, Bertrand Dechoux
> wrote:
> > Hi,
> >
> > Here is my stand. Hive provides a dsl to easily explore data contained in
> >
Hi,
Here is my stand. Hive provides a DSL to easily explore data contained in
Hadoop with limited experience with Java and MapReduce.
And the Hive Web Interface provides an easy exposition: users need only a
browser, and the Hadoop cluster can be well 'fire-walled' because the
communication is only th
NULLNULLNULLNULLNULLNULL
> NULLNULLNULLNULLNULLNULLNULLNULL
> NULL NULL NULL NULLNULLNULLNULLNULLNULL
> NULLNULLNULLNULLNULLNULLNULLNULL
> Please help me with this issue. Have I missed anything?
>
> Thanks,
> Prabhu.
>
--
Bertrand Dechoux
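For reference on the thread above: a query returning all NULLs very often means the table's declared delimiter does not match the delimiter actually used in the data files, so every line parses into fields that cannot be converted. A sketch of a matching table definition (the table, columns, and comma delimiter are hypothetical):

```sql
CREATE TABLE page_views (user_id INT, url STRING, view_ts STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','    -- must match the actual file delimiter
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
```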
Bertrand
On Wed, Jul 25, 2012 at 10:51 AM, Bertrand Dechoux wrote:
> @Puneet Khatod: I found that out, and that's why I am asking here. I
> guess non-AWS users might have the same problems and a way to solve them.
>
> @Ruslan Al-fakikh : It seems great. Is there any documentat
e ‘recover partitions’ to enable
> > all top level partitions.
> >
> > This will add required dynamicity.
> >
> >
> >
> > Regards,
> >
> > Puneet Khatod
> >
> >
> >
> > From: Bertrand Dechoux [mailto:decho...@gmail.com]
> >
like a hack.
Is there any clean way to do it?
Regards,
Bertrand Dechoux