"There is a lengthy discussion, but it's unclear how to actually drop the
darn table. In fact, that page"
Dumb workaround:
You could go directly to MySQL and delete the table...
Or make a serde with that name just so you will not get the class-not-found
error :)
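If you do go the direct-to-MySQL route, it would look something like this (a
sketch against the stock metastore schema; the database and table names are
hypothetical, foreign keys may force you to remove dependent rows such as
TABLE_PARAMS first, and you should back up the metastore before touching it):

    -- find the metastore row for the table that won't drop
    SELECT t.TBL_ID
      FROM TBLS t JOIN DBS d ON t.DB_ID = d.DB_ID
     WHERE d.NAME = 'default' AND t.TBL_NAME = 'broken_table';

    -- then delete it (plus any dependent rows the FK checks complain about)
    DELETE FROM TBLS WHERE TBL_ID = 1234;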
On Wed, Jun 11, 2014 at 9:59
Searching this list will in fact show you're not alone. What is being
done about it is another matter.
On Wed, Jun 11, 2014 at 10:42 AM, Benjamin Bowman wrote:
> All,
>
> I am running Hadoop 2.4 and Hive 0.13. I consistently run out of Hive
> heap space when running for a long period of time
Hi,
I am trying to run a simple join query on Hive 0.13.
Both tables are in text format. Both tables are read in the mappers, and the
error is thrown in a reducer. I don't get why a reducer would be reading a
table when the mappers have already read it, and the reason for assuming that
the video file is in S
All,
I am running Hadoop 2.4 and Hive 0.13. I consistently run out of Hive heap
space when running for a long period of time. If I bump up the heap memory,
it will run longer, but it still eventually throws an out-of-memory error
and becomes unresponsive. The memory usage has a clearly linear trend
Hi Rajesh,
Interestingly enough, we don't specify that particular value. At least,
doing a "set -v" on the Hive CLI doesn't show it. Here are the various
datanucleus properties we do have set:
datanucleus.autoCreateSchema=true
datanucleus.autoStartMechanismMode=checked
datanucleus.cache.level2=fa
That's beside the point. The question is: why can't I drop the table? There
is no excuse for refusing to drop a table just because some serde can't be
found. It shouldn't work that way at all.
Thanks.
On Jun 10, 2014, at 23:33 , Nitin Pawar wrote:
> if you have added a table with a serde def
Yeah, that doesn't work. Hive gives clear failure-to-find-serde errors when
running queries against such a table. The serde unquestionably resides at the
local file system path I specified to "add jar". It just doesn't work.
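For the record, the pattern I'm describing is roughly this (the jar path and
serde class name here are made up, not the real ones):

    ADD JAR /tmp/my-serde.jar;
    CREATE TABLE t (col STRING)
      ROW FORMAT SERDE 'com.example.MySerDe';
    -- SELECTs against t (and even DROP TABLE t) then complain that the
    -- serde class cannot be found, even though the jar was just added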
On Jun 10, 2014, at 23:33 , Nitin Pawar wrote:
> if you have added a table with a serde def
Thanks for your answer,
but my line is complicated (I don't have a simple separator, and it is
hard to do with a simple split, so I am testing whether I can do it with a
nested split, something like the sketch below); otherwise
I will have to extend this UDF and write a personalized UDF ;)
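By "nested split" I mean something like this (the column name and the
delimiters are only an example, not my real data):

    -- first split on '||', then split the first piece again on ';'
    SELECT split(split(line, '\\|\\|')[0], ';')[1] FROM logs;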
Regards
2014-06-11 1:17 GMT+02:00 Andre Araujo :
> Unfortunat
Hi Soam,
Can you please provide the value specified for
"datanucleus.connectionPool.maxIdle"?
~Rajesh.B
On Wed, Jun 11, 2014 at 2:26 AM, Soam Acharya wrote:
> Hi Vaibhav,
>
> good question. We're using 0.8.0 RELEASE. Would 0.7.1 be preferable
> instead?
>
> Thanks!
>
> Soam
>
>
> On Tue, Jun