Hi Felix,
I believe the inability to change the field separator when sinking data to
files is a bug. It has already been fixed in version 0.11.0. See
https://issues.apache.org/jira/browse/HIVE-3682.
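For reference, a minimal sketch of the syntax that HIVE-3682 enables on 0.11.0
or later (the table name src and the output path '/tmp/out' below are just
placeholders):

-- Requires Hive 0.11.0+ (HIVE-3682); earlier versions reject the ROW FORMAT clause here.
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/out'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
SELECT * FROM src;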
On Wed, May 29, 2013 at 4:37 PM, Felix.徐 wrote:
> Hi all,
>
> I am wondering how to change the field separator of INSERT OVERWRITE LOCAL
> DIRECTORY. Does anyone have experience doing this? Thanks!
I've done it this way many times; there must be some error in your script.
You may paste your script here.
2013/5/30 Stephen Sprague
> I think it's a clever idea. Can you reproduce this behavior via a simple
> example and show it here? I ran a test on Hive 0.8.0 and it worked as you
> would expect.
Thanks for helping.
Here is some more data:
create table max_sint_rows (s1 string)
partitioned by (p1 string)
ROW FORMAT DELIMITED
LINES TERMINATED BY '\n';
create table small_table (p1 string)
ROW FORMAT DELIMITED
LINES TERMINATED BY '\n';
alter table max_sint_rows add partition (p1="
What is the data type of the p1 column? I've used Hive with
partitions containing far more than 2 billion rows without running into any
problems like this.
On Wed, May 29, 2013 at 2:41 PM, Gabi Kazav wrote:
> Hi,
>
>
>
> We are working on a Hive DB with our Hadoop cluster.
>
> We are now facing an issue when joining a big partition with more than
> 2^31 rows.
I know of no way to do this purely natively within Hive; however, don't let
that stop you. Enter the transform() function. Write your JSON merge
using Python, Perl, Ruby, or whatever floats your boat.
Don't let the gnarly syntax on this page scare you:
https://cwiki.apache.org/confluence/display
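As an illustrative sketch only (the script name merge_json.py, the table
json_table, and the column names below are placeholders, not from the original
thread):

-- Ship the merge script to the cluster, then stream rows through it.
-- The script reads tab-separated columns on stdin and writes
-- tab-separated output rows on stdout, one per line.
ADD FILE merge_json.py;

SELECT TRANSFORM (id, json_a, json_b)
  USING 'python merge_json.py'
  AS (id, merged_json)
FROM json_table;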
I think it's a clever idea. Can you reproduce this behavior via a simple
example and show it here? I ran a test on Hive 0.8.0 and it worked as you
would expect.
Regards,
Stephen.
hisql>select * from junk;
+-----+
| _c0 |
+-----+
| 1   |
+-----+
1 affected
hisql>insert overwrite table junk sel
Hi,
We are working on a Hive DB with our Hadoop cluster.
We are now facing an issue when joining a big partition with more than 2^31 rows.
When the partition has more than 2147483648 rows (even 2147483649) the output
of the join is a single row.
When the partition has less than 2147483648 rows (event
Hi,
I am a newbie and I don't want to break any layered abstractions.
I am in a situation where I want to be able to examine
the predicate in the query, and if it's a filter that I recognize,
then I would like to use it to cut down on the number of records
processed. In particular I would like to
Hi all,
I have a scenario where I need to remove certain rows from a Hive table. As far as I
understand, Hive doesn't provide that functionality.
So, I'm trying to select the inverse of what I want to delete and overwrite the
table with that. What do you think of this approach?
I tried to do it, but it seems it d
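Roughly, what I'm trying looks like the sketch below (my_table and the
predicate are just placeholders for my real table and delete condition):

-- Keep everything except the rows matching the delete predicate,
-- then overwrite the table with the result.
INSERT OVERWRITE TABLE my_table
SELECT *
FROM my_table
WHERE NOT (status = 'obsolete');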
Peter,
It looks like you are getting the error in the Hive shell? You can control client
memory usage by setting HADOOP_HEAPSIZE in conf/hadoop-env.sh.
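For example (a sketch only; the 2048 MB value is an illustrative placeholder,
tune it for your client machine):

# conf/hadoop-env.sh -- heap size for Hadoop/Hive client JVMs, in MB
export HADOOP_HEAPSIZE=2048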
Thanks,
Jaideep
On Mon, May 27, 2013 at 12:34 AM, Peter Chu wrote:
> Hi, I ran into a memory problem while using Map Join. Errors below, how do
> I increa
Hi all,
I am wondering how to change the field separator of INSERT OVERWRITE LOCAL
DIRECTORY. Does anyone have experience doing this? Thanks!