Qiang,
Good point. Uploaded a new patch.
Thanks!
On Mon, Dec 17, 2012 at 9:14 PM, Qiang Wang wrote:
> "HiveHistory.parseHiveHistory uses BufferedReader.readLine, which takes '\n',
> '\r', and '\r\n' as line delimiters to parse the history file"
>
> And clients may be on a Mac, which takes '\r' as the line delimiter
"HiveHistory.parseHiveHistory uses BufferedReader.readLine, which takes '\n',
'\r', and '\r\n' as line delimiters to parse the history file"
And clients may be on a Mac, which takes '\r' as the line delimiter.
So I think '\r' should also be replaced with a space in HiveHistory.log, so
that HiveHistory.parseHiveHist
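A small Python sketch of the mismatch described above (illustrative only, not Hive's actual code): a reader that treats '\r' as a line delimiter will split a logged value containing one, unless the logger replaces embedded delimiters with spaces first.

```python
# Illustrative only: why a '\r' inside a logged value breaks a reader that
# treats '\n', '\r', and '\r\n' as line delimiters.

def log_entry(value):
    # Mimic the proposed fix: replace embedded line delimiters with spaces
    # so each history entry stays on one physical line.
    return value.replace('\r', ' ').replace('\n', ' ') + '\n'

raw = "QUERY_STRING=select\rcount(*) from t"

# Without sanitizing, a readLine-style parser sees two records:
unsanitized = (raw + '\n').splitlines()
assert unsanitized == ["QUERY_STRING=select", "count(*) from t"]

# With sanitizing, the entry survives as a single record:
sanitized = log_entry(raw).splitlines()
assert sanitized == ["QUERY_STRING=select count(*) from t"]
```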
Looks like a bug to me. This is the original JIRA that introduced this change:
https://issues.apache.org/jira/browse/HIVE-176
I don't think that, back in the day, we really cared about clients being on Windows.
In any case, thanks for filing the JIRA; I have uploaded a patch which
I think doesn't break
That's not true; you don't need to restart the cluster.
Changing the client-side mapred-site.xml is the correct solution.
HTH,
+Vinod
On Dec 14, 2012, at 9:10 AM, Ted Reynolds wrote:
> Hi Krishna,
>
> You can also set these properties in the mapred-site.xml, but this would
> require a restart
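For example, a client-side override might look like the sketch below (mapred.reduce.tasks is just a placeholder property; substitute whichever setting you are actually tuning):

```xml
<!-- client-side mapred-site.xml: overrides apply per job submission,
     no cluster restart needed; the property shown is only an example -->
<configuration>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>8</value>
  </property>
</configuration>
```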
Doubles are not perfect fractional numbers. Because of rounding errors, a
set of doubles added in different orders can produce different results
(e.g., a+b+c != b+c+a).
Because of this, if your computation happens in a different order
locally than on the Hive server, you might end up with diff
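The order sensitivity is easy to demonstrate, e.g. in Python (IEEE doubles behave the same way in Java/Hive):

```python
# Floating-point addition is not associative: the same three doubles summed
# in different orders give different results.
import math

a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
assert left != right

# An order-insensitive summation (correctly rounded) avoids the discrepancy:
assert math.fsum([a, b, c]) == math.fsum([c, b, a])
```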
Mohit,
There is nothing wrong with the query. It seems (from the line below)
that you are using a custom SerDe:
org.apache.hadoop.hive.contrib.serde2.XmlInputFormat$XmlRecordReader.(XmlInputFormat.java:76)
That seems to be causing the problem. You'd need to look into the
input format's code to se
Krishna,
I usually put it in my home directory and that works. Did you try that?
HIVE-2911 adds another location where it can be picked up from. If your
present version supports .hiverc (which is most likely the case), the home
directory should work as well.
Mark
On Mon, Dec 17, 2012 at 5:44 AM,
Hi Farah,
#2 and #4 should be easy to figure out by taking a look at the metastore
scripts. For example, I took a quick look at
https://github.com/apache/hive/blob/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.10.0.mysql.sql
and it seems like 128 is the answer to #4
#1 and #3 I am not entirely s
Hi, Periya:
Can you take a look at the patch for
https://issues.apache.org/jira/browse/HIVE-3715 and see if you can apply
a similar change to make sinc/cons more accurate for your use case? Feel
free to comment on the JIRA as well. Thanks.
Johnny
On Sat, Dec 8, 2012 at 11:23 AM, Periya.Data w
Hive supports only equi-joins.
I recommend you read some of the Hive manual before using it (e.g.
http://hive.apache.org/docs/r0.9.0/language_manual/joins.html
https://cwiki.apache.org/Hive/languagemanual-joins.html).
In its first sentence it says "Only equality joins, outer joins, and left
semi joins are
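A Python sketch (not Hive's implementation) of why equality is special: an equi-join can be computed by hashing on the join key, so matching rows can be routed to the same bucket or reducer, which is what makes it easy to parallelise.

```python
# Sketch of a hash equi-join: build a hash table on one side's key,
# probe with the other side. This only works for equality predicates.
from collections import defaultdict

def hash_equi_join(left, right, key):
    buckets = defaultdict(list)
    for row in left:                          # build side, grouped by key
        buckets[row[key]].append(row)
    joined = []
    for row in right:                         # probe side
        for match in buckets.get(row[key], []):
            joined.append({**match, **row})
    return joined

users = [{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]
orders = [{"id": 1, "item": "book"}, {"id": 1, "item": "pen"}]

joined = hash_equi_join(users, orders, "id")
assert [r["name"] for r in joined] == ["ann", "ann"]
assert [r["item"] for r in joined] == ["book", "pen"]
```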
Hi,
Does anyone know the SQL limits for Hive? In particular:
1. The maximum number of tables in a join, i.e., the maximum number of tables
in a select query
2. The maximum number of columns when creating an index
3. The maximum size of a SQL string accepted by the ODBC driver
4.
You raise an important point; "metadata" commands like create table and
alter table only affect metadata, not the actual data itself. So, you have
to write the files into the partition directories yourself and in the
correct schema. One way to do the latter is to stage the raw data in a
"temporary"
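As a sketch of what the staged layout looks like on disk (the table name, partition column, and file names below are all hypothetical), the data files end up under Hive's key=value partition directories:

```python
# Hypothetical example: "alter table ... add partition" only records
# metadata; the data files themselves must land under the partition's
# dt=<value> directory, written in the table's column order.
import os, tempfile

warehouse = tempfile.mkdtemp()            # stand-in for the warehouse dir
part_dir = os.path.join(warehouse, "events", "dt=2012-12-17")
os.makedirs(part_dir)

# A tab-delimited file matching the table schema (id, name):
with open(os.path.join(part_dir, "part-00000"), "w") as f:
    f.write("1\talice\n2\tbob\n")

written = os.listdir(part_dir)
assert written == ["part-00000"]
```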
Hive doesn't support theta joins. Your best bet is to do a full cross join
between the tables, and put your range conditions into the WHERE clause.
This may or may not work, depending on the respective sizes of your tables.
The fundamental problem is that parallelising a theta (or range) join via
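A Python sketch of that rewrite (toy data, not Hive code): enumerate the cross product and keep only the pairs that pass the range predicate, which is correct but quadratic in the input sizes.

```python
# Theta (range) join expressed as cross join + filter: O(len(a) * len(b)).
from itertools import product

def theta_join(a, b, pred):
    return [(x, y) for x, y in product(a, b) if pred(x, y)]

events = [5, 12, 27]                  # e.g. timestamps
windows = [(0, 10), (10, 20)]         # e.g. (lo, hi) ranges

# Keep pairs where the event falls inside the window: lo <= e < hi
hits = theta_join(events, windows, lambda e, w: w[0] <= e < w[1])
assert hits == [(5, (0, 10)), (12, (10, 20))]
```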
Ah, it seems the JSON parser issue was due to my Avro schema having //
comments. I have seen some comments on the web saying that this parser can
be configured to accept comments.
Is there a Hive property that can be passed to the JSON parser to allow
comments in Avro schemas?
--
Alexandre Fouc
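I'm not sure of a Hive-side property for this. One workaround, sketched in Python with a made-up schema, is to strip whole-line // comments before the schema reaches the JSON parser (note that a naive stripper like this would mangle '//' occurring inside string values, e.g. URLs):

```python
# Strip whole-line '//' comments from a JSON-ish Avro schema before parsing.
# Caveat: does not handle '//' appearing inside string values.
import json, re

def strip_line_comments(text):
    return re.sub(r'^\s*//.*$', '', text, flags=re.MULTILINE)

schema_src = """
// user event record (a comment the stock JSON parser rejects)
{"type": "record", "name": "Event",
 "fields": [{"name": "id", "type": "long"}]}
"""

schema = json.loads(strip_line_comments(schema_src))
assert schema["name"] == "Event"
assert schema["fields"][0]["type"] == "long"
```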
Can you explain your needs? Maybe there is an alternative way;
a query alone is not of much help.
On Mon, Dec 17, 2012 at 7:17 PM, Ramasubramanian Narayanan <
ramasubramanian.naraya...@gmail.com> wrote:
> Hi,
>
> We are trying to build a tree structure in a table.. hence we have the
> left and rig
Does anybody have an idea about this?
https://issues.apache.org/jira/browse/HIVE-3810
2012/12/16 Qiang Wang
> glad to receive your reply!
>
> here is my point:
> Firstly, I think HiveHistoryViewer is inconsistent with HiveHistory.
> Secondly, the hive server may be deployed on Linux, but the client can be
Hi,
We are trying to build a tree structure in a table, hence we have the left
and right limits.
We can't use a where clause for that.
regards,
Rams
On Mon, Dec 17, 2012 at 6:53 PM, Nitin Pawar wrote:
> hive is not mysql :)
>
>
> On Mon, Dec 17, 2012 at 6:50 PM, Ramasubramanian Narayanan <
> ram
Thanks for the replies.
I went for the hiverc option. Unfortunately, with the version of Hive I'm
using, it meant I had to place the file in a bin directory. Our sysadmin
was not pleased, but it looks like that issue is fixed in a later version
of Hive (https://issues.apache.org/jira/browse/HIVE-
Hi,
I have an Avro table with a schema that is around 8000 chars, and cannot query
from it.
First I had an issue when creating the table: Hive threw an exception because
the field in MySQL (varchar(4000)) is too small. So I altered the column to
varchar(1) and it fixed this part.
But
hive is not mysql :)
On Mon, Dec 17, 2012 at 6:50 PM, Ramasubramanian Narayanan <
ramasubramanian.naraya...@gmail.com> wrote:
> Hi,
>
> But it is working fine in MySql...
>
> mysql> select count(A1.id) as LVL, A2.id, A2.code, A2.short_name, A2.lft,
> A2.rgt from product A1 join product A2 on (A
Hi,
But it is working fine in MySql...
mysql> select count(A1.id) as LVL, A2.id, A2.code, A2.short_name, A2.lft,
A2.rgt from product A1 join product A2 on (A1.lft <= A2.lft and A1.rgt >=
A2.rgt) group by A2.id, A2.code, A2.short_name, A2.lft, A2.rgt;
+-+--+--+--+--
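For reference, the nested-set logic that query relies on can be sketched in Python (the sample lft/rgt values are made up): A1 is an ancestor-or-self of A2 exactly when A1.lft <= A2.lft and A1.rgt >= A2.rgt, so counting such rows yields each node's level.

```python
# Nested-set model: each node stores an interval (lft, rgt); a node's
# ancestors-or-self are exactly the nodes whose interval contains its own.
nodes = {                 # name: (lft, rgt), hypothetical sample tree
    "root":  (1, 8),
    "child": (2, 5),
    "leaf":  (3, 4),
    "other": (6, 7),
}

def level(name):
    lft, rgt = nodes[name]
    return sum(1 for l, r in nodes.values() if l <= lft and r >= rgt)

assert [level(n) for n in ("root", "child", "leaf", "other")] == [1, 2, 3, 2]
```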
Are you trying to do a self join with less-than and greater-than without
having anything in the where clause?
I doubt that is going to work, because less-than and greater-than will
always need an upper or lower limit to start the comparison (even in a
join statement).
so try something like
s
Hello, and thank you both for your answers...
I think I found the problem... keep in mind I'm quite new to all this
Hive/Hadoop stuff :)
I think my problem was due to the fact that the create table statement had
the partition defined but the information was not partitioned on the file
system (it w
I am using a SerDe in Hive to store data in a Hive table from an XML file.
Whenever I retrieve data using the command select * from table, it gives all
records from the table.
But when I want to extract an individual column, it gives an error.
Please tell me how I can retrieve a single column from this table.
Th
select * will just hdfs cat your file.
When you are using a SerDe, do you have a column separator in place? If not,
can you do select * from table limit 1?
Normally, to get a single column you should do select column from table
where column=value.
On Mon, Dec 17, 2012 at 1:51 PM, Mohit Chaudhary01 <
moh