Both parameters are set to false by default:
hive> set hive.optimize.index.filter;
hive.optimize.index.filter=false
hive> set hive.orc.splits.include.file.footer;
hive.orc.splits.include.file.footer=false
I don't think hive.server2.idle.session.timeout can be set via the client; it
might need to be set on the server side.
So before you run your query, you might need to restart HS2 with that
property set, but I'm not sure that's what you are looking for.
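As a sketch of the server-side change suggested above, the property would go
into hive-site.xml on the HiveServer2 host before restarting it (the one-hour
value below is purely illustrative, not from this thread):

```xml
<!-- hive-site.xml on the HiveServer2 host; value is illustrative -->
<property>
  <name>hive.server2.idle.session.timeout</name>
  <value>3600000</value> <!-- milliseconds; 3600000 ms = 1 hour -->
</property>
```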
On Sun, Nov 29, 2015 at 11:50 PM, reena upadhyay
I believe updates and deletes have always had this constraint. It's at
least hinted at by:
https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-ConfigurationValuestoSetforINSERT,UPDATE,DELETE
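For reference, the wiki page linked above lists a minimal set of properties
required for INSERT/UPDATE/DELETE; a hedged sketch of those settings (the
compactor properties belong on the metastore/server side, not the client):

```sql
-- Settings per the Hive Transactions wiki page linked above.
SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;           -- not required from Hive 2.0 onward
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on=true;      -- metastore side
SET hive.compactor.worker.threads=1;       -- metastore side
```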
On Mon, Mar 7, 2016 at 7:46 PM, Mich Talebzadeh
wrote:
> Hi,
>
> I notice
> Is this something new in Hive 2, as I don't recall having this issue
> before?
No.
> CREATE TABLE `sales3`(
> `prod_id` bigint,
> STORED AS INPUTFORMAT
Hi,
I noticed this one in Hive2.
insert into sales3 select * from smallsales;
FAILED: SemanticException [Error 10297]: Attempt to do update or delete on
table sales3 that does not use an AcidOutputFormat or is not bucketed
Is this something new in Hive 2, as I don't recall having this issue before?
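A hedged sketch of a table definition that satisfies the requirements the
error names (an AcidOutputFormat such as ORC, bucketing, and the transactional
flag); the column list and bucket count are illustrative, not from this thread:

```sql
-- Illustrative ACID-compatible DDL: ORC storage (an AcidOutputFormat),
-- bucketed, and marked transactional. Columns/buckets are examples only.
CREATE TABLE sales3 (
  prod_id BIGINT
)
CLUSTERED BY (prod_id) INTO 7 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
```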
Hi All,
Following on from our parquet vs orc discussion, today I observed
hive's alter table ... concatenate command remove rows from an ORC
formatted table.
1. Has anyone else observed this (fuller description below)? And
2. How do parquet users handle the file fragmentation issue?
Desc
Hi,
We had planned to use the Record Separator as the delimiter for our Hive table.
When we searched the Unicode character list, we found that "Record Separator"
has code point U+001E. We used "\u001E" in our Hive table script as the
delimiter. After the query completed, we went to HDFS
a
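As a hedged sketch of the DDL described above (the table and column names are
invented), the delimiter is commonly written in Hive DDL as an octal escape:
U+001E is decimal 30, i.e. '\036'. Whether a given Hive version also accepts
the "\u001E" form is an assumption worth verifying:

```sql
-- Illustrative only: table/column names are made up.
-- '\036' is the octal escape for U+001E (Record Separator, decimal 30).
CREATE TABLE demo_rs_delimited (
  col1 STRING,
  col2 STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\036'
STORED AS TEXTFILE;
```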
> cvarchar(2)
...
> Num Buckets: 7
I suspect this might be related to having 0 row files in the buckets not
having any recorded schema.
You can also experiment with hive.optimize.index.filter=false, to see if
the zero row case is artificially produced via predicate push-down.
Any help on this?
On 3/3/2016 2:38 PM, mahender bigdata wrote:
Hi,
I'm a bit confused about which character should be taken as the delimiter
for a Hive table generically. Can anyone suggest the best Unicode
character, one which doesn't appear as part of the data?
Here are a couple of options I'm thinking of:
Hi,
You are looking at the top 25 of the result set, so you will have to get the
full result set before taking the top 25.
Something like this
select rs.prod_id, rs.score from
(
select prod_id, count(prod_id) AS score from sales GROUP BY prod_id ORDER BY
score DESC
) rs
LIMIT 25;
HTH
Dr Mich Talebzadeh
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics
This should help you; try rank/dense_rank as appropriate and mold it to
your use case.
Regards
Dev
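A hedged sketch of the windowing approach suggested above, using the table and
column names from the question below (the subquery alias is an assumption):

```sql
-- Illustrative rank-based version: dense_rank over the per-id counts,
-- then keep the top 25 ranks. Ties share a rank, so this may return
-- more than 25 rows; use LIMIT 25 instead if exactly 25 are wanted.
SELECT id, cnt FROM (
  SELECT id,
         COUNT(*) AS cnt,
         DENSE_RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk
  FROM T
  GROUP BY id
) ranked
WHERE rnk <= 25;
```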
On Mar 7, 2016 10:35 PM, "Awhan Patnaik" wrote:
> I have to take the first 25 IDs ranked by count(*).
I have to take the first 25 IDs ranked by count(*). But the following is
not allowed in Hive
select id from T order by count(*) desc limit 25;
Which yields a "Not yet supported place for UDAF 'count'" error. The way
around it is the following:
select id, count(*) as cnt from T group by id order by cnt desc limit 25;