I am trying to set up Hive securely, doing authorization at the metastore.
However, there is a problem.
I have relied on the Hive JIRA HIVE-3705 to decide the configuration, which was set
as below:
javax.jdo.option.ConnectionURL
javax.jdo.option.ConnectionDriverName
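For context, the metastore-side authorization that HIVE-3705 introduced is normally enabled with these hive-site.xml properties (shown as name=value shorthand; this is a sketch of the usual setup, not my exact file):
hive.metastore.pre.event.listeners=org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener
hive.security.metastore.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
hive.security.metastore.authenticator.manager=org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator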
It seems stmt.setFetchSize(1); can be called before execution
(without casting).
2013/7/3 Christian Schneider :
> Hi, I browsed through the sources and found a way to tune the JDBC
> ResultSet.next() performance.
>
> final Connection con =
> DriverManager.getConnection("jdbc:hive2://carolin:10
For the time being I have added the create-HDFS-dir step to my Hive script… got to
keep moving on… can't wait for the ideal solution :-) but would love to know what the
ideal solution is!
!hdfs dfs -mkdir /user/beeswax/warehouse/impressions_hive_stats/outpdir_impressions_header/2013-07-01/record_counts;
This is the INSERT that fails:
INSERT OVERWRITE DIRECTORY
'/user/beeswax/warehouse/impressions_hive_stats/outpdir_impressions_header/2013-07-01/record_counts'
select 'outpdir_impressions_header', '2013-07-01', 'record_counts',
'all_servers', count(*) from outpdir_impressions_header where
header_date_part
You can get the Hive 0.9 Oracle script here:
https://github.com/apache/hive/blob/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.9.0.oracle.sql
On Wed, Jul 3, 2013 at 1:22 PM, Raj Hadoop wrote:
> Hi,
>
> When I installed Hive earlier on my machine I used a oracle hive meta
> script. Please
Hi,
When I installed Hive earlier on my machine I used an Oracle Hive metastore script.
Please find the script attached. Hive worked fine for me on this box with no
issues.
I am trying to install Hive on another machine with a different Oracle metastore.
I executed the metastore script, but the issue I am having is
Well, a couple of comments.
1. You didn't have to change your Hive variable to a date. In your case
year = floor(in_co_an_mois / 100) and month = cast(in_co_an_mois % 100 as int), just as I
mentioned in my first reply (see the sketch after these comments). :) But given that you did, maybe that'll make
things easier for you down the road.
2. the 'into' c
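For example (a sketch only; ma_table is a placeholder, and I'm treating in_co_an_mois as a column here; with a hiveconf variable you'd substitute ${hiveconf:in_co_an_mois}):
select floor(in_co_an_mois / 100)       as annee,
       cast(in_co_an_mois % 100 as int) as mois
from ma_table;
-- 201307 gives annee = 2013 and mois = 7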
Hi Navis,
Thanks for your reply. Currently I'm working on the temporary solution of
changing the type of the filter mask and doing the performance test. I'm trying to read
the patches and source code now, and when I get a better understanding of the code
maybe I can help with this problem :)
--
wzc198
On Wed, Jul 3, 2013 at 5:19 AM, David Morel wrote:
>
> That is still not really answering the question, which is: why is it slower
> to run a query on a heavily partitioned table than it is on the same number
> of files in a less heavily partitioned table.
>
According to Gopal's investigations:
1) Each partition object is a row in the metastore (usually MySQL). Querying
large tables with many partitions has a longer startup time because the Hive query
planner has to fetch and process all of this meta-information, and this is not
a distributed process. It is usually fast, within a few seconds, but for very large
partition counts it adds up.
How big were the files in each case in your experiment? Having lots of
small files will add Hadoop overhead.
Also, it would be useful to know the execution times of the map and reduce
tasks. The rule of thumb is that at under 20 seconds each, or so, you're
paying a significant fraction of the execution time in task startup overhead.
Unfortunately the IP is stored with each partition in the metastore database.
I once did an update on the metadata for our server to replace all the old
IPs with the new IPs. It's not pretty, but it actually works.
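For reference, the table/partition locations live in the SDS table (and the database locations in DBS) of the metastore schema, so on a MySQL-backed metastore the rewrite is roughly this (a sketch only; the hdfs:// URIs are made up, and back up the metastore database before touching it):
-- rewrite the NameNode address embedded in every storage location
UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://10.0.0.1:8020', 'hdfs://10.0.0.2:8020');
UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://10.0.0.1:8020', 'hdfs://10.0.0.2:8020');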
On 28-6-2013 06:29, Manickam P wrote:
Hi,
What are the steps one should follow to move
On 2 Jul 2013, at 16:51, Owen O'Malley wrote:
> On Tue, Jul 2, 2013 at 2:34 AM, Peter Marron <
> peter.mar...@trilliumsoftware.com> wrote:
>
>> Hi Owen,
>>
>>
>> I’m curious about this advice about partitioning. Is there some
>> fundamental reason why Hive is slow when the n
Hi, I browsed through the sources and found a way to tune the JDBC
ResultSet.next() performance.
final Connection con =
    DriverManager.getConnection("jdbc:hive2://carolin:1/default", "hive", "");
final Statement stmt = con.createStatement();
final String tableName = "bigdata";
final String sql = "select * from " + tableName;
final ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
    // iterate over the rows; this next() loop is what the tuning speeds up
}
Instead of 'into' we have 'as' in Hive,
so your query will be: select min(dt_jour) as d_debut_semaine from table
where col = value
Also remember this 'as' alias is only valid while the query is being executed; it
won't be preserved once query execution is over.
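If you need to reuse the value afterwards, the usual trick is to push it into a subquery; a sketch (table and column names are placeholders):
select t.d_debut_semaine
from (select min(dt_jour) as d_debut_semaine
      from ma_table
      where col = 'some_value') t;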
On Wed, Jul 3, 2013 at 2:30 PM, Jérôme Verdier wrote:
Hi,
Thanks for your help.
I resolved the problem by changing my variable in_co_an_mois into a normal
date format, and extracting month and year using the appropriate functions:
year() and month().
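For example, once the value is a normal 'yyyy-MM-dd' string (a sketch; ma_table and ma_date are placeholders):
select year(ma_date) as annee, month(ma_date) as mois from ma_table;
-- year('2013-07-03') = 2013, month('2013-07-03') = 7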
But I have a new question:
the PL/SQL script I have to translate into Hive is written like this:
S
Hi Jérôme,
What about the from_unixtime and unix_timestamp UDFs?
from_unixtime() accepts a bigint.
My 2 cents,
Paul
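A quick sketch of that combination on the yyyyMM integer from this thread (ma_table is a placeholder; unix_timestamp takes a string plus a pattern, so the integer is cast first):
select from_unixtime(unix_timestamp(cast(in_co_an_mois as string), 'yyyyMM'), 'yyyy-MM-dd') as premier_jour
from ma_table;
-- 201307 becomes '2013-07-01', which year() and month() can then work on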
From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Wednesday, July 3, 2013 09:29
To: user@hive.apache.org
Subject: Re: Dealing with differents date f
The easiest way in this case would be to write a small UDF.
As Stephen suggested, it's just a number, so you can do maths to extract the year
and month out of the number and then do the comparison.
Also, as far as I know, 201307 is not a supported date format anywhere.
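For instance, staying with plain integer maths on the yyyyMM value (a sketch only; ma_table and the column are placeholders):
select case when in_co_an_mois % 100 = 12
            then in_co_an_mois + 89   -- December rolls over: 201312 + 89 = 201401
            else in_co_an_mois + 1
       end as mois_suivant
from ma_table
where in_co_an_mois >= 201301;        -- yyyyMM integers sort chronologically, so plain comparisons work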
On Wed, Jul 3, 2013 at 12:55 PM, Jér
Hi Stephen,
Thanks for your reply.
The problem is that my input date is this: in_co_an_mois (format: yyyyMM,
integer); for example, this month we have 201307,
and I have to deal with this date: add one month, compare it to another date,
etc...
The problem is that apparently there is no way to do