Re: [VOTE] Apache Hive 2.1.1 Release Candidate 1

2016-12-08 Thread Jesus Camacho Rodriguez
@Sergio, it counts :)

Thanks everyone for testing and voting!

With 4 +1 PMC votes and more than 192 hours now having passed, the vote for
releasing 2.1.1 has passed. I will publish the artifacts shortly.

--
Jesús






On 12/7/16, 11:00 PM, "Sergio Pena"  wrote:

>Jesus,
>
>I was checking the md5 incorrectly. The md5 files from the links you
>provided are correct. I was comparing against the md5 of the files I had just
>built myself, and those differ (due to different systems and OS, of course).
>
>The release is good (+1, if it counts).
>
>- Sergio
>
>On Wed, Dec 7, 2016 at 4:38 PM, Gary Gregory  wrote:
>
>> For the build, please see https://issues.apache.org/jira/browse/HIVE-15111
>>
>> Another Windows issue: https://issues.apache.org/jira/browse/HIVE-15152
>>
>> Gary
>>
>> On Wed, Dec 7, 2016 at 10:52 AM, Jesus Camacho Rodriguez <
>> jcamachorodrig...@hortonworks.com> wrote:
>>
>> > @Gary, thanks for the feedback. I ran the test suite on OS X 10.11 and did
>> > not hit problems of that kind; I attach the trace for the ORC module tests
>> > at the end of this email. Indeed, I did not run the test suite on Windows
>> > (and we probably do not do that regularly enough, which is a valid point,
>> > but one that should be raised in a separate issue, not in the release vote
>> > itself).
>> > You mentioned that the problem has existed in master for a while. Is there
>> > a JIRA case to track it? At which exact version did the tests start
>> > failing? Have other community members using Hive on Windows experienced
>> > similar problems?
>> > This does not look like an issue introduced in 2.1.1, so it should not
>> > block the release. Nevertheless, as soon as it gets solved, I am amenable
>> > to creating a new bug-fix release 2.1.2 that includes it.
>> > Please create a JIRA case with all that information if there is not one;
>> > we can follow up from there. If you have a fix sketched out, you are
>> > welcome to contribute it. I would also like to ask you to reconsider the
>> > -1 (even if it is non-binding).
>> >
>> > @Thejas, sure, I double-checked 2.1.0 and 2.0.0 and indeed previous release
>> > notes were not included in the file. But I agree it is good practice; I
>> > have logged HIVE-15380 and will change the RELEASE_NOTES file in the
>> > branches.
>> >
>> > --
>> > Jesús
>> >
>> >
>> >
>> > ---
>> >  T E S T S
>> > ---
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestBitFieldReader
>> > Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec
>> > - in org.apache.orc.impl.TestBitFieldReader
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestBitPack
>> > Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.775
>> sec
>> > - in org.apache.orc.impl.TestBitPack
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestColumnStatisticsImpl
>> > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec
>> > - in org.apache.orc.impl.TestColumnStatisticsImpl
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestDataReaderProperties
>> > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec
>> > - in org.apache.orc.impl.TestDataReaderProperties
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestDynamicArray
>> > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec
>> > - in org.apache.orc.impl.TestDynamicArray
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestInStream
>> > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.147 sec
>> > - in org.apache.orc.impl.TestInStream
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestIntegerCompressionReader
>> > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec
>> > - in org.apache.orc.impl.TestIntegerCompressionReader
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; support was removed in 8.0
>> > Running org.apache.orc.impl.TestMemoryManager
>> > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.368 sec
>> > - in org.apache.orc.impl.TestMemoryManager
>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>> > MaxPermSize=512m; suppo

Re: [VOTE] Apache Hive 2.1.1 Release Candidate 1

2016-12-08 Thread Jesus Camacho Rodriguez
I pushed the send button too quickly. Let's tally the votes correctly:

4 +1 votes:
* Alan Gates (binding)
* Ashutosh Chauhan (binding)
* Thejas Nair (binding)

* Sergio Peña (non-binding)

1 -1 vote:
* Gary Gregory (non-binding)

Thanks,
Jesús





On 12/8/16, 9:08 AM, "Jesus Camacho Rodriguez" 
 wrote:

>@Sergio, it counts :)
>
>Thanks everyone for testing and voting!
>
>With 4 +1 PMC votes and more than 192 hours now having passed, the vote for
>releasing 2.1.1 has passed. I will publish the artifacts shortly.
>
>--
>Jesús
>
>
>
>
>
>
>On 12/7/16, 11:00 PM, "Sergio Pena"  wrote:
>
>>Jesus,
>>
>>I was checking the md5 incorrectly. The md5 files from the links you
>>provided are correct. I was comparing against the md5 of the files I had just
>>built myself, and those differ (due to different systems and OS, of course).
>>
>>The release is good (+1, if it counts).
>>
>>- Sergio
>>
>>On Wed, Dec 7, 2016 at 4:38 PM, Gary Gregory  wrote:
>>
>>> For the build, please see https://issues.apache.org/jira/browse/HIVE-15111
>>>
>>> Another Windows issue: https://issues.apache.org/jira/browse/HIVE-15152
>>>
>>> Gary
>>>
>>> On Wed, Dec 7, 2016 at 10:52 AM, Jesus Camacho Rodriguez <
>>> jcamachorodrig...@hortonworks.com> wrote:
>>>
>>> > @Gary, thanks for the feedback. I ran the test suite on OS X 10.11 and did
>>> > not hit problems of that kind; I attach the trace for the ORC module tests
>>> > at the end of this email. Indeed, I did not run the test suite on Windows
>>> > (and we probably do not do that regularly enough, which is a valid point,
>>> > but one that should be raised in a separate issue, not in the release vote
>>> > itself).
>>> > You mentioned that the problem has existed in master for a while. Is there
>>> > a JIRA case to track it? At which exact version did the tests start
>>> > failing? Have other community members using Hive on Windows experienced
>>> > similar problems?
>>> > This does not look like an issue introduced in 2.1.1, so it should not
>>> > block the release. Nevertheless, as soon as it gets solved, I am amenable
>>> > to creating a new bug-fix release 2.1.2 that includes it.
>>> > Please create a JIRA case with all that information if there is not one;
>>> > we can follow up from there. If you have a fix sketched out, you are
>>> > welcome to contribute it. I would also like to ask you to reconsider the
>>> > -1 (even if it is non-binding).
>>> >
>>> > @Thejas, sure, I double-checked 2.1.0 and 2.0.0 and indeed previous release
>>> > notes were not included in the file. But I agree it is good practice; I
>>> > have logged HIVE-15380 and will change the RELEASE_NOTES file in the
>>> > branches.
>>> >
>>> > --
>>> > Jesús
>>> >
>>> >
>>> >
>>> > ---
>>> >  T E S T S
>>> > ---
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestBitFieldReader
>>> > Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.382 sec
>>> > - in org.apache.orc.impl.TestBitFieldReader
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestBitPack
>>> > Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.775
>>> sec
>>> > - in org.apache.orc.impl.TestBitPack
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestColumnStatisticsImpl
>>> > Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec
>>> > - in org.apache.orc.impl.TestColumnStatisticsImpl
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestDataReaderProperties
>>> > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.281 sec
>>> > - in org.apache.orc.impl.TestDataReaderProperties
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestDynamicArray
>>> > Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec
>>> > - in org.apache.orc.impl.TestDynamicArray
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestInStream
>>> > Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.147 sec
>>> > - in org.apache.orc.impl.TestInStream
>>> > Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
>>> > MaxPermSize=512m; support was removed in 8.0
>>> > Running org.apache.orc.impl.TestIntegerCompressionReader
>>> > Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec
>>> > - in org.apac

[HIVE/cli] need your review for HIVE-15378

2016-12-08 Thread Hui Fei
Hi all,
Could you please give suggestions and review it?
The link is https://issues.apache.org/jira/browse/HIVE-15378

thanks


[jira] [Created] (HIVE-15387) NPE in HiveServer2 webUI Historical SQL Operations section

2016-12-08 Thread Barna Zsombor Klara (JIRA)
Barna Zsombor Klara created HIVE-15387:
--

 Summary: NPE in HiveServer2 webUI Historical SQL Operations section
 Key: HIVE-15387
 URL: https://issues.apache.org/jira/browse/HIVE-15387
 Project: Hive
  Issue Type: Bug
Reporter: Barna Zsombor Klara
Priority: Minor


The runtime value on a SQLOperationDisplay may be null, which may lead to NPEs 
on the web UI.

Stack trace:
{code}
java.lang.NullPointerException
at 
org.apache.hive.generated.hiveserver2.hiveserver2_jsp._jspService(hiveserver2_jsp.java:145)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:565)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:479)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:521)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1031)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:406)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:186)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:965)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:349)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:449)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:910)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
at 
org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:76)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:609)
at 
org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:45)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
at java.lang.Thread.run(Thread.java:745)
{code}

Compiled jsp segment:
{code}
124  out.print( conf.get(ConfVars.HIVE_SERVER2_WEBUI_MAX_HISTORIC_QUERIES.varname) );
125  out.write(" Closed Queries\n\n\nUser Name\nQuery\nExecution Engine\nState\nOpened (s)\nClosed Timestamp\nLatency (s)\nDrilldown Link\n\n");
126
127  queries = 0;
128  operations = sessionManager.getOperationManager().getHistoricalSQLOperations();
129  for (SQLOperationDisplay operation : operations) {
130    queries++;
131
132    out.write("\n\n");
133    out.print( operation.getUserName() );
134    out.write("\n");
135    out.print( operation.getQueryDisplay() == null ? "Unknown" : operation.getQueryDisplay().getQueryString() );
136    out.write("\n");
137    out.print( operation.getExecutionEngine() );
138    out.write("\n");
139    out.print( operation.getState() );
140    out.write("\n");
141    out.print( operation.getElapsedTime()/1000 );
142    out.write("\n");
143    out.print( operation.getEndTime() == null ? "In Progress" : new Date(operation.getEndTime()) );
144    out.write("\n");
145    out.print( operation.getRuntime()/1000 );
146    out.write("\n");
{code}

Still trying to find a way to easily reproduce the issue.
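
A minimal sketch of the kind of null guard the generated JSP needs around the
runtime value (hypothetical helper and placeholder text, not the actual fix):
{code}
/** Sketch: format the runtime cell without dereferencing a null value. */
public final class RuntimeCell {

  /** runtimeMillis may be null while the operation has not finished. */
  public static String format(Long runtimeMillis) {
    // Mirror the existing guards for getQueryDisplay() and getEndTime():
    // fall back to a placeholder instead of unboxing null.
    return runtimeMillis == null ? "Not available" : (runtimeMillis / 1000) + "s";
  }

  public static void main(String[] args) {
    System.out.println(format(null));   // Not available
    System.out.println(format(4200L));  // 4s
  }
}
{code}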



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: confirm subscribe to dev@hive.apache.org

2016-12-08 Thread sanchita sanyal
Hi,

Please add me to the group.

Thanks,
Sanchita

On Thu, Dec 8, 2016 at 3:59 PM,  wrote:

> Hi! This is the ezmlm program. I'm managing the
> dev@hive.apache.org mailing list.
>
> I'm working for my owner, who can be reached
> at dev-ow...@hive.apache.org.
>
> To confirm that you would like
>
>sanchita.sanyal.2...@gmail.com
>
> added to the dev mailing list, please send
> a short reply to this address:
>
>dev-sc.1481192940.pkjkecaddpgdbbhncblf-sanchita.sanyal.2012=
> gmail@hive.apache.org
>
> Usually, this happens when you just hit the "reply" button.
> If this does not work, simply copy the address and paste it into
> the "To:" field of a new message.
>
> or click here:
> mailto:dev-sc.1481192940.pkjkecaddpgdbbhncblf-sanchita.sanyal.2012
> =gmail@hive.apache.org
>
> This confirmation serves two purposes. First, it verifies that I am able
> to get mail through to you. Second, it protects you in case someone
> forges a subscription request in your name.
>
> Please note that ALL Apache dev- and user- mailing lists are publicly
> archived.  Do familiarize yourself with Apache's public archive policy at
>
> http://www.apache.org/foundation/public-archives.html
>
> prior to subscribing and posting messages to dev@hive.apache.org.
> If you're not sure whether or not the policy applies to this mailing list,
> assume it does unless the list name contains the word "private" in it.
>
> Some mail programs are broken and cannot handle long addresses. If you
> cannot reply to this request, instead send a message to
>  and put the
> entire address listed above into the "Subject:" line.
>
>
> --- Administrative commands for the dev list ---
>
> I can handle administrative requests automatically. Please
> do not send them to the list address! Instead, send
> your message to the correct command address:
>
> To subscribe to the list, send a message to:
>
>
> To remove your address from the list, send a message to:
>
>
> Send mail to the following for info and FAQ for this list:
>
>
>
> Similar addresses exist for the digest list:
>
>
>
> To get messages 123 through 145 (a maximum of 100 per request), mail:
>
>
> To get an index with subject and author for messages 123-456 , mail:
>
>
> They are always returned as sets of 100, max 2000 per request,
> so you'll actually get 100-499.
>
> To receive all messages with the same subject as message 12345,
> send a short message to:
>
>
> The messages should contain one line or word of text to avoid being
> treated as sp@m, but I will ignore their content.
> Only the ADDRESS you send to is important.
>
> You can start a subscription for an alternate address,
> for example "john@host.domain", just add a hyphen and your
> address (with '=' instead of '@') after the command word:
> 
>
> To stop subscription for this address, mail:
> 
>
> In both cases, I'll send a confirmation message to that address. When
> you receive it, simply reply to it to complete your subscription.
>
> If despite following these instructions, you do not get the
> desired results, please contact my owner at
> dev-ow...@hive.apache.org. Please be patient, my owner is a
> lot slower than I am ;-)
>

[jira] [Created] (HIVE-15388) HiveParser spends lots of time in parsing queries with lots "("

2016-12-08 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HIVE-15388:
---

 Summary: HiveParser spends lots of time in parsing queries with 
lots "("
 Key: HIVE-15388
 URL: https://issues.apache.org/jira/browse/HIVE-15388
 Project: Hive
  Issue Type: Improvement
Reporter: Rajesh Balamohan


Branch: apache-master (applicable with previous releases as well)

Queries generated via tools can have lots of "(" for AND/OR conditions. This
causes huge delays in the parsing phase when the number of expressions is high
(a timing sketch follows the example below).

e.g.
{noformat}
SELECT `iata`,
       `airport`,
       `city`,
       `state`,
       `country`,
       `lat`,
       `lon`
FROM airports
WHERE ((`airports`.`airport` = "Thigpen"
    OR `airports`.`airport` = "Astoria Regional")
    OR `airports`.`airport` = "Warsaw Municipal")
    OR `airports`.`airport` = "John F Kennedy Memorial")
    OR `airports`.`airport` = "Hall-Miller Municipal")
    OR `airports`.`airport` = "Atqasuk")
    OR `airports`.`airport` = "William B Hartsfield-Atlanta Intl")
    OR `airports`.`airport` = "Artesia Municipal")
    OR `airports`.`airport` = "Outagamie County Regional")
    OR `airports`.`airport` = "Watertown Municipal")
    OR `airports`.`airport` = "Augusta State")
    OR `airports`.`airport` = "Aurora Municipal")
    OR `airports`.`airport` = "Alakanuk")
    OR `airports`.`airport` = "Austin Municipal")
    OR `airports`.`airport` = "Auburn Municipal")
    OR `airports`.`airport` = "Auburn-Opelik")
    OR `airports`.`airport` = "Austin-Bergstrom International")
    OR `airports`.`airport` = "Wausau Municipal")
    OR `airports`.`airport` = "Mecklenburg-Brunswick Regional")
    OR `airports`.`airport` = "Alva Regional")
    OR `airports`.`airport` = "Asheville Regional")
    OR `airports`.`airport` = "Avon Park Municipal")
    OR `airports`.`airport` = "Wilkes-Barre/Scranton Intl")
    OR `airports`.`airport` = "Marana Northwest Regional")
    OR `airports`.`airport` = "Catalina")
    OR `airports`.`airport` = "Washington Municipal")
    OR `airports`.`airport` = "Wainwright")
    OR `airports`.`airport` = "West Memphis Municipal")
    OR `airports`.`airport` = "Arlington Municipal")
    OR `airports`.`airport` = "Algona Municipal")
    OR `airports`.`airport` = "Chandler")
    OR `airports`.`airport` = "Altus Municipal")
    OR `airports`.`airport` = "Neil Armstrong")
    OR `airports`.`airport` = "Angel Fire")
    OR `airports`.`airport` = "Waycross-Ware County")
    OR `airports`.`airport` = "Colorado City Municipal")
    OR `airports`.`airport` = "Hazelhurst")
    OR `airports`.`airport` = "Kalamazoo County")
    OR `airports`.`airport
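
A minimal sketch of the timing harness mentioned above, assuming Hive's ql
module (org.apache.hadoop.hive.ql.parse.ParseDriver) is on the classpath; the
generated query and the condition counts are illustrative only:
{code}
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.ParseDriver;
import org.apache.hadoop.hive.ql.parse.ParseException;

/** Sketch: time the parser on a predicate with many tool-generated "(". */
public class NestedOrParseTiming {

  // Builds WHERE (((...(c1 OR c2) OR c3) ...), one "(" per extra condition.
  static String buildQuery(int conditions) {
    StringBuilder sql = new StringBuilder("SELECT `iata` FROM airports WHERE ");
    for (int i = 1; i < conditions; i++) {
      sql.append('(');
    }
    sql.append("`airports`.`airport` = \"Thigpen\"");
    for (int i = 1; i < conditions; i++) {
      sql.append(" OR `airports`.`airport` = \"Airport").append(i).append("\")");
    }
    return sql.toString();
  }

  public static void main(String[] args) throws ParseException {
    ParseDriver pd = new ParseDriver();
    for (int n : new int[] {100, 500, 1000}) {
      String sql = buildQuery(n);
      long start = System.nanoTime();
      ASTNode ast = pd.parse(sql);
      long millis = (System.nanoTime() - start) / 1_000_000;
      System.out.println(n + " conditions parsed in " + millis
          + " ms (root: " + ast.getText() + ")");
    }
  }
}
{code}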

[jira] [Created] (HIVE-15389) Backport HIVE-15239 to branch-1

2016-12-08 Thread Niklaus Xiao (JIRA)
Niklaus Xiao created HIVE-15389:
---

 Summary: Backport HIVE-15239 to branch-1
 Key: HIVE-15389
 URL: https://issues.apache.org/jira/browse/HIVE-15389
 Project: Hive
  Issue Type: Bug
  Components: Spark
Affects Versions: 2.1.0, 1.2.0
Reporter: Niklaus Xiao
Assignee: Niklaus Xiao


Environment: Hive on Spark engine
Reproduce steps:
{code}
create table a1(KEHHAO string, START_DT string) partitioned by (END_DT string);
create table a2(KEHHAO string, START_DT string) partitioned by (END_DT string);

alter table a1 add partition(END_DT='20161020');
alter table a1 add partition(END_DT='20161021');

insert into table a1 partition(END_DT='20161020') 
values('2000721360','20161001');


SELECT T1.KEHHAO,COUNT(1) FROM ( 
SELECT KEHHAO FROM a1 T 
WHERE T.KEHHAO = '2000721360' AND '20161018' BETWEEN T.START_DT AND T.END_DT-1 
UNION ALL 
SELECT KEHHAO FROM a2 T
WHERE T.KEHHAO = '2000721360' AND '20161018' BETWEEN T.START_DT AND T.END_DT-1 
) T1 
GROUP BY T1.KEHHAO 
HAVING COUNT(1)>1; 

+-+--+--+
|  t1.kehhao  | _c1  |
+-+--+--+
| 2000721360  | 2|
+-+--+--+
{code}

The expected result is no records.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 53845: 'like any' and 'like all' operators in hive

2016-12-08 Thread Simanchal Das

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53845/
---

(Updated Dec. 8, 2016, 12:31 p.m.)


Review request for hive, Carl Steinbach and Vineet Garg.


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-15229


In Teradata, the 'like any' and 'like all' operators are mostly used when
matching a text field against a number of patterns.
'like any' and 'like all' are equivalent to multiple like conditions, as in
the examples below.
--like any
select col1 from table1 where col2 like any ('%accountant%', '%accounting%', 
'%retail%', '%bank%', '%insurance%');

--Can be written using multiple like conditions
select col1 from table1 where col2 like '%accountant%' or col2 like 
'%accounting%' or col2 like '%retail%' or col2 like '%bank%' or col2 like 
'%insurance%' ;

--like all
select col1 from table1 where col2 like all ('%accountant%', '%accounting%', 
'%retail%', '%bank%', '%insurance%');

--Can be written using multiple like conditions
select col1 from table1 where col2 like '%accountant%' and col2 like 
'%accounting%' and col2 like '%retail%' and col2 like '%bank%' and col2 like 
'%insurance%' ;
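
For reference, a minimal pure-Java sketch of the intended semantics (a
hypothetical helper that rewrites SQL LIKE patterns to regexes; it is not the
GenericUDF implementation in this patch):
{code}
import java.util.List;
import java.util.regex.Pattern;

/** Sketch: 'like any' = at least one pattern matches; 'like all' = all match. */
public final class LikeAnyAll {

  // Translate a SQL LIKE pattern to a regex: % -> .*, _ -> ., rest literal.
  static Pattern toRegex(String likePattern) {
    StringBuilder regex = new StringBuilder();
    for (char c : likePattern.toCharArray()) {
      if (c == '%') {
        regex.append(".*");
      } else if (c == '_') {
        regex.append('.');
      } else {
        regex.append(Pattern.quote(String.valueOf(c)));
      }
    }
    return Pattern.compile(regex.toString(), Pattern.DOTALL);
  }

  static boolean likeAny(String value, List<String> patterns) {
    return patterns.stream().anyMatch(p -> toRegex(p).matcher(value).matches());
  }

  static boolean likeAll(String value, List<String> patterns) {
    return patterns.stream().allMatch(p -> toRegex(p).matcher(value).matches());
  }

  public static void main(String[] args) {
    List<String> patterns = List.of("%accountant%", "%bank%");
    System.out.println(likeAny("retail banking", patterns)); // true
    System.out.println(likeAll("retail banking", patterns)); // false
  }
}
{code}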

Problem statement:

Nowadays many data warehouse projects are being migrated from Teradata to
Hive.
Data engineers and business analysts regularly look for these two operators.
If we introduce these two operators in Hive, many scripts can be migrated
smoothly instead of converting these operators into multiple like conditions.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java 0dbbc1d 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g 4357328 
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 55915a6 
  ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g a82083b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 5e708d3 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLikeAll.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLikeAny.java 
PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFLikeAll.java 
PRE-CREATION 
  ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDFLikeAny.java 
PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_likeall_wrong1.q PRE-CREATION 
  ql/src/test/queries/clientnegative/udf_likeany_wrong1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/udf_likeall.q PRE-CREATION 
  ql/src/test/queries/clientpositive/udf_likeany.q PRE-CREATION 
  ql/src/test/results/clientnegative/udf_likeall_wrong1.q.out PRE-CREATION 
  ql/src/test/results/clientnegative/udf_likeany_wrong1.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/udf_like.q.out 8ffcf9b 
  ql/src/test/results/clientpositive/udf_likeall.q.out PRE-CREATION 
  ql/src/test/results/clientpositive/udf_likeany.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/53845/diff/


Testing
---

Junit test cases and query.q files are attached


Thanks,

Simanchal Das



Request for addition as contributor

2016-12-08 Thread Adam Szita
Hi,

Can you add my userid (szita) as contributor to Hive please.

Thanks,
Adam


Re: Request for addition as contributor

2016-12-08 Thread Alan Gates
On JIRA or the wiki?  I tried to add you to the JIRA but it didn't find a szita 
or Adam Szita user.  I'm not sure I have permission to add you to the wiki.

Alan.

> On Dec 8, 2016, at 07:21, Adam Szita  wrote:
> 
> Hi,
> 
> Can you add my userid (szita) as contributor to Hive please.
> 
> Thanks,
> Adam



[jira] [Created] (HIVE-15390) Orc reader unnecessarily reading stripe footers with hive.optimize.index.filter set to true

2016-12-08 Thread Abhishek Somani (JIRA)
Abhishek Somani created HIVE-15390:
--

 Summary: Orc reader unnecessarily reading stripe footers with 
hive.optimize.index.filter set to true
 Key: HIVE-15390
 URL: https://issues.apache.org/jira/browse/HIVE-15390
 Project: Hive
  Issue Type: Bug
  Components: ORC
Affects Versions: 1.2.1
Reporter: Abhishek Somani
Assignee: Abhishek Somani


For a split given to a task, the task's ORC reader unnecessarily reads stripe
footers for stripes that are not its responsibility to read. This happens with
hive.optimize.index.filter set to true.

Assuming one split per task (no Tez grouping considered), a task should not
need to read beyond the split's end offset. Even with split computation
strategies where a split's end offset can fall in the middle of a stripe, it
should not need to read more than one stripe beyond the split's end offset
(to fully read a stripe that started inside it). However, I see that some
tasks make unnecessary filesystem calls to read all the stripe footers in the
file from the split's start offset to the end of the file.
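
For illustration, a minimal sketch of the selection rule described above (plain
Java with a hypothetical StripeInfo type, not the actual ORC reader code):
{code}
import java.util.ArrayList;
import java.util.List;

/** Sketch: which stripes a split [splitStart, splitStart + splitLength) owns. */
public final class StripeSelection {

  /** Hypothetical stand-in for ORC stripe metadata. */
  static final class StripeInfo {
    final long offset;   // byte offset of the stripe within the file
    final long length;   // stripe length in bytes
    StripeInfo(long offset, long length) { this.offset = offset; this.length = length; }
  }

  static List<StripeInfo> stripesForSplit(List<StripeInfo> stripes,
                                          long splitStart, long splitLength) {
    long splitEnd = splitStart + splitLength;
    List<StripeInfo> selected = new ArrayList<>();
    for (StripeInfo stripe : stripes) {
      // A stripe belongs to the split that contains its start offset. Stripes
      // starting at or beyond splitEnd belong to other tasks, so reading their
      // footers from this task is wasted filesystem work.
      if (stripe.offset >= splitStart && stripe.offset < splitEnd) {
        selected.add(stripe);
      }
    }
    return selected;
  }
}
{code}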



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 53204: HIVE-15076 Improve scalability of LDAP authentication provider group filter

2016-12-08 Thread Aihua Xu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53204/#review158532
---




common/src/java/org/apache/hadoop/hive/conf/HiveConf.java (line 2425)


Just curious why we don't just put the constant string 
"hive.server2.authentication.ldap.userMembershipKey" here like most of other 
entries?



service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
(line 90)


This seems to be a useful info that will help in diagnostics. Wondering why 
changes from info to debug level?



service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
(line 115)


This should be info level which will be consistent with 
GroupMembershipKeyFilter class.



service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
(line 124)


Seems 'warn' is not necessary since that could be expected in the for loop, 
right?



service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
(line 132)


Since we are throwing the exception, I guess such debug may be redundant. 
We should display such exception in the caller somewhere.



service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
(line 139)


Seems this could be an info-level message.



service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
(line 145)


You may need to change message since it's expected that the user is not in 
some groups. Probably change to "Cannot match user ... and group ..." since 
"Failed to" seems to be an error.



service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java (line 138)


Looks like we won't handle NPE so NPE may cause some problems. 

If userMembershipAttr is null, will we still check userMememberOfGroup or 
not? If not, maybe we should handle such exception here. How about 
groupMembershipAttr above? Seems we will have such issue as well.



service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java
 (line 265)


You may need to add some tests for the default configuration which is null 
for HIVE_SERVER2_PLAIN_LDAP_USERMEMBERSHIP_KEY.


- Aihua Xu


On Dec. 8, 2016, 12:45 a.m., Illya Yalovyy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/53204/
> ---
> 
> (Updated Dec. 8, 2016, 12:45 a.m.)
> 
> 
> Review request for hive, Aihua Xu, Ashutosh Chauhan, Chaoyu Tang, and Szehon 
> Ho.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-15076 Improve scalability of LDAP authentication provider group filter
> 
> https://issues.apache.org/jira/browse/HIVE-15076
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 5ea9751 
>   service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java 33b6088 
>   service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
> 152c4b2 
>   service/src/java/org/apache/hive/service/auth/ldap/LdapSearch.java 65076ea 
>   service/src/java/org/apache/hive/service/auth/ldap/Query.java b8bf938 
>   service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java 
> e9172d3 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAtnProviderWithMiniDS.java
>  cd62935 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java
>  4fad755 
>   
> service/src/test/org/apache/hive/service/auth/ldap/LdapAuthenticationTestCase.java
>  acde8c1 
>   service/src/test/org/apache/hive/service/auth/ldap/TestGroupFilter.java 
> 0cc2ead 
>   service/src/test/org/apache/hive/service/auth/ldap/TestLdapSearch.java 
> 499b624 
>   service/src/test/org/apache/hive/service/auth/ldap/TestQueryFactory.java 
> 3054e33 
>   service/src/test/resources/ldap/ad.example.com.ldif PRE-CREATION 
>   service/src/test/resources/ldap/microsoft.schema.ldif PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/53204/diff/
> 
> 
> Testing
> ---
> 
> Build succeeded.
> 
> Test results:
> 
> Tests run: 149, Failures: 0, Errors: 0, Skipped: 0
> 
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 03:14 min
> [INFO] Finished at: 201

Re: Review Request 53204: HIVE-15076 Improve scalability of LDAP authentication provider group filter

2016-12-08 Thread Illya Yalovyy


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > common/src/java/org/apache/hadoop/hive/conf/HiveConf.java, line 2426
> > 
> >
> > Just curious why we don't just put the constant string 
> > "hive.server2.authentication.ldap.userMembershipKey" here like most of 
> > other entries?

Because it is used in several places, in particular in documentation. Putting a
literal string in documentation is not maintainable, because later someone can
change the string and forget to update it in all the places, and the
documentation would become stale. In such a big project that will be a problem.
JavaDoc has means to prevent that from happening by referencing string
constants in documentation sections.
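
A small illustration of that pattern (hypothetical class and constant names,
not the actual HiveConf code): the Javadoc {@value} tag expands to the
constant's literal value, so the documentation cannot drift from the string
used in code.
{code}
/** Sketch: one constant shared by code and Javadoc. */
public final class LdapConfExample {

  /** Configuration key for the LDAP user membership attribute. */
  public static final String USER_MEMBERSHIP_KEY =
      "hive.server2.authentication.ldap.userMembershipKey";

  /**
   * Returns the configuration key ({@value #USER_MEMBERSHIP_KEY}) under which
   * the LDAP user membership attribute is configured. Renaming or changing
   * the constant updates this documentation automatically.
   */
  public static String userMembershipKey() {
    return USER_MEMBERSHIP_KEY;
  }

  private LdapConfExample() { }
}
{code}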


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 90
> > 
> >
> > This seems to be a useful info that will help in diagnostics. Wondering 
> > why changes from info to debug level?

I totally agree, but Naveen doesn't want to expose group names in logs. It is a 
questionable concern, but moving it to DEBUG may be a good compromise.


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 115
> > 
> >
> > This should be info level which will be consistent with 
> > GroupMembershipKeyFilter class.

Ok. I'll generate 2 log entries then: 1. INFO without group information; 2. 
DEBUG with full information. 

Does it make sense?

See Naveen's comments for more details.


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 124
> > 
> >
> > Seems 'warn' is not necessary since that could be expected in the for 
> > loop, right?

It means we have a group in the configuration that doesn't exist... Would you
recommend logging it at DEBUG level?


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 132
> > 
> >
> > Since we are throwing the exception, I guess such debug may be 
> > redundant. We should display such exception in the caller somewhere.

The exception carries a different (less descriptive) message. Please see
Naveen's comments for more details.


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 139
> > 
> >
> > Seems this could be an info-level message.

Same here.


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 145
> > 
> >
> > You may need to change message since it's expected that the user is not 
> > in some groups. Probably change to "Cannot match user ... and group ..." 
> > since "Failed to" seems to be an error.

I will update the message.


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java,
> >  line 265
> > 
> >
> > You may need to add some tests for the default configuration which is 
> > null for HIVE_SERVER2_PLAIN_LDAP_USERMEMBERSHIP_KEY.

If HIVE_SERVER2_PLAIN_LDAP_USERMEMBERSHIP_KEY is NULL this filter will not be 
used. I think we have enough test for this case. Did I get you correctly? Could 
you please provide more details about the test case you have in mind?


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java, line 
> > 138
> > 
> >
> > Looks like we won't handle NPE so NPE may cause some problems. 
> > 
> > If userMembershipAttr is null, will we still check userMememberOfGroup 
> > or not? If not, maybe we should handle such exception here. How about 
> > groupMembershipAttr above? Seems we will have such issue as well.

I think it should not happen, but I'll double check.


- Illya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53204/#review158532
---


On Dec. 8, 2016, 12:45 a.m., Illya Yalovyy wrote:
> 
> ---
> This is an automatically generated e

[jira] [Created] (HIVE-15391) Location validation for table should ignore the values for view.

2016-12-08 Thread Yongzhi Chen (JIRA)
Yongzhi Chen created HIVE-15391:
---

 Summary: Location validation for table should ignore the values 
for view.
 Key: HIVE-15391
 URL: https://issues.apache.org/jira/browse/HIVE-15391
 Project: Hive
  Issue Type: Bug
  Components: Beeline
Affects Versions: 2.2.0
Reporter: Yongzhi Chen
Assignee: Yongzhi Chen
Priority: Minor


When using schematool to do location validation, we get error messages for
views, for example:
{noformat}
In DB with Name: viewa
NULL Location for TABLE with Name: viewa
In DB with Name: viewa
NULL Location for TABLE with Name: viewb
In DB with Name: viewa
{noformat}
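
For illustration, a minimal sketch of the intended behavior (hypothetical
check, not the actual HiveSchemaTool code): skip the NULL-location report when
the entry is a view, since views carry no storage location.
{code}
/** Sketch: views legitimately have a NULL location, so do not flag them. */
public final class LocationCheck {

  public static boolean shouldReportNullLocation(String tableType, String location) {
    // Only managed/external tables are expected to carry a location.
    boolean isView = "VIRTUAL_VIEW".equalsIgnoreCase(tableType);
    return location == null && !isView;
  }

  public static void main(String[] args) {
    System.out.println(shouldReportNullLocation("MANAGED_TABLE", null)); // true: report
    System.out.println(shouldReportNullLocation("VIRTUAL_VIEW", null));  // false: ignore
  }
}
{code}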



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[ANNOUNCE] Apache Hive 2.1.1 Released

2016-12-08 Thread Jesus Camacho Rodriguez
The Apache Hive team is proud to announce the release of Apache Hive
version 2.1.1.

The Apache Hive (TM) data warehouse software facilitates querying and
managing large datasets residing in distributed storage. Built on top
of Apache Hadoop (TM), it provides, among others:

* Tools to enable easy data extract/transform/load (ETL)

* A mechanism to impose structure on a variety of data formats

* Access to files stored either directly in Apache HDFS (TM) or in other
  data storage systems such as Apache HBase (TM)

* Query execution via Apache Hadoop MapReduce and Apache Tez frameworks.

For Hive release details and downloads, please visit:
https://hive.apache.org/downloads.html

Hive 2.1.1 Release Notes are available here:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310843&version=12335838

We would like to thank the many contributors who made this release
possible.

Regards,

The Apache Hive Team




[jira] [Created] (HIVE-15392) Refactoring the validate function of HiveSchemaTool to make the output consistent

2016-12-08 Thread Aihua Xu (JIRA)
Aihua Xu created HIVE-15392:
---

 Summary: Refactoring the validate function of HiveSchemaTool to 
make the output consistent
 Key: HIVE-15392
 URL: https://issues.apache.org/jira/browse/HIVE-15392
 Project: Hive
  Issue Type: Sub-task
  Components: Metastore
Affects Versions: 2.2.0
Reporter: Aihua Xu
Assignee: Aihua Xu
Priority: Minor
 Attachments: HIVE-15392.1.patch

The validate output is not consistent. Make it more consistent.

{noformat}
Starting metastore validationValidating schema version
Succeeded in schema version validation.
Validating sequence number for SEQUENCE_TABLE
Metastore connection URL:
jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Validating tables in the schema for version 2.2.0
Expected (from schema definition) 57 tables, Found (from HMS metastore) 58 
tables
Schema table validation successful
Metastore connection URL:
jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Metastore connection URL:
jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Metastore connection URL:
jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Validating columns for incorrect NULL values
Metastore connection URL:
jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:   APP
Done with metastore validationschemaTool completed
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15393) Update Guava version

2016-12-08 Thread slim bouguerra (JIRA)
slim bouguerra created HIVE-15393:
-

 Summary: Update Guava version
 Key: HIVE-15393
 URL: https://issues.apache.org/jira/browse/HIVE-15393
 Project: Hive
  Issue Type: Sub-task
  Components: Druid integration
Affects Versions: 2.2.0
Reporter: slim bouguerra
Priority: Blocker


The Druid code base uses a newer version of Guava (16.0.1) that is not
compatible with the current version used by Hive.
FYI, the Hadoop project is moving to Guava 18; I am not sure whether it is
better to move to Guava 18 or even 19.
https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 54341: HIVE-15353: Metastore throws NPE if StorageDescriptor.cols is null

2016-12-08 Thread Anthony Hsu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54341/
---

(Updated 十二月 8, 2016, 9:23 p.m.)


Review request for hive.


Changes
---

New version no longer updates the Thrift definition but just fixes the NPEs in 
the alter_partition code path.


Bugs: HIVE-15353
https://issues.apache.org/jira/browse/HIVE-15353


Repository: hive-git


Description (updated)
---

Update alter_partition() code path to fix NPEs.
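
For context, a minimal sketch of the kind of guard involved (a hypothetical
helper, assuming the metastore API classes StorageDescriptor and FieldSchema;
it is not the actual patch):
{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.metastore.api.StorageDescriptor;

/** Sketch: normalize a possibly-null column list before comparing partitions. */
public final class ColsGuard {

  /** Returns the descriptor's columns, or an empty list if cols was never set. */
  public static List<FieldSchema> colsOrEmpty(StorageDescriptor sd) {
    // Treating a null cols list as empty avoids the NullPointerException
    // described in HIVE-15353 along the alter_partition code path.
    return (sd == null || sd.getCols() == null) ? new ArrayList<>() : sd.getCols();
  }

  private ColsGuard() { }
}
{code}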


Diffs (updated)
-

  metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java 
86565a4198d5daced5e230a41d8ada577a656268 
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 
9ea6ac40d6f0eb9081c5cfad982ffc435f15f6fd 

Diff: https://reviews.apache.org/r/54341/diff/


Testing
---

After making these changes, I no longer encounter NullPointerExceptions when 
setting cols to null in create_table, alter_table, and alter_partition calls.


Thanks,

Anthony Hsu



[jira] [Created] (HIVE-15394) HiveMetaStoreClient add_partition API should not allow partitions with a null StorageDescriptor.cols to be added

2016-12-08 Thread Anthony Hsu (JIRA)
Anthony Hsu created HIVE-15394:
--

 Summary: HiveMetaStoreClient add_partition API should not allow 
partitions with a null StorageDescriptor.cols to be added
 Key: HIVE-15394
 URL: https://issues.apache.org/jira/browse/HIVE-15394
 Project: Hive
  Issue Type: Bug
Affects Versions: 1.1.0, 2.2.0
Reporter: Anthony Hsu


Follow up to HIVE-15353.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 53204: HIVE-15076 Improve scalability of LDAP authentication provider group filter

2016-12-08 Thread Illya Yalovyy


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java, line 
> > 138
> > 
> >
> > Looks like we won't handle NPE so NPE may cause some problems. 
> > 
> > If userMembershipAttr is null, will we still check userMememberOfGroup 
> > or not? If not, maybe we should handle such exception here. How about 
> > groupMembershipAttr above? Seems we will have such issue as well.
> 
> Illya Yalovyy wrote:
> I think it should not happen, but I'll double check.

The filter will not be created for the case when
'hive.server2.authentication.ldap.userMembershipKey' is not set (NULL). That
means we don't have to handle null in this code. If this NPE happens, it means
there is a bug in the code.
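
A tiny sketch of that guard (hypothetical types, not the actual
GroupFilterFactory code): when the key is unset, the filter is simply never
created, so the non-null assumption holds downstream.
{code}
import java.util.Optional;

/** Sketch: only build the user-membership filter when the key is configured. */
public final class UserMembershipFilterFactory {

  /** Hypothetical filter type standing in for the real LDAP filter. */
  public interface Filter {
    boolean accepts(String user);
  }

  public static Optional<Filter> create(String userMembershipKey) {
    if (userMembershipKey == null || userMembershipKey.isEmpty()) {
      // No key configured: no filter is instantiated, so code inside the
      // filter (and in QueryFactory) never has to handle a null attribute.
      return Optional.empty();
    }
    return Optional.of(user -> {
      // A real implementation would query LDAP using userMembershipKey here.
      return user != null && !user.isEmpty();
    });
  }

  private UserMembershipFilterFactory() { }
}
{code}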


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 115
> > 
> >
> > This should be info level which will be consistent with 
> > GroupMembershipKeyFilter class.
> 
> Illya Yalovyy wrote:
> Ok. I'll generate 2 log entries then: 1. INFO without group information; 
> 2. DEBUG with full information. 
> 
> Does it make sense?
> 
> See Naveen's comments for more details.

Actually this is a bit different. I'll change it to INFO.


- Illya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53204/#review158532
---


On Dec. 8, 2016, 12:45 a.m., Illya Yalovyy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/53204/
> ---
> 
> (Updated Dec. 8, 2016, 12:45 a.m.)
> 
> 
> Review request for hive, Aihua Xu, Ashutosh Chauhan, Chaoyu Tang, and Szehon 
> Ho.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-15076 Improve scalability of LDAP authentication provider group filter
> 
> https://issues.apache.org/jira/browse/HIVE-15076
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 5ea9751 
>   service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java 33b6088 
>   service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
> 152c4b2 
>   service/src/java/org/apache/hive/service/auth/ldap/LdapSearch.java 65076ea 
>   service/src/java/org/apache/hive/service/auth/ldap/Query.java b8bf938 
>   service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java 
> e9172d3 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAtnProviderWithMiniDS.java
>  cd62935 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java
>  4fad755 
>   
> service/src/test/org/apache/hive/service/auth/ldap/LdapAuthenticationTestCase.java
>  acde8c1 
>   service/src/test/org/apache/hive/service/auth/ldap/TestGroupFilter.java 
> 0cc2ead 
>   service/src/test/org/apache/hive/service/auth/ldap/TestLdapSearch.java 
> 499b624 
>   service/src/test/org/apache/hive/service/auth/ldap/TestQueryFactory.java 
> 3054e33 
>   service/src/test/resources/ldap/ad.example.com.ldif PRE-CREATION 
>   service/src/test/resources/ldap/microsoft.schema.ldif PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/53204/diff/
> 
> 
> Testing
> ---
> 
> Build succeeded.
> 
> Test results:
> 
> Tests run: 149, Failures: 0, Errors: 0, Skipped: 0
> 
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 03:14 min
> [INFO] Finished at: 2016-10-26T13:53:15-07:00
> [INFO] Final Memory: 36M/1091M
> [INFO] 
> 
> 
> 
> Thanks,
> 
> Illya Yalovyy
> 
>



Re: Review Request 53204: HIVE-15076 Improve scalability of LDAP authentication provider group filter

2016-12-08 Thread Illya Yalovyy


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java, 
> > line 145
> > 
> >
> > You may need to change message since it's expected that the user is not 
> > in some groups. Probably change to "Cannot match user ... and group ..." 
> > since "Failed to" seems to be an error.
> 
> Illya Yalovyy wrote:
> I will update the message.

Usually it should just return true or false; if it fails with an exception,
then something is wrong, and that was reflected in the message. I noticed that
I'm hiding the exception, which is a very bad practice; I will fix that as
well. Maybe even a WARN log message with exception details is required here.
What do you think? Again, it should not usually happen; if it does, something
is wrong.


- Illya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53204/#review158532
---


On Dec. 8, 2016, 12:45 a.m., Illya Yalovyy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/53204/
> ---
> 
> (Updated Dec. 8, 2016, 12:45 a.m.)
> 
> 
> Review request for hive, Aihua Xu, Ashutosh Chauhan, Chaoyu Tang, and Szehon 
> Ho.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-15076 Improve scalability of LDAP authentication provider group filter
> 
> https://issues.apache.org/jira/browse/HIVE-15076
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 5ea9751 
>   service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java 33b6088 
>   service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
> 152c4b2 
>   service/src/java/org/apache/hive/service/auth/ldap/LdapSearch.java 65076ea 
>   service/src/java/org/apache/hive/service/auth/ldap/Query.java b8bf938 
>   service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java 
> e9172d3 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAtnProviderWithMiniDS.java
>  cd62935 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java
>  4fad755 
>   
> service/src/test/org/apache/hive/service/auth/ldap/LdapAuthenticationTestCase.java
>  acde8c1 
>   service/src/test/org/apache/hive/service/auth/ldap/TestGroupFilter.java 
> 0cc2ead 
>   service/src/test/org/apache/hive/service/auth/ldap/TestLdapSearch.java 
> 499b624 
>   service/src/test/org/apache/hive/service/auth/ldap/TestQueryFactory.java 
> 3054e33 
>   service/src/test/resources/ldap/ad.example.com.ldif PRE-CREATION 
>   service/src/test/resources/ldap/microsoft.schema.ldif PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/53204/diff/
> 
> 
> Testing
> ---
> 
> Build succeeded.
> 
> Test results:
> 
> Tests run: 149, Failures: 0, Errors: 0, Skipped: 0
> 
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 03:14 min
> [INFO] Finished at: 2016-10-26T13:53:15-07:00
> [INFO] Final Memory: 36M/1091M
> [INFO] 
> 
> 
> 
> Thanks,
> 
> Illya Yalovyy
> 
>



Re: Review Request 53204: HIVE-15076 Improve scalability of LDAP authentication provider group filter

2016-12-08 Thread Illya Yalovyy


> On Dec. 8, 2016, 3:55 p.m., Aihua Xu wrote:
> > service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java,
> >  line 265
> > 
> >
> > You may need to add some tests for the default configuration which is 
> > null for HIVE_SERVER2_PLAIN_LDAP_USERMEMBERSHIP_KEY.
> 
> Illya Yalovyy wrote:
> If HIVE_SERVER2_PLAIN_LDAP_USERMEMBERSHIP_KEY is NULL this filter will 
> not be used. I think we have enough test for this case. Did I get you 
> correctly? Could you please provide more details about the test case you have 
> in mind?

I think this use case is tested in #testAuthenticateWhenGroupFilterPasses().
Probably I should rename the other tests to make that clear.


- Illya


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53204/#review158532
---


On Dec. 8, 2016, 12:45 a.m., Illya Yalovyy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/53204/
> ---
> 
> (Updated Dec. 8, 2016, 12:45 a.m.)
> 
> 
> Review request for hive, Aihua Xu, Ashutosh Chauhan, Chaoyu Tang, and Szehon 
> Ho.
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HIVE-15076 Improve scalability of LDAP authentication provider group filter
> 
> https://issues.apache.org/jira/browse/HIVE-15076
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 5ea9751 
>   service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java 33b6088 
>   service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
> 152c4b2 
>   service/src/java/org/apache/hive/service/auth/ldap/LdapSearch.java 65076ea 
>   service/src/java/org/apache/hive/service/auth/ldap/Query.java b8bf938 
>   service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java 
> e9172d3 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAtnProviderWithMiniDS.java
>  cd62935 
>   
> service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java
>  4fad755 
>   
> service/src/test/org/apache/hive/service/auth/ldap/LdapAuthenticationTestCase.java
>  acde8c1 
>   service/src/test/org/apache/hive/service/auth/ldap/TestGroupFilter.java 
> 0cc2ead 
>   service/src/test/org/apache/hive/service/auth/ldap/TestLdapSearch.java 
> 499b624 
>   service/src/test/org/apache/hive/service/auth/ldap/TestQueryFactory.java 
> 3054e33 
>   service/src/test/resources/ldap/ad.example.com.ldif PRE-CREATION 
>   service/src/test/resources/ldap/microsoft.schema.ldif PRE-CREATION 
> 
> Diff: https://reviews.apache.org/r/53204/diff/
> 
> 
> Testing
> ---
> 
> Build succeeded.
> 
> Test results:
> 
> Tests run: 149, Failures: 0, Errors: 0, Skipped: 0
> 
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 03:14 min
> [INFO] Finished at: 2016-10-26T13:53:15-07:00
> [INFO] Final Memory: 36M/1091M
> [INFO] 
> 
> 
> 
> Thanks,
> 
> Illya Yalovyy
> 
>



jenkins on master is not working

2016-12-08 Thread Eugene Koifman
Hi,

Builds are failing with the error below.  Could someone take a look please?


[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hive-ptest: Compilation failure
[ERROR] No compiler is provided in this environment. Perhaps you are running on 
a JRE rather than a JDK?
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:


For example,

https://builds.apache.org/job/PreCommit-HIVE-Build/2503/console


Re: jenkins on master is not working

2016-12-08 Thread Sergio Pena
It seems Jenkins sometimes gives us a slave that does not have the environment
correctly set up.
I modified the job to include the JDK on any slave. That will hopefully fix the
issue.

On Thu, Dec 8, 2016 at 4:28 PM, Eugene Koifman 
wrote:

> Hi,
>
> Builds are failing with the error below.  Could someone take a look please?
>
>
> [ERROR] Failed to execute goal org.apache.maven.plugins:
> maven-compiler-plugin:3.1:compile (default-compile) on project
> hive-ptest: Compilation failure
> [ERROR] No compiler is provided in this environment. Perhaps you are
> running on a JRE rather than a JDK?
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
>
>
> For example,
>
> https://builds.apache.org/job/PreCommit-HIVE-Build/2503/console
>


[jira] [Created] (HIVE-15395) Don't try to intern strings from empty map

2016-12-08 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-15395:
---

 Summary: Don't try to intern strings from empty map
 Key: HIVE-15395
 URL: https://issues.apache.org/jira/browse/HIVE-15395
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan


Otherwise it unnecessarily creates another map object.
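
A minimal sketch of the guard this describes (hypothetical utility method, not
the actual metastore code):
{code}
import java.util.HashMap;
import java.util.Map;

/** Sketch: skip interning for empty maps so no new map object is allocated. */
public final class InternUtil {

  public static Map<String, String> internValues(Map<String, String> params) {
    if (params == null || params.isEmpty()) {
      // Nothing to intern: return the original reference instead of
      // allocating a fresh (still empty) map.
      return params;
    }
    Map<String, String> interned = new HashMap<>(params.size());
    for (Map.Entry<String, String> e : params.entrySet()) {
      String value = e.getValue();
      interned.put(e.getKey().intern(), value == null ? null : value.intern());
    }
    return interned;
  }

  private InternUtil() { }
}
{code}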



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15396) Basic Stats are not collected when running INSERT INTO commands on s3a

2016-12-08 Thread Sahil Takiar (JIRA)
Sahil Takiar created HIVE-15396:
---

 Summary: Basic Stats are not collected when running INSERT INTO 
commands on s3a
 Key: HIVE-15396
 URL: https://issues.apache.org/jira/browse/HIVE-15396
 Project: Hive
  Issue Type: Bug
  Components: Hive
Reporter: Sahil Takiar
Assignee: Sahil Takiar


{{numRows}} is not collected when running {{INSERT ... INTO ...}} commands 
against tables backed by S3 (and maybe even other blobstores).

The {{COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}}} entry is missing from the 
{{describe extended}} output.

Repro steps:

{code}
hive> drop table s3_table;
OK
Time taken: 1.87 seconds
hive> create table s3_table (col int) location 
's3a://[bucket-name]/stats-test/';
OK
Time taken: 3.069 seconds
hive> insert into s3_table values (1), (2), (3);
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the 
future versions. Consider using a different execution engine (i.e. spark, tez) 
or using Hive 1.X releases.
Query ID = stakiar_20161208160105_fb3df340-d5fb-4ad6-8776-4f3cae02216d
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2016-12-08 16:01:12,741 Stage-1 map = 0%,  reduce = 0%
2016-12-08 16:01:16,759 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_local688636529_0004
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Loading data to table default.s3_table
MapReduce Jobs Launched:
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Time taken: 23.0 seconds
hive> select * from s3_table;
OK
1
2
3
Time taken: 0.096 seconds, Fetched: 3 row(s)
hive> describe extended s3_table;
OK
col int

Detailed Table Information  Table(tableName:s3_table, dbName:default, 
owner:stakiar, createTime:1481241657, lastAccessTime:0, retention:0, 
sd:StorageDescriptor(cols:[FieldSchema(name:col, type:int, comment:null)], 
location:s3a://cloudera-dev-hive-on-s3/stats-test, 
inputFormat:org.apache.hadoop.mapred.TextInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{}), storedAsSubDirectories:false), 
partitionKeys:[], parameters:{transient_lastDdlTime=1481241687, totalSize=6, 
numFiles=1}, viewOriginalText:null, viewExpandedText:null, 
tableType:MANAGED_TABLE)
Time taken: 0.037 seconds, Fetched: 3 row(s)
{code}
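
As a point of comparison (standard Hive syntax, not a fix for the underlying bug),
the stats can be computed explicitly and then inspected:

{code}
ANALYZE TABLE s3_table COMPUTE STATISTICS;
DESCRIBE FORMATTED s3_table;   -- look for numRows / COLUMN_STATS_ACCURATE in the table parameters
{code}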



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15397) nullscan may return incorrect results with empty tables - I

2016-12-08 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-15397:
---

 Summary: nullscan may return incorrect results with empty tables - 
I
 Key: HIVE-15397
 URL: https://issues.apache.org/jira/browse/HIVE-15397
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15398) metadata-only queries may return incorrect results with empty tables - II

2016-12-08 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HIVE-15398:
---

 Summary: metadata-only queries may return incorrect results with 
empty tables - II
 Key: HIVE-15398
 URL: https://issues.apache.org/jira/browse/HIVE-15398
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15399) Parser change for UniqueJoin

2016-12-08 Thread Pengcheng Xiong (JIRA)
Pengcheng Xiong created HIVE-15399:
--

 Summary: Parser change for UniqueJoin
 Key: HIVE-15399
 URL: https://issues.apache.org/jira/browse/HIVE-15399
 Project: Hive
  Issue Type: Bug
Reporter: Pengcheng Xiong
Assignee: Pengcheng Xiong


UniqueJoin was introduced in HIVE-591 ("Add Unique Join", Emil Ibrishimov via 
namit). It appears there is only one q test for unique join, uniquejoin.q, and in 
that test the unique join source only ever comes from a table. In the parser, 
however, the grammar allows the source to be any of
{code}
partitionedTableFunction | tableSource | subQuerySource | virtualTableSource
{code}
I think it would be better to change the parser and restrict the allowed sources 
to what users actually need.
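
For reference, roughly the table-sourced shape exercised by uniquejoin.q (sketched 
from memory; details may differ):

{code}
FROM UNIQUEJOIN PRESERVE T1 a (a.key), PRESERVE T2 b (b.key), PRESERVE T3 c (c.key)
SELECT a.key, b.key, c.key;
{code}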



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 54327: Add a FetchTask to REPL DUMP plan for reading dump uri, last repl id as ResultSet

2016-12-08 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/54327/
---

(Updated Dec. 9, 2016, 12:46 a.m.)


Review request for hive, Sushanth Sowmyan and Thejas Nair.


Bugs: HIVE-15333
https://issues.apache.org/jira/browse/HIVE-15333


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-15333


Diffs (updated)
-

  
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestReplicationScenarios.java
 95db9e8 
  itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniHS2.java 
c84570b 
  ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 7b63c52 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java c7389a8 
  ql/src/java/org/apache/hadoop/hive/ql/parse/EximUtil.java a0d492d 
  ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java 
8007c4e 
  ql/src/test/results/clientnegative/authorization_import.q.out 9972a8a 
  ql/src/test/results/clientnegative/exim_00_unsupported_schema.q.out 0caa42a 
  shims/common/src/main/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java 
228a972 

Diff: https://reviews.apache.org/r/54327/diff/


Testing
---


Thanks,

Vaibhav Gumashta



Re: Review Request 53204: HIVE-15076 Improve scalability of LDAP authentication provider group filter

2016-12-08 Thread Illya Yalovyy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/53204/
---

(Updated Dec. 9, 2016, 1:03 a.m.)


Review request for hive, Aihua Xu, Ashutosh Chauhan, Chaoyu Tang, and Szehon Ho.


Changes
---

1. Updated logging
2. Added exception to error messages
3. Trivial style correction
4. Test methods renamed according to the actual filter implementation names


Repository: hive-git


Description
---

HIVE-15076 Improve scalability of LDAP authentication provider group filter

https://issues.apache.org/jira/browse/HIVE-15076


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 5ea9751 
  service/src/java/org/apache/hive/service/auth/ldap/DirSearch.java 33b6088 
  service/src/java/org/apache/hive/service/auth/ldap/GroupFilterFactory.java 
152c4b2 
  service/src/java/org/apache/hive/service/auth/ldap/LdapSearch.java 65076ea 
  service/src/java/org/apache/hive/service/auth/ldap/Query.java b8bf938 
  service/src/java/org/apache/hive/service/auth/ldap/QueryFactory.java e9172d3 
  
service/src/test/org/apache/hive/service/auth/TestLdapAtnProviderWithMiniDS.java
 cd62935 
  
service/src/test/org/apache/hive/service/auth/TestLdapAuthenticationProviderImpl.java
 4fad755 
  
service/src/test/org/apache/hive/service/auth/ldap/LdapAuthenticationTestCase.java
 acde8c1 
  service/src/test/org/apache/hive/service/auth/ldap/TestGroupFilter.java 
0cc2ead 
  service/src/test/org/apache/hive/service/auth/ldap/TestLdapSearch.java 
499b624 
  service/src/test/org/apache/hive/service/auth/ldap/TestQueryFactory.java 
3054e33 
  service/src/test/resources/ldap/ad.example.com.ldif PRE-CREATION 
  service/src/test/resources/ldap/microsoft.schema.ldif PRE-CREATION 

Diff: https://reviews.apache.org/r/53204/diff/


Testing
---

Build succeeded.

Test results:

Tests run: 149, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 03:14 min
[INFO] Finished at: 2016-10-26T13:53:15-07:00
[INFO] Final Memory: 36M/1091M
[INFO] 


Thanks,

Illya Yalovyy



[jira] [Created] (HIVE-15400) EXCHANGE PARTITION should honor partition locations

2016-12-08 Thread Anthony Hsu (JIRA)
Anthony Hsu created HIVE-15400:
--

 Summary: EXCHANGE PARTITION should honor partition locations
 Key: HIVE-15400
 URL: https://issues.apache.org/jira/browse/HIVE-15400
 Project: Hive
  Issue Type: Bug
Reporter: Anthony Hsu


Currently, if you add a partition with a custom location, EXCHANGE PARTITION 
will fail with a "File ... does not exist" error:
{noformat}
drop table if exists text_partitioned;
drop table if exists text_partitioned2;

create table text_partitioned (b string) partitioned by (a int) stored as 
textfile;
create table text_partitioned2 (b string) partitioned by (a int) stored as 
textfile;

alter table text_partitioned add partition (a=1) location '/tmp/text/1';

alter table text_partitioned2 exchange partition (a=1) with table 
text_partitioned;
{noformat}

The last command fails with
{code}
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: 
java.io.FileNotFoundException File 
file:/path/to/warehouse_dir/text_partitioned/a=1 does not exist)
{code}

EXCHANGE PARTITION should honor the location that has been set for the 
partition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15401) Import constraints into HBase metastore

2016-12-08 Thread Alan Gates (JIRA)
Alan Gates created HIVE-15401:
-

 Summary: Import constraints into HBase metastore
 Key: HIVE-15401
 URL: https://issues.apache.org/jira/browse/HIVE-15401
 Project: Hive
  Issue Type: Sub-task
  Components: HBase Metastore
Affects Versions: 2.1.1
Reporter: Alan Gates
Assignee: Alan Gates


Since HIVE-15342 added support for primary and foreign keys in the HBase 
metastore we should support them in HBaseImport as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15402) LAG's PRECEDING does not work.

2016-12-08 Thread Ryu Kobayashi (JIRA)
Ryu Kobayashi created HIVE-15402:


 Summary: LAG's PRECEDING does not work.
 Key: HIVE-15402
 URL: https://issues.apache.org/jira/browse/HIVE-15402
 Project: Hive
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: Ryu Kobayashi


The example in the following manual page does not work: 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+WindowingAndAnalytics#LanguageManualWindowingAndAnalytics-LAGspecifyingalagof3rowsanddefaultvalueof0

{code}
SELECT a, LAG(a, 3, 0) OVER (PARTITION BY b ORDER BY C ROWS 3 PRECEDING)
FROM T;
{code}

{code}
FAILED: SemanticException Failed to breakup Windowing invocations into Groups. 
At least 1 group must only depend on input columns. Also check for circular 
dependencies.
Underlying error: Expecting left window frame boundary for function 
LAG((tok_table_or_col a), 3, 0) Window 
Spec=[PartitioningSpec=[partitionColumns=[(tok_table_or_col 
b)]orderColumns=[(tok_table_or_col c) ASC NULLS_FIRST]]window(start=range(3 
PRECEDING), end=currentRow)] as LAG_window_0 to be unbounded. Found : 3
{code}
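
Based on that error text (the left frame boundary is expected to be unbounded), a 
window such as the following would presumably be accepted; whether the documented 
3-row frame should also work is the open question here:

{code}
SELECT a, LAG(a, 3, 0) OVER (PARTITION BY b ORDER BY c
                             ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM T;
{code}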



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-15403) LLAP: Login with kerberos before starting the daemon

2016-12-08 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-15403:


 Summary: LLAP: Login with kerberos before starting the daemon
 Key: HIVE-15403
 URL: https://issues.apache.org/jira/browse/HIVE-15403
 Project: Hive
  Issue Type: Bug
  Components: llap
Affects Versions: 2.2.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran
Priority: Critical


In an LLAP cluster, if some nodes are kinit'ed as one user (other than the default 
hive user) and other nodes are kinit'ed as the hive user, the daemons end up under 
different paths in the ZK registry and may not be reported by the llap status tool. 
The reason is that we use UGI.getCurrentUser() when creating ZK paths, but the 
current user may not be the same across all nodes (someone would have to do a 
global kinit). If security is enabled, each daemon should, before starting up, log 
in with the kerberos principal and keytab specified for the LLAP daemon service and 
update the currently logged-in user.
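
A minimal sketch of that login step, using Hadoop's UserGroupInformation API (the 
wiring of the principal/keytab values from the daemon configuration is assumed, 
not shown):

{code}
import java.io.IOException;

import org.apache.hadoop.security.UserGroupInformation;

public class LlapDaemonLoginSketch {
  /**
   * Log in from the daemon keytab before any ZK registry paths are created, so that
   * UGI.getCurrentUser() is the service user on every node, regardless of who kinit'ed locally.
   */
  public static void loginIfSecure(String principal, String keytabFile) throws IOException {
    if (UserGroupInformation.isSecurityEnabled()) {
      // Replaces the current login user with the daemon's kerberos identity.
      UserGroupInformation.loginUserFromKeytab(principal, keytabFile);
    }
  }
}
{code}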



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)