[jira] [Updated] (HIVE-3339) Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names

2012-09-12 Thread Zhenxiao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenxiao Luo updated HIVE-3339:
---

Status: Patch Available  (was: Open)

> Change the rules in SemanticAnalyzer to use Operator.getName() instead of 
> hardcoded names
> -
>
> Key: HIVE-3339
> URL: https://issues.apache.org/jira/browse/HIVE-3339
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>Assignee: Zhenxiao Luo
>Priority: Minor
> Attachments: HIVE-3339.1.patch.txt, HIVE-3339.2.patch.txt
>
>
> This should be done for code cleanup.
> Instead of the rule being:
> SEL%
> It should say SelectOperator.getName()%
> It would make the rules more readable.
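The proposed cleanup can be illustrated with a toy model (the class and method names below are illustrative stand-ins, not Hive's actual rule API):

```java
// Toy model of the rule-name cleanup proposed in HIVE-3339: instead of a
// hardcoded operator tag like "SEL%", build the rule pattern from the
// operator's own name, so renaming an operator cannot silently break a rule.
public class RuleNameSketch {
    // Stand-in for SelectOperator's operator-name accessor in Hive.
    static String selectOperatorName() {
        return "SEL";
    }

    // Before: the rule pattern is an opaque string literal.
    static String hardcodedRule() {
        return "SEL%";
    }

    // After: the pattern is derived from the operator name.
    static String derivedRule() {
        return selectOperatorName() + "%";
    }

    public static void main(String[] args) {
        System.out.println(hardcodedRule().equals(derivedRule())); // prints "true"
    }
}
```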

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3339) Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names

2012-09-12 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453814#comment-13453814
 ] 

Zhenxiao Luo commented on HIVE-3339:


@Namit:

Thanks a lot for the comments.

I just updated the patch, and review request at:
https://reviews.facebook.net/D5343




[jira] [Updated] (HIVE-3339) Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names

2012-09-12 Thread Zhenxiao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenxiao Luo updated HIVE-3339:
---

Attachment: HIVE-3339.2.patch.txt




newbie in hive dev - process help

2012-09-12 Thread Chalcy Raja
Hi hive dev Gurus,

I am a newbie to Hive dev, but I have been using Hive for about two years.

About a year ago I created a UDF that converts a map to a string, normalizing 
only the key part to lower or upper case; I needed it to convert old data in a 
map field where the keys could appear in different cases.

I would like to contribute it to the Hive codebase so that I do not have to 
maintain a custom build or register the UDF temporarily. It could also be a 
generally useful UDF to have.

I got the contribution steps from 
https://cwiki.apache.org/Hive/howtocontribute.html

Is anything else needed? Is this email enough, or do I have to open a JIRA?

Thanks,
Chalcy
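For reference, the core of such a map-key case-normalizing UDF might look like the sketch below. This is only the key-normalization logic shown standalone; a real Hive UDF would wrap it in a GenericUDF with ObjectInspectors, and the class and method names here are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Core logic of a map-key case-normalizing UDF: return a copy of the map
// with every key lower-cased, leaving values untouched.
public class MapKeyCase {
    static Map<String, String> lowerKeys(Map<String, String> in) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : in.entrySet()) {
            out.put(e.getKey().toLowerCase(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("Color", "red");
        m.put("SIZE", "xl");
        System.out.println(lowerKeys(m)); // keys become "color" and "size"
    }
}
```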

-Original Message-
From: Namit Jain (JIRA) [mailto:j...@apache.org] 
Sent: Wednesday, September 12, 2012 1:53 AM
To: hive-...@hadoop.apache.org
Subject: [jira] [Assigned] (HIVE-3141) Bug in SELECT query


 [ 
https://issues.apache.org/jira/browse/HIVE-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain reassigned HIVE-3141:


Assignee: Ajesh Kumar

> Bug in SELECT query
> ---
>
> Key: HIVE-3141
> URL: https://issues.apache.org/jira/browse/HIVE-3141
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.9.0
> Environment: OS: Ubuntu
> Hive version: hive-0.7.1-cdh3u2
> Hadoop : hadoop-0.20.2
>Reporter: ASK
>Assignee: Ajesh Kumar
>Priority: Minor
>  Labels: patch
> Attachments: HIVE-3141.2.patch.txt, 
> Hive_bug_3141_resolution.pdf, select_syntax.q, select_syntax.q.out
>
>
> When I try to execute select * followed by any alphanumeric character 
> from a table, the query misbehaves: it displays the result of a plain 
> select *. It does not happen when only numbers follow the *.




Hive-trunk-h0.21 - Build # 1664 - Still Failing

2012-09-12 Thread Apache Jenkins Server
Changes for Build #1638
[namit] HIVE-3393 get_json_object and json_tuple should use Jackson library
(Kevin Wilfong via namit)


Changes for Build #1639

Changes for Build #1640
[ecapriolo] HIVE-3068 Export table metadata as JSON on table drop (Andrew 
Chalfant via egc)


Changes for Build #1641

Changes for Build #1642
[hashutosh] HIVE-3338 : Archives broken for hadoop 1.0 (Vikram Dixit via 
Ashutosh Chauhan)


Changes for Build #1643

Changes for Build #1644

Changes for Build #1645
[cws] HIVE-3413. Fix pdk.PluginTest on hadoop23 (Zhenxiao Luo via cws)


Changes for Build #1646
[cws] HIVE-3056. Ability to bulk update location field in Db/Table/Partition 
records (Shreepadma Venugopalan via cws)

[cws] HIVE-3416 [jira] Fix 
TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when running Hive on 
hadoop23
(Zhenxiao Luo via Carl Steinbach)

Summary:
HIVE-3416: Fix TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when 
running Hive on hadoop23

TestAvroSerdeUtils determineSchemaCanReadSchemaFromHDFS is failing when running 
Hive on hadoop23:

$ant very-clean package -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23

$ant test -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23 -Dtestcase=TestAvroSerdeUtils

 
java.lang.NoClassDefFoundError: 
org/apache/hadoop/net/StaticMapping
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:534)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:489)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:360)
at 
org.apache.hadoop.hive.serde2.avro.TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS(TestAvroSerdeUtils.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.net.StaticMapping
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
... 25 more

  

Test Plan: EMPTY

Reviewers: JIRA

Differential Revision: https://reviews.facebook.net/D5025

[cws] HIVE-3424. Error by upgrading a Hive 0.7.0 database to 0.8.0 
(008-HIVE-2246.mysql.sql) (Alexander Alten-Lorenz via cws)

[cws] HIVE-3412. Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 
2.2.0-alpha (Zhenxiao Luo via cws)


Changes for Build #1647

Changes for Build #1648
[namit] HIVE-3429 Bucket map join involving table with more than 1 partition 
column causes 
FileNotFoundException (Kevin Wilfong via namit)


Changes for Build #1649
[hashutosh] HIVE-3075 : Improve HiveMetaStore logging (Travis Crawford via 
Ashutosh Chauhan)


Changes for Build #1650
[hashutosh] HIVE-3340 : shims unit test failures fails further test progress 
(Giridharan Kesavan via Ashutosh Chauhan)


Changes for Build #1651
[hashutosh] HIVE-3436 :  Difference in exception string from native method 
causes script_pipe.q to fail on windows (Thejas Nair via Ashutosh Chauhan)


Changes for Build #1652
[namit] HIVE-3306 SMBJoin/BucketMapJoin should be allowed only when join

Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #135

2012-09-12 Thread Apache Jenkins Server
See 


--
[...truncated 7697 lines...]
ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 


ivy-retrieve-hadoop-shim:
 [echo] Project: shims
 [echo] Building shims 0.23

build_shims:
 [echo] Project: shims
 [echo] Compiling 

 against hadoop 0.23.1 
(

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 

[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/0.23.1/hadoop-common-0.23.1.jar
 ... (1725kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-common;0.23.1!hadoop-common.jar (79ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-core/0.23.1/hadoop-mapreduce-client-core-0.23.1.jar
 ... (1314kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-mapreduce-client-core;0.23.1!hadoop-mapreduce-client-core.jar
 (43ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-archives/0.23.1/hadoop-archives-0.23.1.jar
 ... (20kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-archives;0.23.1!hadoop-archives.jar (31ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-hdfs/0.23.1/hadoop-hdfs-0.23.1.jar
 ... (1725kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-hdfs;0.23.1!hadoop-hdfs.jar (193ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-hdfs/0.23.1/hadoop-hdfs-0.23.1-tests.jar
 ... (1107kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-hdfs;0.23.1!hadoop-hdfs.jar(tests) (49ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/hsqldb/hsqldb/1.8.0.7/hsqldb-1.8.0.7.jar ... (628kB)
[ivy:resolve]   [SUCCESSFUL ] hsqldb#hsqldb;1.8.0.7!hsqldb.jar (25ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/avro/avro-ipc/1.5.3/avro-ipc-1.5.3.jar 
 ... (164kB)
[ivy:resolve]   [SUCCESSFUL ] org.apache.avro#avro-ipc;1.5.3!avro-ipc.jar (19ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-auth/0.23.1/hadoop-auth-0.23.1.jar
 ... (41kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-auth;0.23.1!hadoop-auth.jar (45ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-core-asl/1.7.1/jackson-core-asl-1.7.1.jar
 ... (202kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.codehaus.jackson#jackson-core-asl;1.7.1!jackson-core-asl.jar (21ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-jaxrs/1.7.1/jackson-jaxrs-1.7.1.jar
 ... (17kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.codehaus.jackson#jackson-jaxrs;1.7.1!jackson-jaxrs.jar (97ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/codehaus/jackson/jackson-xc/1.7.1/jackson-xc-1.7.1.jar
 ... (30kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.codehaus.jackson#jack

[jira] [Updated] (HIVE-3440) Fix pdk PluginTest failing on trunk-h0.21

2012-09-12 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3440:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed, thanks Zhenxiao Luo.

> Fix pdk PluginTest failing on trunk-h0.21
> -
>
> Key: HIVE-3440
> URL: https://issues.apache.org/jira/browse/HIVE-3440
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Fix For: 0.10.0
>
> Attachments: HIVE-3440.1.patch.txt
>
>
> The failure occurs when running on hadoop21, triggered directly from pdk 
> (when triggered from builtin, the pdk test passes).
> Here is the execution log:
> 2012-09-06 13:46:05,646 WARN  mapred.LocalJobRunner 
> (LocalJobRunner.java:run(256)) - job_local_0001
> java.lang.RuntimeException: Error in configuring object
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:354)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
> ... 5 more
> Caused by: java.lang.RuntimeException: Error in configuring object
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
> at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
> ... 10 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
> ... 13 more
> Caused by: java.lang.RuntimeException: Map operator initialization failed
> at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
> ... 18 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDTFJSONTuple.&lt;init&gt;(GenericUDTFJSONTuple.java:54)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113)
> at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerGenericUDTF(FunctionRegistry.java:545)
> at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerGenericUDTF(FunctionRegistry.java:539)
> at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.&lt;clinit&gt;(FunctionRegistry.java:472)
> at 
> org.apache.hadoop.hive.ql.exec.DefaultUDFMethodResolver.getEvalMethod(DefaultUDFMethodResolver.java:59)
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:154)
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:98)
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:137)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:898)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:924)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:358)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:434)
> at 
> org.apache.hadoop.hive.ql.

[jira] [Updated] (HIVE-3440) Fix pdk PluginTest failing on trunk-h0.21

2012-09-12 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3440:


Component/s: Tests


configuring log4j for unit tests

2012-09-12 Thread Yin Huai
Hello All,

When I run the Hive unit tests, I always hit an error saying the class
org.apache.hadoop.log.metrics.EventCounter cannot be found. It seems the
reason is that in Hadoop 0.20.2 the location of EventCounter is
org.apache.hadoop.metrics.jvm.EventCounter, so log4j should load the old
class.

The commands I used for the unit tests are:
ant very-clean package
ant test tar -logfile ant.log

I must have missed something. Can you let me know the typical commands you use
to run all the unit tests? Also, how can I change the log4j conf so that the
unit tests load org.apache.hadoop.metrics.jvm.EventCounter instead
of org.apache.hadoop.log.metrics.EventCounter?

Thanks,

Yin
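One way to point log4j at the old class is to override the EventCounter appender in the log4j properties file used by the tests. The property name below assumes the appender is registered as EventCounter, as in the default hive-log4j.properties; adjust it to match your file:

```properties
# Hypothetical override for Hadoop 0.20.2, where EventCounter lives in
# org.apache.hadoop.metrics.jvm rather than org.apache.hadoop.log.metrics
log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter
```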


[jira] [Created] (HIVE-3452) Missing column causes null pointer exception

2012-09-12 Thread Jean Xu (JIRA)
Jean Xu created HIVE-3452:
-

 Summary: Missing column causes null pointer exception
 Key: HIVE-3452
 URL: https://issues.apache.org/jira/browse/HIVE-3452
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Jean Xu
Priority: Minor


select * from src where src = 'alkdfaj';
FAILED: SemanticException null



Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #135

2012-09-12 Thread Apache Jenkins Server
See 

--
[...truncated 36564 lines...]
[junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2012-09-12_15-16-50_735_4483696848791747826/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Copying file: 

[junit] PREHOOK: query: load data local inpath 
'
 into table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] Copying data from 

[junit] Loading data to table default.testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 
'
 into table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: 
file:/tmp/hudson/hive_2012-09-12_15-16-55_692_2656129785381173442/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/hudson/hive_2012-09-12_15-16-55_692_2656129785381173442/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=
[junit] Hive history 
file=
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit

[jira] [Commented] (HIVE-3452) Missing column causes null pointer exception

2012-09-12 Thread Jean Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454441#comment-13454441
 ] 

Jean Xu commented on HIVE-3452:
---

https://reviews.facebook.net/D5361





[jira] [Updated] (HIVE-3453) Hive query persistence / auditing

2012-09-12 Thread Matt Goeke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Goeke updated HIVE-3453:
-

Component/s: Thrift API

> Hive query persistence / auditing
> -
>
> Key: HIVE-3453
> URL: https://issues.apache.org/jira/browse/HIVE-3453
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Logging, Thrift API
>Reporter: Matt Goeke
>Priority: Minor
>
> Currently our Hive warehouse is open to querying from any of our business 
> analysts and we pool them by user in the fair scheduler to prevent someone 
> from hogging cluster resources.  We are looking to start summarizing details 
> of their queries so that we can view common questions they ask in order to find 
> ways to optimize our tables / submission process. One thought was to patch 
> the Hive client / thrift server to write out the submitted queries to the DB 
> that our metastore is on and from there we can perform some simple analytics 
> to roll up a view of how they use the warehouse over time. This doesn't seem 
> like it would be too difficult of an effort as the needed infrastructure is 
> already in place but any suggestions or comments on this would be greatly 
> appreciated.
> I am leaving the implementation notes pretty blank as I would like to see 
> what others in the community who have more experience in this project would 
> recommend. 
> Additional information from a u...@hive.apache.org response:
> Hey Matt,
> We did something similar at Facebook to capture the information on who ran 
> what on the clusters and dumped that out to an audit db. Specifically we were 
> using Hive post-execution hooks to achieve that
> http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/ql/hooks/PostExecute.html
> this gets called from the hive cli mostly.
> I am not sure if the particular hook that we had implemented was contributed 
> back, but this could potentially be a cool contribution :)
> Ashish
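The hook pattern Ashish describes can be sketched as a self-contained toy. Hive's real PostExecute hook interface receives session state and read/write entities rather than a bare query string, and a production hook would insert into the audit DB rather than an in-memory list; the names below are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a post-execution audit hook: record each submitted query
// along with the submitting user, for later roll-up analytics.
public class AuditHookSketch {
    interface PostExecuteHook {
        void run(String user, String query);
    }

    // In production this would write to the audit DB next to the metastore;
    // here it appends to an in-memory log.
    static class QueryAuditHook implements PostExecuteHook {
        final List<String> auditLog = new ArrayList<>();
        public void run(String user, String query) {
            auditLog.add(user + "\t" + query);
        }
    }

    public static void main(String[] args) {
        QueryAuditHook hook = new QueryAuditHook();
        hook.run("analyst1", "SELECT count(*) FROM sales");
        System.out.println(hook.auditLog.size()); // prints "1"
    }
}
```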



[jira] [Created] (HIVE-3453) Hive query persistence / auditing

2012-09-12 Thread Matt Goeke (JIRA)
Matt Goeke created HIVE-3453:


 Summary: Hive query persistence / auditing
 Key: HIVE-3453
 URL: https://issues.apache.org/jira/browse/HIVE-3453
 Project: Hive
  Issue Type: Improvement
  Components: CLI, Logging
Reporter: Matt Goeke
Priority: Minor


Currently our Hive warehouse is open to querying from any of our business 
analysts and we pool them by user in the fair scheduler to prevent someone from 
hogging cluster resources.  We are looking to start summarizing details of 
their queries so that we can view common questions they ask in order to find ways 
to optimize our tables / submission process. One thought was to patch the Hive 
client / thrift server to write out the submitted queries to the DB that our 
metastore is on and from there we can perform some simple analytics to roll up 
a view of how they use the warehouse over time. This doesn't seem like it would 
be too difficult an effort, as the needed infrastructure is already in place, 
but any suggestions or comments on this would be greatly appreciated.

I am leaving the implementation notes pretty blank as I would like to see what 
others in the community who have more experience in this project would 
recommend. 

Additional information from a u...@hive.apache.org response:
Hey Matt,

We did something similar at Facebook to capture the information on who ran what 
on the clusters and dumped that out to an audit db. Specifically we were using 
Hive post-execution hooks to achieve that

http://hive.apache.org/docs/r0.7.0/api/org/apache/hadoop/hive/ql/hooks/PostExecute.html

this gets called from the hive cli mostly.

I am not sure if the particular hook that we had implemented was contributed 
back, but this could potentially be a cool contribution :)

Ashish




[jira] [Created] (HIVE-3454) Problem with CAST(BIGINT as TIMESTAMP)

2012-09-12 Thread Ryan Harris (JIRA)
Ryan Harris created HIVE-3454:
-

 Summary: Problem with CAST(BIGINT as TIMESTAMP)
 Key: HIVE-3454
 URL: https://issues.apache.org/jira/browse/HIVE-3454
 Project: Hive
  Issue Type: Bug
  Components: Types, UDF
Affects Versions: 0.9.0
Reporter: Ryan Harris


Ran into an issue while working with timestamp conversion.
CAST(unix_timestamp() as TIMESTAMP) should create a timestamp for the current 
time from the BIGINT returned by unix_timestamp().

Instead, however, a 1970-01-16 timestamp is returned.
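The 1970 result is consistent with the BIGINT being interpreted as milliseconds rather than seconds since the epoch. A quick check of that arithmetic (the epoch value below is hypothetical, chosen for 2012-09-12; the reporter's actual value is not given):

```python
from datetime import datetime, timezone

ts = 1347408000  # hypothetical unix_timestamp() value: 2012-09-12 00:00 UTC

# Interpreted as seconds since the epoch, the value maps to the expected date:
as_seconds = datetime.fromtimestamp(ts, tz=timezone.utc)

# Interpreted as milliseconds, the same value collapses to mid-January 1970,
# matching the 1970-01-16 timestamp reported above:
as_millis = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)

print(as_seconds.date())  # 2012-09-12
print(as_millis.date())   # 1970-01-16
```

Any plausible 2012-era epoch-seconds value, divided by 1000, lands in mid-January 1970, which is why the reported timestamp is 1970-01-16.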



[jira] [Created] (HIVE-3455) ANSI CORR(X,Y) is incorrect

2012-09-12 Thread Maxim Bolotin (JIRA)
Maxim Bolotin created HIVE-3455:
---

 Summary: ANSI CORR(X,Y) is incorrect
 Key: HIVE-3455
 URL: https://issues.apache.org/jira/browse/HIVE-3455
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.9.0, 0.8.0, 0.7.1
Reporter: Maxim Bolotin


A simple test with 2 collinear vectors returns a wrong result.
The problem is in the merge of variances; see the file:

http://svn.apache.org/viewvc/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCorrelation.java?revision=1157222&view=markup

lines:
347: myagg.xvar += xvarB + (xavgA-xavgB) * (xavgA-xavgB) * myagg.count;
348: myagg.yvar += yvarB + (yavgA-yavgB) * (yavgA-yavgB) * myagg.count;

the correct merge should be like this:
347: myagg.xvar += xvarB+(xavgA - xavgB)*(xavgA-xavgB)/myagg.count*nA*nB;
348: myagg.yvar += yvarB+(yavgA - yavgB)*(yavgA-yavgB)/myagg.count*nA*nB;
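The corrected merge can be sanity-checked numerically. Here xvar stands for the running sum of squared deviations held in the UDAF's partial state (not the normalized variance), and the merged value must match computing the same sum over the combined data. A minimal Python sketch of that check, with illustrative function names:

```python
def sq_dev_sum(xs):
    """Sum of squared deviations from the mean, i.e. the xvar partial state."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def merge(nA, avgA, varA, nB, avgB, varB):
    """Merge two partials with the corrected formula:
    var = varA + varB + (avgA - avgB)^2 * nA * nB / n"""
    n = nA + nB
    d = avgA - avgB
    return n, (avgA * nA + avgB * nB) / n, varA + varB + d * d * nA * nB / n

a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0, 7.0]
n, avg, var = merge(len(a), sum(a) / len(a), sq_dev_sum(a),
                    len(b), sum(b) / len(b), sq_dev_sum(b))
print(var, sq_dev_sum(a + b))  # both 28.0: merged partial matches direct computation
```

The buggy line multiplies the squared mean difference by the full merged count instead of nA*nB/n, which inflates the merged variance whenever both partials are non-empty.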





[jira] [Commented] (HIVE-3455) ANSI CORR(X,Y) is incorrect

2012-09-12 Thread Maxim Bolotin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454461#comment-13454461
 ] 

Maxim Bolotin commented on HIVE-3455:
-

284,285c282,283
< double xavgOld = myagg.xavg;
< double yavgOld = myagg.yavg;
---
> double deltaX = vx - myagg.xavg;
> double deltaY = vy - myagg.yavg;
287,288c285,286
< myagg.xavg += (vx - xavgOld) / myagg.count;
< myagg.yavg += (vy - yavgOld) / myagg.count;
---
> myagg.xavg += deltaX / ((double) myagg.count);
> myagg.yavg += deltaY / ((double) myagg.count);
290,292c288,290
< myagg.covar += (vx - xavgOld) * (vy - myagg.yavg);
< myagg.xvar += (vx - xavgOld) * (vx - myagg.xavg);
< myagg.yvar += (vy - yavgOld) * (vy - myagg.yavg);
---
> myagg.covar += deltaX * (vy - myagg.yavg);
> myagg.xvar += deltaX * (vx - myagg.xavg);
> myagg.yvar += deltaY * (vy - myagg.yavg);
345,350c343,351
<   myagg.xavg = (xavgA * nA + xavgB * nB) / myagg.count;
<   myagg.yavg = (yavgA * nA + yavgB * nB) / myagg.count;
<   myagg.xvar += xvarB + (xavgA - xavgB) * (xavgA - xavgB) * myagg.count;
<   myagg.yvar += yvarB + (yavgA - yavgB) * (yavgA - yavgB) * myagg.count;
<   myagg.covar +=
<   covarB + (xavgA - xavgB) * (yavgA - yavgB) * ((double) (nA * nB) / myagg.count);
---
>   double n=myagg.count;
>   double nn=nA*nB/n;
>   double dX = xavgA-xavgB;
>   double dY = yavgA-yavgB;
>   myagg.xavg = xavgA * nA / n + xavgB * nB / n;
>   myagg.yavg = yavgA * nA / n + yavgB * nB / n;
>   myagg.xvar  += xvarB  + dX * dX * nn ;
>   myagg.yvar  += yvarB  + dY * dY * nn ;
>   myagg.covar += covarB + dX * dY * nn;
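The first hunk of the diff above is the standard single-pass (Welford-style) update, computing each delta against the mean from before the current row. A small Python sketch of that update, run on the collinear test data from this issue (variable names are illustrative):

```python
from math import sqrt

def update(state, vx, vy):
    """One Welford-style step; deltas use the means from before this row."""
    n, xavg, yavg, xvar, yvar, covar = state
    n += 1
    dx, dy = vx - xavg, vy - yavg   # deltaX / deltaY in the patch
    xavg += dx / n
    yavg += dy / n
    xvar += dx * (vx - xavg)        # mixes old and new means on purpose
    yvar += dy * (vy - yavg)
    covar += dx * (vy - yavg)
    return n, xavg, yavg, xvar, yvar, covar

state = (0, 0.0, 0.0, 0.0, 0.0, 0.0)
for vx, vy in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    state = update(state, vx, vy)

n, xavg, yavg, xvar, yvar, covar = state
print(covar / sqrt(xvar * yvar))  # 1.0 for perfectly collinear input
```

With a correct update and merge, CORR over the (1,2), (2,4), (3,6) table used later in this thread comes out as exactly 1.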


> ANSI CORR(X,Y) is incorrect
> ---
>
> Key: HIVE-3455
> URL: https://issues.apache.org/jira/browse/HIVE-3455
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.7.1, 0.8.0, 0.9.0
>Reporter: Maxim Bolotin
>
> A simple test with 2 collinear vectors returns a wrong result.
> The problem is the merge of variances, file:
> http://svn.apache.org/viewvc/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCorrelation.java?revision=1157222&view=markup
> lines:
> 347: myagg.xvar += xvarB + (xavgA-xavgB) * (xavgA-xavgB) * myagg.count;
> 348: myagg.yvar += yvarB + (yavgA-yavgB) * (yavgA-yavgB) * myagg.count;
> the correct merge should be like this:
> 347: myagg.xvar += xvarB+(xavgA - xavgB)*(xavgA-xavgB)/myagg.count*nA*nB;
> 348: myagg.yvar += yvarB+(yavgA - yavgB)*(yavgA-yavgB)/myagg.count*nA*nB;



[jira] [Created] (HIVE-3456) No apparent way to change table-level COMMENT data

2012-09-12 Thread Ryan Harris (JIRA)
Ryan Harris created HIVE-3456:
-

 Summary: No apparent way to change table-level COMMENT data
 Key: HIVE-3456
 URL: https://issues.apache.org/jira/browse/HIVE-3456
 Project: Hive
  Issue Type: Bug
  Components: Documentation, Metastore
Affects Versions: 0.9.0, 0.8.1, 0.8.0, 0.7.1, 0.7.0
Reporter: Ryan Harris
Priority: Minor


Not sure if this is a documentation issue, or a feature that is lacking from 
ALTER TABLE...

Setting a COMMENT on a table during initial creation is straightforward.
Changing column names and column comments is also straightforward using ALTER 
TABLE CHANGE...
However, I have found no way, other than manually editing the metadata to 
set/change the table-level comment for a table after it has been created.



[jira] [Commented] (HIVE-3455) ANSI CORR(X,Y) is incorrect

2012-09-12 Thread Maxim Bolotin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454465#comment-13454465
 ] 

Maxim Bolotin commented on HIVE-3455:
-

Here is the test; the correct answer is 1.
hive> select * from max_corr ;
OK
1   2
2   4
3   6
Time taken: 2.304 seconds
hive> select corr(a,b) from max_corr ;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapred.reduce.tasks=
Starting Job = job_201207232322_2781, Tracking URL = 
http://*/jobdetails.jsp?jobid=job_201207232322_2781
Kill Command = /usr/lib/hadoop/bin/hadoop job  
-Dmapred.job.tracker=hdfs://* -kill job_201207232322_2781
2012-09-12 10:04:17,663 Stage-1 map = 0%,  reduce = 0%
2012-09-12 10:04:20,681 Stage-1 map = 100%,  reduce = 0%
2012-09-12 10:04:27,720 Stage-1 map = 100%,  reduce = 33%
2012-09-12 10:04:28,728 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201207232322_2781
OK
0.27586206896551724
Time taken: 15.088 seconds
hive> 


> ANSI CORR(X,Y) is incorrect
> ---
>
> Key: HIVE-3455
> URL: https://issues.apache.org/jira/browse/HIVE-3455
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.7.1, 0.8.0, 0.9.0
>Reporter: Maxim Bolotin
>
> A simple test with 2 collinear vectors returns a wrong result.
> The problem is the merge of variances, file:
> http://svn.apache.org/viewvc/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCorrelation.java?revision=1157222&view=markup
> lines:
> 347: myagg.xvar += xvarB + (xavgA-xavgB) * (xavgA-xavgB) * myagg.count;
> 348: myagg.yvar += yvarB + (yavgA-yavgB) * (yavgA-yavgB) * myagg.count;
> the correct merge should be like this:
> 347: myagg.xvar += xvarB+(xavgA - xavgB)*(xavgA-xavgB)/myagg.count*nA*nB;
> 348: myagg.yvar += yvarB+(yavgA - yavgB)*(yavgA-yavgB)/myagg.count*nA*nB;



[jira] [Commented] (HIVE-3440) Fix pdk PluginTest failing on trunk-h0.21

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454475#comment-13454475
 ] 

Hudson commented on HIVE-3440:
--

Integrated in Hive-trunk-h0.21 #1665 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1665/])
HIVE-3440. Fix pdk PluginTest failing on trunk-h0.21 (Zhenxiao Luo via 
kevinwilfong) (Revision 1384032)

 Result = SUCCESS
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1384032
Files : 
* /hive/trunk/ql/ivy.xml


> Fix pdk PluginTest failing on trunk-h0.21
> -
>
> Key: HIVE-3440
> URL: https://issues.apache.org/jira/browse/HIVE-3440
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Fix For: 0.10.0
>
> Attachments: HIVE-3440.1.patch.txt
>
>
> Got the failure when running on hadoop21, triggered directly from pdk (when 
> triggered from builtin, the pdk test passes).
> Here is the execution log:
> 2012-09-06 13:46:05,646 WARN  mapred.LocalJobRunner 
> (LocalJobRunner.java:run(256)) - job_local_0001
> java.lang.RuntimeException: Error in configuring object
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:354)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
> ... 5 more
> Caused by: java.lang.RuntimeException: Error in configuring object
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
> at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
> ... 10 more
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:616)
> at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
> ... 13 more
> Caused by: java.lang.RuntimeException: Map operator initialization failed
> at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
> ... 18 more
> Caused by: java.lang.NoClassDefFoundError: 
> org/codehaus/jackson/map/ObjectMapper
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDTFJSONTuple.(GenericUDTFJSONTuple.java:54)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
> at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113)
> at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerGenericUDTF(FunctionRegistry.java:545)
> at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerGenericUDTF(FunctionRegistry.java:539)
> at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.(FunctionRegistry.java:472)
> at 
> org.apache.hadoop.hive.ql.exec.DefaultUDFMethodResolver.getEvalMethod(DefaultUDFMethodResolver.java:59)
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:154)
> at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:98)
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:137)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:898)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:924

Hive-trunk-h0.21 - Build # 1665 - Fixed

2012-09-12 Thread Apache Jenkins Server
Changes for Build #1638
[namit] HIVE-3393 get_json_object and json_tuple should use Jackson library
(Kevin Wilfong via namit)


Changes for Build #1639

Changes for Build #1640
[ecapriolo] HIVE-3068 Export table metadata as JSON on table drop (Andrew 
Chalfant via egc)


Changes for Build #1641

Changes for Build #1642
[hashutosh] HIVE-3338 : Archives broken for hadoop 1.0 (Vikram Dixit via 
Ashutosh Chauhan)


Changes for Build #1643

Changes for Build #1644

Changes for Build #1645
[cws] HIVE-3413. Fix pdk.PluginTest on hadoop23 (Zhenxiao Luo via cws)


Changes for Build #1646
[cws] HIVE-3056. Ability to bulk update location field in Db/Table/Partition 
records (Shreepadma Venugopalan via cws)

[cws] HIVE-3416 [jira] Fix 
TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when running Hive on 
hadoop23
(Zhenxiao Luo via Carl Steinbach)

Summary:
HIVE-3416: Fix TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when 
running Hive on hadoop23

TestAvroSerdeUtils determineSchemaCanReadSchemaFromHDFS is failing when running 
hive on hadoop23:

$ ant very-clean package -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23

$ ant test -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23 -Dtestcase=TestAvroSerdeUtils

 
java.lang.NoClassDefFoundError: 
org/apache/hadoop/net/StaticMapping
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:534)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:489)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:360)
at 
org.apache.hadoop.hive.serde2.avro.TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS(TestAvroSerdeUtils.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.net.StaticMapping
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
... 25 more

  

Test Plan: EMPTY

Reviewers: JIRA

Differential Revision: https://reviews.facebook.net/D5025

[cws] HIVE-3424. Error by upgrading a Hive 0.7.0 database to 0.8.0 
(008-HIVE-2246.mysql.sql) (Alexander Alten-Lorenz via cws)

[cws] HIVE-3412. Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 
2.2.0-alpha (Zhenxiao Luo via cws)


Changes for Build #1647

Changes for Build #1648
[namit] HIVE-3429 Bucket map join involving table with more than 1 partition 
column causes 
FileNotFoundException (Kevin Wilfong via namit)


Changes for Build #1649
[hashutosh] HIVE-3075 : Improve HiveMetaStore logging (Travis Crawford via 
Ashutosh Chauhan)


Changes for Build #1650
[hashutosh] HIVE-3340 : shims unit test failures fails further test progress 
(Giridharan Kesavan via Ashutosh Chauhan)


Changes for Build #1651
[hashutosh] HIVE-3436 :  Difference in exception string from native method 
causes script_pipe.q to fail on windows (Thejas Nair via Ashutosh Chauhan)


Changes for Build #1652
[namit] HIVE-3306 SMBJoin/BucketMapJoin should be allowed only when join

[jira] [Updated] (HIVE-3437) 0.23 compatibility: fix unit tests when building against 0.23

2012-09-12 Thread Chris Drome (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Drome updated HIVE-3437:
--

Attachment: HIVE-3437-0.9.patch

Draft of a 0.9 branch patch that addresses most unit test failures when 
building against hadoop 0.23.

> 0.23 compatibility: fix unit tests when building against 0.23
> -
>
> Key: HIVE-3437
> URL: https://issues.apache.org/jira/browse/HIVE-3437
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0, 0.9.1
>Reporter: Chris Drome
>Assignee: Chris Drome
> Attachments: HIVE-3437-0.9.patch
>
>
> Many unit tests fail as a result of building the code against hadoop 0.23. 
> Initial focus will be to fix 0.9.



[jira] [Created] (HIVE-3457) 0.23 compatibility: queries which rely on metadata fail

2012-09-12 Thread Chris Drome (JIRA)
Chris Drome created HIVE-3457:
-

 Summary: 0.23 compatibility: queries which rely on metadata fail
 Key: HIVE-3457
 URL: https://issues.apache.org/jira/browse/HIVE-3457
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0, 0.10.0
Reporter: Chris Drome
Assignee: Chris Drome


groupby_neg_float.q and metadataonly1.q tests for TestCliDriver in ql fail when 
built against hadoop23.

The above two tests use an empty file/directory/partition to get a mapper to 
run against meta data (a scan of the table is not performed).
In hadoop20 this workaround worked, but as of hadoop22 splits are generated 
differently and are not created for empty files.
This results in 0 mappers being created, which causes the query to fail.



[jira] [Created] (HIVE-3458) Parallel test script doesn't run all tests

2012-09-12 Thread Sambavi Muthukrishnan (JIRA)
Sambavi Muthukrishnan created HIVE-3458:
---

 Summary: Parallel test script doesn't run all tests
 Key: HIVE-3458
 URL: https://issues.apache.org/jira/browse/HIVE-3458
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0
Reporter: Sambavi Muthukrishnan
Assignee: Sambavi Muthukrishnan
 Fix For: 0.10.0


The parallel test script, when run on a cluster of machines in multi-threaded 
mode, doesn't report back on all tests in the suite.



[jira] [Updated] (HIVE-3400) Add Retries to Hive MetaStore Connections

2012-09-12 Thread Bhushan Mandhani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhushan Mandhani updated HIVE-3400:
---

Status: Patch Available  (was: Open)

> Add Retries to Hive MetaStore Connections
> -
>
> Key: HIVE-3400
> URL: https://issues.apache.org/jira/browse/HIVE-3400
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Bhushan Mandhani
>Assignee: Bhushan Mandhani
>Priority: Minor
> Attachments: HIVE-3400.1.patch.txt
>
>
> Currently, when using Thrift to access the MetaStore, if the Thrift host 
> dies, there is no mechanism to reconnect to some other host even if the 
> MetaStore URIs variable in the Conf contains multiple hosts. Hive should 
> retry and reconnect rather than throwing a communication link error.



[jira] [Created] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true

2012-09-12 Thread Kevin Wilfong (JIRA)
Kevin Wilfong created HIVE-3459:
---

 Summary: Dynamic partition queries producing no partitions fail 
with hive.stats.reliable=true
 Key: HIVE-3459
 URL: https://issues.apache.org/jira/browse/HIVE-3459
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong


Dynamic partition inserts which result in no partitions (either because the 
input is empty or all input rows are filtered out) will fail because stats 
cannot be collected if hive.stats.reliable=true.



[jira] [Commented] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true

2012-09-12 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454567#comment-13454567
 ] 

Kevin Wilfong commented on HIVE-3459:
-

https://reviews.facebook.net/D5367

> Dynamic partition queries producing no partitions fail with 
> hive.stats.reliable=true
> 
>
> Key: HIVE-3459
> URL: https://issues.apache.org/jira/browse/HIVE-3459
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 0.10.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-3459.1.patch.txt
>
>
> Dynamic partition inserts which result in no partitions (either because the 
> input is empty or all input rows are filtered out) will fail because stats 
> cannot be collected if hive.stats.reliable=true.



[jira] [Updated] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true

2012-09-12 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3459:


Attachment: HIVE-3459.1.patch.txt

> Dynamic partition queries producing no partitions fail with 
> hive.stats.reliable=true
> 
>
> Key: HIVE-3459
> URL: https://issues.apache.org/jira/browse/HIVE-3459
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 0.10.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-3459.1.patch.txt
>
>
> Dynamic partition inserts which result in no partitions (either because the 
> input is empty or all input rows are filtered out) will fail because stats 
> cannot be collected if hive.stats.reliable=true.



[jira] [Updated] (HIVE-3459) Dynamic partition queries producing no partitions fail with hive.stats.reliable=true

2012-09-12 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3459:


Status: Patch Available  (was: Open)

> Dynamic partition queries producing no partitions fail with 
> hive.stats.reliable=true
> 
>
> Key: HIVE-3459
> URL: https://issues.apache.org/jira/browse/HIVE-3459
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 0.10.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-3459.1.patch.txt
>
>
> Dynamic partition inserts which result in no partitions (either because the 
> input is empty or all input rows are filtered out) will fail because stats 
> cannot be collected if hive.stats.reliable=true.



[jira] [Updated] (HIVE-3400) Add Retries to Hive MetaStore Connections

2012-09-12 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3400:
-

Status: Open  (was: Patch Available)

More comments on phabricator.

> Add Retries to Hive MetaStore Connections
> -
>
> Key: HIVE-3400
> URL: https://issues.apache.org/jira/browse/HIVE-3400
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Bhushan Mandhani
>Assignee: Bhushan Mandhani
>Priority: Minor
> Attachments: HIVE-3400.1.patch.txt
>
>
> Currently, when using Thrift to access the MetaStore, if the Thrift host 
> dies, there is no mechanism to reconnect to some other host even if the 
> MetaStore URIs variable in the Conf contains multiple hosts. Hive should 
> retry and reconnect rather than throwing a communication link error.



[jira] [Commented] (HIVE-3339) Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names

2012-09-12 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454620#comment-13454620
 ] 

Namit Jain commented on HIVE-3339:
--

+1

> Change the rules in SemanticAnalyzer to use Operator.getName() instead of 
> hardcoded names
> -
>
> Key: HIVE-3339
> URL: https://issues.apache.org/jira/browse/HIVE-3339
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>Assignee: Zhenxiao Luo
>Priority: Minor
> Attachments: HIVE-3339.1.patch.txt, HIVE-3339.2.patch.txt
>
>
> This should be done for code cleanup.
> Instead of the rule being:
> SEL%
> It should say SelectOperator.getName()%
> It would make the rules more readable.



[jira] [Updated] (HIVE-3339) Change the rules in SemanticAnalyzer to use Operator.getName() instead of hardcoded names

2012-09-12 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3339:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed. Thanks Zhenxiao

> Change the rules in SemanticAnalyzer to use Operator.getName() instead of 
> hardcoded names
> -
>
> Key: HIVE-3339
> URL: https://issues.apache.org/jira/browse/HIVE-3339
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>Assignee: Zhenxiao Luo
>Priority: Minor
> Attachments: HIVE-3339.1.patch.txt, HIVE-3339.2.patch.txt
>
>
> This should be done for code cleanup.
> Instead of the rule being:
> SEL%
> It should say SelectOperator.getName()%
> It would make the rules more readable.



[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table

2012-09-12 Thread Ajesh Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454625#comment-13454625
 ] 

Ajesh Kumar commented on HIVE-3392:
---

Hi Edward Capriolo, when we are dropping a table, why do we need to validate the 
class? We just need to drop the table even if the particular class (which was 
used while creating the table) is not available.
Let me know your comments.

> Hive unnecessarily validates table SerDes when dropping a table
> ---
>
> Key: HIVE-3392
> URL: https://issues.apache.org/jira/browse/HIVE-3392
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Jonathan Natkins
>Assignee: Ajesh Kumar
>  Labels: patch
> Attachments: HIVE-3392.2.patch.txt, HIVE-3392.Test Case - After 
> Patch.txt, HIVE-3392.Test Case - Before Patch.txt
>
>
> natty@hadoop1:~$ hive
> hive> add jar 
> /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
> Added 
> /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
>  to class path
> Added resource: 
> /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
> hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';  
>   
> OK
> Time taken: 2.399 seconds
> natty@hadoop1:~$ hive
> hive> drop table test;
>
> FAILED: Hive Internal Error: 
> java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
>  SerDe hive.serde.JSONSerDe does not exist))
> java.lang.RuntimeException: 
> MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe 
> hive.serde.JSONSerDe does not exist)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
>   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
>   at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
>   at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
>   at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
> SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
>   at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
>   ... 20 more
> hive> add jar 
> /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
> Added 
> /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
>  to class path
> Added resource: 
> /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
> hive> drop table test;
> OK
> Time taken: 0.658 seconds
> hive> 



[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table

2012-09-12 Thread Ajesh Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454626#comment-13454626
 ] 

Ajesh Kumar commented on HIVE-3392:
---

Test cases are attached:
1) HIVE-3392.Test Case - Before Patch.txt
2) HIVE-3392.Test Case - After Patch.txt

> Hive unnecessarily validates table SerDes when dropping a table
> ---
>
> Key: HIVE-3392
> URL: https://issues.apache.org/jira/browse/HIVE-3392
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Jonathan Natkins
>Assignee: Ajesh Kumar
>  Labels: patch
> Attachments: HIVE-3392.2.patch.txt, HIVE-3392.Test Case - After 
> Patch.txt, HIVE-3392.Test Case - Before Patch.txt
>
>



[jira] [Commented] (HIVE-3452) Missing column causes null pointer exception

2012-09-12 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454628#comment-13454628
 ] 

Namit Jain commented on HIVE-3452:
--

Comments on phabricator.

[~jeanxu], please click on 'submit patch' if you want others to review your 
patch.

> Missing column causes null pointer exception
> 
>
> Key: HIVE-3452
> URL: https://issues.apache.org/jira/browse/HIVE-3452
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jean Xu
>Priority: Minor
>
> select * from src where src = 'alkdfaj';
> FAILED: SemanticException null
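The failure mode above can be sketched as follows (Python, names illustrative, not Hive's code): when a WHERE-clause identifier matches no column, the lookup returns null and surfaces as the unhelpful "SemanticException null"; a fix would raise a descriptive error at the point of resolution instead.

```python
def resolve_column(name, columns):
    info = columns.get(name)  # None for a missing column, like a failed lookup
    if info is None:
        # Descriptive error instead of letting None propagate as an NPE.
        raise ValueError("Invalid table alias or column reference '%s'" % name)
    return info

cols = {"key": "string", "value": "string"}
try:
    resolve_column("src", cols)  # 'src' is the table name, not a column
except ValueError as e:
    print(e)
```
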



[jira] [Assigned] (HIVE-3452) Missing column causes null pointer exception

2012-09-12 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain reassigned HIVE-3452:


Assignee: Jean Xu

> Missing column causes null pointer exception
> 
>
> Key: HIVE-3452
> URL: https://issues.apache.org/jira/browse/HIVE-3452
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Jean Xu
>Assignee: Jean Xu
>Priority: Minor
>
> select * from src where src = 'alkdfaj';
> FAILED: SemanticException null



[jira] [Updated] (HIVE-3458) Parallel test script doesn't run all tests

2012-09-12 Thread Ivan Gorbachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Gorbachev updated HIVE-3458:
-

Assignee: Ivan Gorbachev  (was: Sambavi Muthukrishnan)

> Parallel test script doesn't run all tests
> -
>
> Key: HIVE-3458
> URL: https://issues.apache.org/jira/browse/HIVE-3458
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.9.0
>Reporter: Sambavi Muthukrishnan
>Assignee: Ivan Gorbachev
> Fix For: 0.10.0
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The parallel test script, when run on a cluster of machines in multi-threaded 
> mode, doesn't report back on all tests in the suite.



[jira] [Assigned] (HIVE-3422) Support partial partition specifications when enabling/disabling protections in Hive

2012-09-12 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain reassigned HIVE-3422:


Assignee: Jean Xu

> Support partial partition specifications when enabling/disabling 
> protections in Hive
> ---
>
> Key: HIVE-3422
> URL: https://issues.apache.org/jira/browse/HIVE-3422
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Jean Xu
>Assignee: Jean Xu
>Priority: Minor
>
> Currently if you have a table t with partition columns c1 and c2 the 
> following command works:
> ALTER TABLE t PARTITION (c1 = 'x', c2 = 'y') ENABLE NO_DROP;
> The following does not:
> ALTER TABLE t PARTITION (c1 = 'x') ENABLE NO_DROP;
> We would like all existing partitions for which c1 = 'x' to have NO_DROP 
> enabled when a user runs the above command.
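The requested expansion can be modeled in a few lines (Python, purely illustrative, not Hive's metastore code): a partial spec such as `{c1: 'x'}` should select every existing partition whose values agree on the specified columns, and NO_DROP would then be applied to each match.

```python
def matches(partial_spec, partition):
    # A partition matches when it agrees on every column the spec mentions;
    # unmentioned columns (here c2) are left unconstrained.
    return all(partition.get(col) == val for col, val in partial_spec.items())

partitions = [
    {"c1": "x", "c2": "y"},
    {"c1": "x", "c2": "z"},
    {"c1": "w", "c2": "y"},
]
selected = [p for p in partitions if matches({"c1": "x"}, p)]
print(len(selected))  # 2: both c1='x' partitions would get NO_DROP
```

The full spec `{c1: 'x', c2: 'y'}` remains a special case of the same predicate, matching exactly one partition.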



[jira] [Commented] (HIVE-3422) Support partial partition specifications when enabling/disabling protections in Hive

2012-09-12 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454638#comment-13454638
 ] 

Namit Jain commented on HIVE-3422:
--

Comments in phabricator.


> Support partial partition specifications when enabling/disabling 
> protections in Hive
> ---
>
> Key: HIVE-3422
> URL: https://issues.apache.org/jira/browse/HIVE-3422
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Jean Xu
>Assignee: Jean Xu
>Priority: Minor
>



[jira] [Commented] (HIVE-2957) Hive JDBC doesn't support TIMESTAMP column

2012-09-12 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454643#comment-13454643
 ] 

Carl Steinbach commented on HIVE-2957:
--

Thanks for making the changes Richard.

+1. Will commit if tests pass.

> Hive JDBC doesn't support TIMESTAMP column
> --
>
> Key: HIVE-2957
> URL: https://issues.apache.org/jira/browse/HIVE-2957
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.1, 0.9.0
>Reporter: Bharath Ganesh
>Assignee: Richard Ding
>Priority: Minor
> Attachments: HIVE-2957.2.patch.txt, HIVE-2957.3.patch, 
> HIVE-2957.4.patch, HIVE-2957.patch, JDBC_ResultSet_Conversion_Chart.png
>
>
> Steps to replicate:
> 1. Create a table with at least one column of type TIMESTAMP
> 2. Do a DatabaseMetaData.getColumns () such that this TIMESTAMP column is 
> part of the resultset.
> 3. When you iterate over the TIMESTAMP column it would fail, throwing the 
> below exception:
> Exception in thread "main" java.sql.SQLException: Unrecognized column type: 
> timestamp
>   at org.apache.hadoop.hive.jdbc.Utils.hiveTypeToSqlType(Utils.java:56)
>   at org.apache.hadoop.hive.jdbc.JdbcColumn.getSqlType(JdbcColumn.java:62)
>   at 
> org.apache.hadoop.hive.jdbc.HiveDatabaseMetaData$2.next(HiveDatabaseMetaData.java:244)
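The gap behind that exception can be sketched as a missing dictionary entry (Python model for illustration; the numeric values are the standard java.sql.Types constants, but the table itself is an assumption, not Hive's code): hiveTypeToSqlType simply had no "timestamp" case.

```python
JAVA_SQL_TYPES = {
    "int": 4,         # java.sql.Types.INTEGER
    "string": 12,     # java.sql.Types.VARCHAR
    "timestamp": 93,  # java.sql.Types.TIMESTAMP -- the previously missing entry
}

def hive_type_to_sql_type(hive_type):
    try:
        return JAVA_SQL_TYPES[hive_type.lower()]
    except KeyError:
        # Matches the failure mode quoted above for unmapped types.
        raise ValueError("Unrecognized column type: %s" % hive_type)

print(hive_type_to_sql_type("TIMESTAMP"))  # 93
```
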



[jira] [Commented] (HIVE-3438) Add tests for 'm' big tables sort-merge join with 'n' small tables where both m,n>1

2012-09-12 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454664#comment-13454664
 ] 

Kevin Wilfong commented on HIVE-3438:
-

+1 will run tests

> Add tests for 'm' big tables sort-merge join with 'n' small tables where both 
> m,n>1
> ---
>
> Key: HIVE-3438
> URL: https://issues.apache.org/jira/browse/HIVE-3438
> Project: Hive
>  Issue Type: Test
>  Components: Tests
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.3438.1.patch
>
>
> Once https://issues.apache.org/jira/browse/HIVE-3171 is in, it would be good 
> to add more tests which test the above condition.



[jira] [Assigned] (HIVE-3451) map-reduce jobs do not work for a partition containing sub-directories

2012-09-12 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain reassigned HIVE-3451:


Assignee: Gang Tim Liu

> map-reduce jobs do not work for a partition containing sub-directories
> 
>
> Key: HIVE-3451
> URL: https://issues.apache.org/jira/browse/HIVE-3451
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Gang Tim Liu
>
> Consider the following test:
> -- The test verifies that sub-directories are supported for versions of hadoop
> -- where MAPREDUCE-1501 is fixed. So, enable this test only for hadoop 23.
> -- INCLUDE_HADOOP_MAJOR_VERSIONS(0.23)
> CREATE TABLE fact_daily(x int) PARTITIONED BY (ds STRING);
> CREATE TABLE fact_tz(x int) PARTITIONED BY (ds STRING, hr STRING) 
> LOCATION 'pfile:${system:test.tmp.dir}/fact_tz';
> INSERT OVERWRITE TABLE fact_tz PARTITION (ds='1', hr='1')
> SELECT key+11 FROM src WHERE key=484;
> ALTER TABLE fact_daily SET TBLPROPERTIES('EXTERNAL'='TRUE');
> ALTER TABLE fact_daily ADD PARTITION (ds='1')
> LOCATION 'pfile:${system:test.tmp.dir}/fact_tz/ds=1';
> set mapred.input.dir.recursive=true;
> SELECT * FROM fact_daily WHERE ds='1';
> SELECT count(1) FROM fact_daily WHERE ds='1';
> Say the above file was named recursive_dir.q, and we ran the test for
> hadoop 23 by executing:
> ant test -Dhadoop.mr.rev=23 -Dtest.print.classpath=true 
> -Dhadoop.version=2.0.0-alpha -Dhadoop.security.version=2.0.0-alpha 
> -Dtestcase=TestCliDriver -Dqfile=recursive_dir.q
> The select * from the table works fine, but the last command does not work,
> since it requires a map-reduce job.
> This will prevent other features which create sub-directories from adding
> any tests that require a map-reduce job. The work-around is to issue
> queries which do not require map-reduce jobs.
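The difference the `mapred.input.dir.recursive=true` setting makes can be sketched as follows (plain Python with made-up paths, not Hadoop's implementation): before MAPREDUCE-1501, encountering a sub-directory while listing the input fails; with recursive listing enabled, the listing descends into it.

```python
import os
import tempfile

def list_input_files(root, recursive):
    files = []
    for entry in sorted(os.listdir(root)):
        path = os.path.join(root, entry)
        if os.path.isdir(path):
            if not recursive:
                raise IOError("Not a file: " + path)  # pre-fix failure mode
            files.extend(list_input_files(path, recursive))
        else:
            files.append(path)
    return files

# Mimic a partition directory that itself contains a sub-directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "ds=1", "hr=1"))
open(os.path.join(root, "ds=1", "hr=1", "part-00000"), "w").close()
print(len(list_input_files(root, recursive=True)))  # 1
```
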



[jira] [Assigned] (HIVE-3152) Disallow certain character patterns in partition names

2012-09-12 Thread Sambavi Muthukrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sambavi Muthukrishnan reassigned HIVE-3152:
---

Assignee: Ivan Gorbachev  (was: Andrew Poland)

> Disallow certain character patterns in partition names
> --
>
> Key: HIVE-3152
> URL: https://issues.apache.org/jira/browse/HIVE-3152
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Andrew Poland
>Assignee: Ivan Gorbachev
>Priority: Minor
>  Labels: api-addition, configuration-addition
> Attachments: unicode.patch
>
>
> New event listener to allow the metastore to reject a partition name if it 
> contains undesired character patterns such as unicode and commas. The match 
> pattern is implemented as a regular expression.
> Modifies append_partition to call a new MetaStorePreEventListener 
> implementation, PreAppendPartitionEvent.
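The listener's core check can be sketched like this (Python for illustration; the pattern and message are assumptions, since the real pattern would come from metastore configuration): reject a candidate partition name when it matches the configured regular expression.

```python
import re

# Hypothetical configured pattern: commas or any non-ASCII character.
FORBIDDEN = re.compile(r"[,\u0080-\uffff]")

def validate_partition_name(name):
    if FORBIDDEN.search(name):
        raise ValueError(
            "Partition name '%s' contains a forbidden character" % name)
    return name

print(validate_partition_name("ds=2012-09-12"))  # accepted
```

Using a regex keeps the policy configurable per deployment rather than baked into the metastore code.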
