Re: [DISCUSS] Switch to log4j 2

2014-08-15 Thread Steve Loughran
Moving to SLF4J as an API is an independent step: it's just a better API for
logging than commons-logging, it was already a dependency, and it doesn't
force anyone to switch to a new logging back end.
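The API point is easiest to see with SLF4J's parameterized messages: instead of building the log string eagerly ("Replicated " + n + " blocks"), the caller passes a template and arguments, and the message is only formatted if the level is enabled. The toy `format` helper below is an invented sketch of the `{}` substitution, not SLF4J's actual implementation:

```java
// Toy illustration of SLF4J-style "{}" placeholder substitution.
// Unlike eager string concatenation, the message is only assembled
// when the log level is actually enabled.
public class PlaceholderDemo {
    // Substitute each "{}" in the template with the next argument.
    static String format(String template, Object... args) {
        StringBuilder sb = new StringBuilder();
        int cursor = 0;
        for (Object arg : args) {
            int idx = template.indexOf("{}", cursor);
            if (idx < 0) break;
            sb.append(template, cursor, idx).append(arg);
            cursor = idx + 2;
        }
        sb.append(template.substring(cursor));
        return sb.toString();
    }

    public static void main(String[] args) {
        // With SLF4J itself this would be:
        //   LOG.info("Replicated {} blocks to {}", 42, "dn1");
        System.out.println(format("Replicated {} blocks to {}", 42, "dn1"));
    }
}
```

With real SLF4J, the formatting work is skipped entirely when the level is disabled, which is why parameterized calls don't need the `if (LOG.isDebugEnabled())` guards commons-logging code tends to accumulate.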


On 15 August 2014 03:34, Tsuyoshi OZAWA  wrote:

> Hi,
>
> Steve has started discussion titled "use SLF4J APIs in new modules?"
> as a related topic.
>
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201404.mbox/%3cca+4kjvv_9cmmtdqzcgzy-chslyb1wkgdunxs7wrheslwbuh...@mail.gmail.com%3E
>
> It sounds good to me to use asynchronous logging when we log at INFO. One
> concern is that asynchronous logging makes debugging difficult - I
> don't know log4j 2 well, but I suspect that the ordering of log messages
> can change even if WARN or FATAL are logged with a synchronous logger.
>
> Thanks,
> - Tsuyoshi
>
> On Thu, Aug 14, 2014 at 6:44 AM, Arpit Agarwal 
> wrote:
> > I don't recall whether this was discussed before.
> >
> > I often find our INFO logging to be too sparse for useful diagnosis. A
> > high performance logging framework will encourage us to log more.
> > Specifically, Asynchronous Loggers look interesting.
> > https://logging.apache.org/log4j/2.x/manual/async.html#Performance
> >
> > What does the community think of switching to log4j 2 in a Hadoop 2.x
> > release?
> >
> > --
> > CONFIDENTIALITY NOTICE
> > NOTICE: This message is intended for the use of the individual or entity
> to
> > which it is addressed and may contain information that is confidential,
> > privileged and exempt from disclosure under applicable law. If the reader
> > of this message is not the intended recipient, you are hereby notified
> that
> > any printing, copying, dissemination, distribution, disclosure or
> > forwarding of this communication is strictly prohibited. If you have
> > received this communication in error, please contact the sender
> immediately
> > and delete it from your system. Thank You.
>



Hadoop-Hdfs-trunk - Build # 1838 - Still Failing

2014-08-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1838/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 13795 lines...]
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  03:35 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  2.306 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:35 h
[INFO] Finished at: 2014-08-15T15:09:45+00:00
[INFO] Final Memory: 73M/536M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating MAPREDUCE-5878
Updating YARN-2197
Updating MAPREDUCE-4791
Updating HADOOP-10770
Updating MAPREDUCE-883
Updating MAPREDUCE-5999
Updating MAPREDUCE-5998
Updating HADOOP-10121
Updating YARN-2378
Updating HDFS-6850
Updating MAPREDUCE-5950
Updating HADOOP-10231
Updating HADOOP-10967
Updating HADOOP-10964
Updating YARN-2397
Updating MAPREDUCE-5906
Updating YARN-1918
Updating MAPREDUCE-6010
Updating MAPREDUCE-5597
Updating MAPREDUCE-5363
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #1838

2014-08-15 Thread Apache Jenkins Server
See 

Changes:

[jianhe] YARN-2378. Added support for moving applications across queues in 
CapacityScheduler. Contributed by Subramaniam Venkatraman Krishnan

[wang] Move HADOOP-10121 to correct section of CHANGES.txt

[wang] HADOOP-10964. Small fix for NetworkTopologyWithNodeGroup#sortByDistance. 
Contributed by Yi Liu.

[tucu] HADOOP-10967. Improve DefaultCryptoExtension#generateEncryptedKey 
performance. (hitliuyi via tucu)

[tucu] HADOOP-10770. KMS add delegation token support. (tucu)

[atm] HDFS-6850. Move NFS out of order write unit tests into TestWrites class. 
Contributed by Zhe Zhang.

[jlowe] MAPREDUCE-5878. some standard JDK APIs are not part of system classes 
defaults. Contributed by Sangjin Lee

[aw] YARN-1918. Typo in description and error message for 
yarn.resourcemanager.cluster-id (Anandha L Ranganathan via aw)

[aw] YARN-2197. Add a link to YARN CHANGES.txt in the left side of doc (Akira 
AJISAKA via aw)

[aw] HADOOP-10231. Add some components in Native Libraries document (Akira 
AJISAKA via aw)

[zjshen] YARN-2397. Avoided loading two authentication filters for RM and TS 
web interfaces. Contributed by Varun Vasudev.

[aw] HADOOP-10121. Fix javadoc spelling for HadoopArchives#writeTopLevelDirs 
(Akira AJISAKA via aw)

[aw] MAPREDUCE-5906. Inconsistent configuration in property 
"mapreduce.reduce.shuffle.input.buffer.percent" (Akira AJISAKA via aw)

[aw] MAPREDUCE-5999. Fix dead link in InputFormat javadoc (Akira AJISAKA via aw)

[aw] MAPREDUCE-5998. CompositeInputFormat javadoc is broken (Akira AJISAKA via 
aw)

[aw] MAPREDUCE-5950. incorrect description in distcp2 document (Akira AJISAKA 
via aw)

[aw] MAPREDUCE-5597. Missing alternatives in javadocs for deprecated 
constructors in mapreduce.Job (Akira AJISAKA via aw)

[jlowe] MAPREDUCE-6010. HistoryServerFileSystemStateStore fails to update 
tokens. Contributed by Jason Lowe

[aw] MAPREDUCE-5363. Fix doc and spelling for TaskCompletionEvent#getTaskStatus 
and getStatus (Akira AJISAKA via aw)

[aw] MAPREDUCE-4791. Javadoc for KeyValueTextInputFormat should include default 
separator and how to change it (Akira AJISAKA via aw)

[aw] MAPREDUCE-883. harchive: Document how to unarchive (Akira AJISAKA and Koji 
Noguchi via aw)

--
[...truncated 13602 lines...]
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.557 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Running org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec - in 
org.apache.hadoop.hdfs.protocol.TestExtendedBlock
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec - in 
org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.084 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.169 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.372 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.838 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.296 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestReadWhileWriting
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.071 sec - in 
org.apache.hadoop.hdfs.TestReadWhileWriting
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.35 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestDefaultNameNodePort
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.927 sec - in 
org.apache.hadoop.hdfs.TestDefaultNameNodePort
Running org.apache.hadoop.hdfs.TestFSInputChecker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.648 sec - in 
org.apache.hadoop.hdfs.TestFSInputChecker

Re: [DISCUSS] Switch to log4j 2

2014-08-15 Thread Aaron T. Myers
Not necessarily opposed to switching logging frameworks, but I believe we
can actually support async logging with today's logging system if we wanted
to, e.g. as was done for the HDFS audit logger in this JIRA:

https://issues.apache.org/jira/browse/HDFS-5241

--
Aaron T. Myers
Software Engineer, Cloudera
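As I understand it, the HDFS-5241 approach keeps the existing framework and wraps the audit logger so that callers only enqueue a message while a background thread does the slow write (via log4j's AsyncAppender). A minimal, framework-free sketch of that pattern (all class and method names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of an asynchronous logger: callers enqueue, a single
// background thread drains the queue and performs the (slow) write.
public class AsyncLogSketch {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(8192);
    private final List<String> sink = new ArrayList<>();   // stands in for file I/O
    private final Thread writer;
    private volatile boolean running = true;

    public AsyncLogSketch() {
        writer = new Thread(() -> {
            try {
                // Keep draining until asked to stop AND the queue is empty.
                while (running || !queue.isEmpty()) {
                    String msg = queue.poll(10, TimeUnit.MILLISECONDS);
                    if (msg != null) {
                        synchronized (sink) { sink.add(msg); }  // the "slow" append
                    }
                }
            } catch (InterruptedException ignored) { }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Fast path for callers: never blocks on I/O, only on a full queue.
    public void info(String msg) throws InterruptedException {
        queue.put(msg);
    }

    // Flush remaining messages, stop the writer thread, return what was written.
    public List<String> close() throws InterruptedException {
        running = false;
        writer.join();
        return sink;
    }
}
```

Note that with a single FIFO queue and one writer thread, per-logger message order is preserved, which speaks to Tsuyoshi's reordering concern; reordering is mainly a risk when some messages bypass the queue, e.g. a synchronous WARN interleaved with queued INFOs going to the same appender.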


On Fri, Aug 15, 2014 at 5:44 AM, Steve Loughran 
wrote:

> Moving to SLF4J as an API is an independent step: it's just a better API for
> logging than commons-logging, it was already a dependency, and it doesn't
> force anyone to switch to a new logging back end.
>
>
> On 15 August 2014 03:34, Tsuyoshi OZAWA  wrote:
>
> > Hi,
> >
> > Steve has started discussion titled "use SLF4J APIs in new modules?"
> > as a related topic.
> >
> >
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201404.mbox/%3cca+4kjvv_9cmmtdqzcgzy-chslyb1wkgdunxs7wrheslwbuh...@mail.gmail.com%3E
> >
> > It sounds good to me to use asynchronous logging when we log at INFO. One
> > concern is that asynchronous logging makes debugging difficult - I
> > don't know log4j 2 well, but I suspect that the ordering of log messages
> > can change even if WARN or FATAL are logged with a synchronous logger.
> >
> > Thanks,
> > - Tsuyoshi
> >
> > On Thu, Aug 14, 2014 at 6:44 AM, Arpit Agarwal  >
> > wrote:
> > > I don't recall whether this was discussed before.
> > >
> > > I often find our INFO logging to be too sparse for useful diagnosis. A
> > > high performance logging framework will encourage us to log more.
> > > Specifically, Asynchronous Loggers look interesting.
> > > https://logging.apache.org/log4j/2.x/manual/async.html#Performance
> > >
> > > What does the community think of switching to log4j 2 in a Hadoop 2.x
> > > release?
> > >


Re: [DISCUSS] Switch to log4j 2

2014-08-15 Thread Karthik Kambatla
Using asynchronous loggers for improved performance sounds reasonable.
However, IMO we already log too much at INFO level (particularly YARN).
Logging more at DEBUG level and lowering the overhead of enabling DEBUG
logging is preferable.

One concern is the defaults. Based on what I read on the log4j 2 page
shared, we might want to keep our audit logging synchronous and make all
other logging asynchronous. Is there a way to easily configure it this way?
If not, what is the dev cost we are looking at?
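For the configuration question: the log4j 2 manual describes mixing synchronous and asynchronous loggers in the same configuration, which appears to cover exactly this split. A rough, untested sketch (the appender setup and logger names below are placeholders, not an actual Hadoop config):

```xml
<Configuration>
  <Appenders>
    <File name="app" fileName="hadoop.log"/>
    <File name="audit" fileName="audit.log"/>
  </Appenders>
  <Loggers>
    <!-- bulk INFO/DEBUG logging: asynchronous -->
    <AsyncLogger name="org.apache.hadoop" level="info" additivity="false">
      <AppenderRef ref="app"/>
    </AsyncLogger>
    <!-- audit logging: stays synchronous -->
    <Logger name="org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit"
            level="info" additivity="false">
      <AppenderRef ref="audit"/>
    </Logger>
    <Root level="info">
      <AppenderRef ref="app"/>
    </Root>
  </Loggers>
</Configuration>
```

Per the log4j 2 documentation, `<AsyncLogger>` elements require the LMAX Disruptor library on the classpath, so that would be an added dependency.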



On Wed, Aug 13, 2014 at 2:44 PM, Arpit Agarwal 
wrote:

> I don't recall whether this was discussed before.
>
> I often find our INFO logging to be too sparse for useful diagnosis. A high
> performance logging framework will encourage us to log more. Specifically,
> Asynchronous Loggers look interesting.
> https://logging.apache.org/log4j/2.x/manual/async.html#Performance
>
> What does the community think of switching to log4j 2 in a Hadoop 2.x
> release?
>


Re: [VOTE] Merge fs-encryption branch to trunk

2014-08-15 Thread Andrew Wang
With 4 binding +1s, 3 non-binding +1s, and no -1s, the vote passes. Thanks
to everyone who gave feedback at this stage, particularly Sanjay and Suresh.
I should add that this vote will run for the standard 7 days for a
non-release vote, so will close at 12PM Pacific on August 15th.


On Fri, Aug 8, 2014 at 11:45 AM, Andrew Wang 
wrote:

> Hi all,
>
> I'd like to call a vote to merge the fs-encryption branch to trunk.
> Development of this feature has been ongoing since March on HDFS-6134 and
> HADOOP-10150, totaling approximately 50 commits.
>
> The fs-encryption branch introduces support for transparent, end-to-end
> encryption within an "encryption zone". Each file stored within an
> encryption zone is automatically encrypted and decrypted with a unique key.
> These per-file keys are encrypted with an encryption key only accessible by
> the client, ensuring that only the client is able to decrypt sensitive
> data. Furthermore, there is support for native, hardware-accelerated AES
> encryption. For further details, please see the design doc on HDFS-6134.
>
> In terms of merge readiness, we've posted some successful consolidated
> patches to the JIRA for Jenkins runs. distcp and fs -cp support has also
> recently been completed, allowing users to securely copy encrypted files
> without first decrypting them. There is ongoing work to add support for
> WebHDFS, HttpFS, and other alternative access methods. Stephen Chu has also
> posted a test plan, and has already identified a few issues that have been
> fixed.
>
> Design and development of this feature was also a cross-company effort
> with many different contributors.
>
> I'd like to thank Charles Lamb, Yi Liu, Uma Maheswara Rao G, Colin McCabe,
> and Juan Yu for their code contributions and reviews. Alejandro Abdelnur
> was also instrumental, doing a lot of the design work as well as
> writing most of the Hadoop Key Management Server (KMS). Finally, I'd like to
> thank everyone who gave feedback on the JIRAs. This includes Owen, Sanjay,
> Larry, Mike Y, ATM, Todd, Nicholas, and Andy, among others.
>
> With that, here's my +1 to merge this to trunk.
>
> Thanks,
> Andrew
>
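The per-file key scheme described above is essentially envelope encryption: the file is encrypted with a data encryption key (DEK), and the DEK is itself encrypted ("wrapped") with a key the client controls, so the storage system never holds usable key material. The following is only an illustrative sketch using the JDK's own crypto APIs, not HDFS's actual implementation (which involves the KMS, encryption-zone keys, and AES-CTR with optional hardware acceleration):

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Envelope encryption sketch: a fresh per-file DEK encrypts the data,
// and a long-lived client-side key encrypts ("wraps") the DEK.
public class EnvelopeDemo {

    // Encrypt data with a fresh DEK, wrap the DEK with the master key,
    // then unwrap and decrypt, returning the recovered plaintext.
    static byte[] roundTrip(byte[] data) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey masterKey = gen.generateKey(); // key only the client holds
        SecretKey dek = gen.generateKey();       // unique per-file key

        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding"); // ECB only for brevity
        c.init(Cipher.ENCRYPT_MODE, dek);
        byte[] cipherText = c.doFinal(data);

        // Wrap the DEK with the master key; only the wrapped DEK would
        // be stored alongside the file.
        c.init(Cipher.WRAP_MODE, masterKey);
        byte[] wrappedDek = c.wrap(dek);

        // Reader side: unwrap the DEK, then decrypt the data.
        c.init(Cipher.UNWRAP_MODE, masterKey);
        SecretKey unwrapped = (SecretKey) c.unwrap(wrappedDek, "AES", Cipher.SECRET_KEY);
        c.init(Cipher.DECRYPT_MODE, unwrapped);
        return c.doFinal(cipherText);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "sensitive file contents".getBytes("UTF-8");
        System.out.println(Arrays.equals(roundTrip(data), data)); // prints true
    }
}
```

The point of the wrap step is that without the client-side key, the stored wrapped DEK is useless, which is what makes the encryption end-to-end rather than at-rest only.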


Re: [VOTE] Merge fs-encryption branch to trunk

2014-08-15 Thread sanjay Radia


+1 (binding)
We have made some great progress in the last few days on some of the issues I 
raised. I have posted a summary of the follow-up items that are needed on the 
JIRA today. I am +1'ing expecting that the team will complete items 1 
(distcp/cp) and 2 (webhdfs) promptly. Before we publish transparent encryption 
in a 2.x release for public consumption, let us at least complete item 1 (i.e. 
distcp and cp) and the flag to turn this feature on/off.

This is great work; thanks to the team for contributing this important feature.

sanjay

On Aug 14, 2014, at 1:05 AM, sanjay Radia  wrote:

> While I was originally skeptical of transparent encryption, I like its value 
> proposition. HDFS has several layers, protocols, and tools. While the HDFS 
> core part seems to be well done in the JIRA, inserting the matching 
> transparency into the other tools and protocols still needs to be worked 
> through.
> 
> I have the following areas of concern:
> - Common protocols like webhdfs should continue to work (the design doc marks 
> this as a goal). This issue is being discussed in the JIRA, but it appears 
> that webhdfs does not currently work with encrypted files: Andrew says that 
> "Regarding webhdfs, it's not a recommended deployment" and that he will 
> modify the documentation to match. Alejandro says "Both httpfs and 
> webhdfs will work just fine" but then in the same paragraph says "this could 
> fail some security audits". We need to resolve this quickly; webhdfs is 
> heavily used by many Hadoop users.
> 
> - Common tools like cp, distcp, and HAR should continue to work with 
> non-encrypted and encrypted files in an automatic fashion. This issue has 
> been heavily discussed in the JIRA and at the meeting. The /.reserved/raw 
> mechanism appears to be a step in the right direction for distcp and cp; 
> however, this work has not reached its conclusion in my opinion. Charles and 
> I are going through the use cases, and I think we are close to a clean 
> solution for distcp and cp. HAR still needs a concrete proposal.
> 
> - KMS scalability in medium to large clusters. This can perhaps be addressed 
> by fetching the keys ahead of time when a job is submitted. Without this, the 
> KMS will need to be as highly available and scalable as the NN. I think this 
> is future implementation work, but we need to at least determine whether it 
> is possible, in case we need to modify some of the APIs right now to 
> support it.
> 
> There are some other minor things under discussion, and I still need to go 
> through the new APIs.
> 
> Unfortunately, at this stage I cannot give a +1 for this merge; I hope to 
> change that in the next day or two. I am working with the JIRA's team 
> (Alejandro, Charles, Andrew, ATM, ...) to resolve the above as quickly as 
> possible.
> 
> Sanjay (binding)
> 
> 
> 
> On Aug 8, 2014, at 11:45 AM, Andrew Wang  wrote:
> 
>> Hi all,
>> 
>> I'd like to call a vote to merge the fs-encryption branch to trunk.
>> Development of this feature has been ongoing since March on HDFS-6134 and
>> HADOOP-10150, totaling approximately 50 commits.
>> 
>> .
>> Thanks,
>> Andrew
> 




[jira] [Created] (HDFS-6856) Send an OOB ack asynchronously

2014-08-15 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6856:


 Summary: Send an OOB ack asynchronously
 Key: HDFS-6856
 URL: https://issues.apache.org/jira/browse/HDFS-6856
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Brandon Li
Assignee: Brandon Li


As [~kihwal] pointed out in HDFS-6569:
"One bad client may block this and prevent the message from being sent to the 
rest of "good" clients. Unless a new thread is created (during shutdown!) to 
send an OOB ack asynchronously, the blocking ack.readFields() call needs to be 
changed in order to delegate the message transmission to the responder thread. "

This JIRA is to track the effort of sending OOB ack asynchronously.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Thinking ahead to hadoop-2.6

2014-08-15 Thread Subramaniam Krishnan
Thanks for initiating the thread Arun.

Can we add YARN-1051  to
the list? We have most of the patches for the sub-JIRAs under review and
have committed a couple.

-Subru

-- Forwarded message --

From: Arun C Murthy 

Date: Tue, Aug 12, 2014 at 1:34 PM

Subject: Thinking ahead to hadoop-2.6

To: "common-...@hadoop.apache.org" , "
hdfs-dev@hadoop.apache.org" , "
mapreduce-...@hadoop.apache.org" ,

"yarn-...@hadoop.apache.org" 





Folks,



 With hadoop-2.5 nearly done, it's time to start thinking ahead to
hadoop-2.6.



 Currently, here is the Roadmap per the wiki:



• HADOOP

• Credential provider HADOOP-10607

• HDFS

• Heterogeneous storage (Phase 2) - Support APIs for using
storage tiers by the applications HDFS-5682

• Memory as storage tier HDFS-5851

• YARN

• Dynamic Resource Configuration YARN-291

• NodeManager Restart YARN-1336

• ResourceManager HA Phase 2 YARN-556

• Support for admin-specified labels in YARN YARN-796

• Support for automatic, shared cache for YARN application
artifacts YARN-1492

• Support NodeGroup layer topology on YARN YARN-18

• Support for Docker containers in YARN YARN-1964

• YARN service registry YARN-913



 My suspicion is, as is normal, some will make the cut and some won't.

Please do add/subtract from the list as appropriate. Ideally, it would be
good to ship hadoop-2.6 in 6-8 weeks (say, October) to keep up a cadence.



 More importantly, as we discussed previously, we'd like hadoop-2.6 to be
the *last* Apache Hadoop 2.x release that supports JDK6. I'll start a
discussion with other communities (HBase, Pig, Hive, Oozie etc.) and see
how they feel about this.



thanks,

Arun




