Re: Mumak is still active?

2010-10-28 Thread Hong Tang
Nan, We (at Yahoo) are still doing some work in and on top of Mumak, but not in significant ways, because most of our production uses an internal version of Hadoop 0.20, which diverges from Hadoop trunk (where Mumak resides). So the project is certainly "live" but not very "active".

[jira] Created: (HADOOP-6863) Compile-native still fails on Mac OS X.

2010-07-15 Thread Hong Tang (JIRA)
Compile-native still fails on Mac OS X. --- Key: HADOOP-6863 URL: https://issues.apache.org/jira/browse/HADOOP-6863 Project: Hadoop Common Issue Type: Bug Reporter: Hong Tang I am still …

[jira] Resolved: (HADOOP-6662) hadoop zlib compression does not fully utilize the buffer

2010-03-29 Thread Hong Tang (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-6662 ] Hong Tang resolved HADOOP-6662. --- Resolution: Duplicate > hadoop zlib compression does not fully utilize the buffer

[jira] Created: (HADOOP-6350) We should have some mechanism to enforce metrics as part of the public API.

2009-10-30 Thread Hong Tang (JIRA)
Issue Type: Improvement Reporter: Hong Tang Metrics should be part of the public API and should be clearly documented, similar to HADOOP-5073, so that we can reliably build tools on top of them.

RESULT: [VOTE] port HADOOP-6218 (Split TFile by Record Sequence Number) to hadoop 0.20/0.21

2009-10-16 Thread Hong Tang
With 3 +1's and 0 -1's, the vote passed. -Hong On Oct 12, 2009, at 3:55 PM, Hong Tang wrote: HADOOP-6218 exposed the internal "Location" object as a global Record Sequence Number (RecNum). The feature is useful in a number of ways: (1) support progress reporting for …

[VOTE] port HADOOP-6218 (Split TFile by Record Sequence Number) to hadoop 0.20/0.21

2009-10-12 Thread Hong Tang
HADOOP-6218 exposed the internal "Location" object as a global Record Sequence Number (RecNum). The feature is useful in a number of ways: (1) support progress reporting for upper layers (object file, zebra); (2) use RecNum as a cursor by a secondary index; (3) support aligned splits across multiple …
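
For illustration, a minimal sketch of a record-number-based scan; the createScannerByRecordNum() factory and getEntryCount() usage here are assumptions based on this issue's description, not a confirmed API:

{code}
// Hedged sketch: scan the second half of a TFile by record sequence number,
// e.g. to resume from a cursor handed out by a secondary index or to process
// an aligned split [total/2, total).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.file.tfile.TFile;

public class RecNumScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args[0]);
    FileSystem fs = path.getFileSystem(conf);
    long length = fs.getFileStatus(path).getLen();
    FSDataInputStream in = fs.open(path);
    TFile.Reader reader = new TFile.Reader(in, length, conf);
    try {
      long total = reader.getEntryCount();        // total number of records
      // Assumed factory from HADOOP-6218: scan records [total/2, total).
      TFile.Reader.Scanner scanner =
          reader.createScannerByRecordNum(total / 2, total);
      try {
        while (!scanner.atEnd()) {
          TFile.Reader.Scanner.Entry entry = scanner.entry();
          // consume entry.getKeyLength() / entry.getValueLength() here
          scanner.advance();
        }
      } finally {
        scanner.close();
      }
    } finally {
      reader.close();
      in.close();
    }
  }
}
{code}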

[jira] Created: (HADOOP-6218) Split TFile by Record Sequence Number

2009-08-27 Thread Hong Tang (JIRA)
Split TFile by Record Sequence Number - Key: HADOOP-6218 URL: https://issues.apache.org/jira/browse/HADOOP-6218 Project: Hadoop Common Issue Type: New Feature Reporter: Hong Tang It would be …

[jira] Created: (HADOOP-6177) FSInputChecker.getPos() would return position greater than the file size

2009-08-06 Thread Hong Tang (JIRA)
Issue Type: Bug Reporter: Hong Tang When using a small buffer (< 512 bytes) to read through the whole file, the final file position reported by getPos() is one greater than the file size.
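
A minimal sketch that would exercise this behavior, assuming a checksummed local file at /tmp/sample.txt (the path and buffer size are illustrative, not taken from the report):

{code}
// Read a local (checksummed) file with a sub-512-byte buffer and compare the
// final getPos() with the actual file length.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetPosCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);   // ChecksumFileSystem -> FSInputChecker
    Path file = new Path("/tmp/sample.txt");     // illustrative path
    long size = fs.getFileStatus(file).getLen();

    byte[] buf = new byte[256];                  // buffer smaller than 512 bytes
    FSDataInputStream in = fs.open(file, buf.length);
    try {
      while (in.read(buf, 0, buf.length) != -1) {
        // drain the stream
      }
      // Per the report, getPos() here can exceed the actual file size.
      System.out.println("getPos=" + in.getPos() + " fileSize=" + size);
    } finally {
      in.close();
    }
  }
}
{code}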

[jira] Created: (HADOOP-6173) src/native/packageNativeHadoop.sh only packages files with "hadoop" in the name

2009-07-31 Thread Hong Tang (JIRA)
Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 0.21.0 Reporter: Hong Tang Priority: Minor src/native/packageNativeHadoop.sh only packages files with "hadoop" in the name. This becomes too restrictive when a user wants to …

[jira] Created: (HADOOP-6172) bin/hadoop version not working

2009-07-31 Thread Hong Tang (JIRA)
Reporter: Hong Tang Priority: Minor Two problems were found: - ${build.src} is not included in the ant target "compile-core-classes", so o.a.h.package-info.java, which contains the version annotation, is not compiled. - bin/hadoop-config.sh attempts to include jar files matching …

[jira] Resolved: (HADOOP-4162) CodecPool.getDecompressor(LzopCodec) always creates a brand-new decompressor.

2009-07-30 Thread Hong Tang (JIRA)
[ https://issues.apache.org/jira/browse/HADOOP-4162 ] Hong Tang resolved HADOOP-4162. --- Resolution: Invalid > CodecPool.getDecompressor(LzopCodec) always creates a brand-new decompressor.
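
For context, a hedged sketch of the pooled-decompressor pattern the issue refers to; GzipCodec stands in for the LZO codec here and is not part of the original report:

{code}
// Borrow a decompressor from CodecPool and return it so later callers can
// reuse the pooled instance instead of constructing a new one each time.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Decompressor;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class CodecPoolSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

    Decompressor decompressor = CodecPool.getDecompressor(codec);
    try {
      // ... use codec.createInputStream(rawStream, decompressor) here ...
    } finally {
      CodecPool.returnDecompressor(decompressor);
    }
  }
}
{code}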

[jira] Created: (HADOOP-6169) Removing deprecated method calls in TFile

2009-07-24 Thread Hong Tang (JIRA)
Removing deprecated method calls in TFile - Key: HADOOP-6169 URL: https://issues.apache.org/jira/browse/HADOOP-6169 Project: Hadoop Common Issue Type: Bug Reporter: Hong Tang

[jira] Created: (HADOOP-6150) Need to be able to instantiate a comparator instance from a comparator string without creating a TFile.Reader object

2009-07-14 Thread Hong Tang (JIRA)
URL: https://issues.apache.org/jira/browse/HADOOP-6150 Project: Hadoop Common Issue Type: Improvement Components: io Affects Versions: 0.20.0, 0.21.0 Reporter: Hong Tang Assignee: Hong Tang Priority: Minor Occasionally, we want to have the …
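
A hedged sketch of what such a standalone factory could look like; the static method name TFile.makeComparator and the "memcmp" comparator string are assumptions based on this issue's description, not a confirmed API:

{code}
// Build a TFile comparator from its name string without opening a TFile.Reader.
import java.util.Comparator;
import org.apache.hadoop.io.file.tfile.RawComparable;
import org.apache.hadoop.io.file.tfile.TFile;

public class ComparatorFromString {
  public static void main(String[] args) {
    // "memcmp" = byte-wise comparison; a "jclass:<RawComparator class>" form
    // is assumed to be accepted as well.
    Comparator<RawComparable> cmp = TFile.makeComparator("memcmp");
    System.out.println("Got comparator: " + cmp);
  }
}
{code}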

[jira] Created: (HADOOP-6141) hadoop 0.20 branch "test-patch" is broken

2009-07-10 Thread Hong Tang (JIRA)
Affects Versions: 0.20.0 Reporter: Hong Tang Assignee: Hong Tang Two problems were found in src/test/bin/test-patch.sh while I was backporting the TFile patch (HADOOP-3315): - java5.home and forrest.home are not defined for the ant command in the pre-build stage, which …

Re: [VOTE] Back-port TFile to Hadoop 0.20

2009-07-10 Thread Hong Tang
Hi all, With 12 +1's and no -1, the vote passed. I will upload a patch for Hadoop 0.20 shortly. -Hong

Re: [VOTE] Back-port TFile to Hadoop 0.20

2009-07-10 Thread Hong Tang
Documentation and test cases should also be backported accordingly. On Jul 9, 2009, at 8:35 PM, Ian Holsman wrote: Hong Tang wrote: I have talked with a few folks in the community who are interested in using TFile (HADOOP-3315) in their projects that are currently dependent on Hadoop 0.20 …

[jira] Created: (HADOOP-6131) A sysproperty should not be set unless the property is set on the ant command line in build.xml.

2009-07-08 Thread Hong Tang (JIRA)
HADOOP-6131 Project: Hadoop Common Issue Type: Bug Components: build Affects Versions: 0.21.0 Reporter: Hong Tang Priority: Trivial The patch for HADOOP-3315 contains an improper usage of setting a sysproperty. What it does now: {code} … {code}

[VOTE] Back-port TFile to Hadoop 0.20

2009-07-07 Thread Hong Tang
I have talked with a few folks in the community who are interested in using TFile (HADOOP-3315) in their projects that are currently dependent on Hadoop 0.20, and it would significantly simplify the release process as well as their lives if we could back port TFile to Hadoop 0.20 (instead of …