Mohit,
Thanks for tracking this down -- it sounds like a bug. Please file a bug
report at https://issues.apache.org/jira/browse/HADOOP
- Aaron
On Thu, Feb 3, 2011 at 8:53 PM, Mohit wrote:
> Hello Authors,
>
> I suspect there is a problem in there,
>
> I configured a property ipc.server
I downloaded the "combined" tarball of 0.21.0-rc0 and set it up as a
pseudo-distributed Hadoop cluster.
Everything seems to work; basic smoke tests pass. I did not run the internal
unit tests. I tested Sqoop 1.0.0 against this release. All Sqoop unit tests
pass.
Sqoop can operate on the command-line as
[ https://issues.apache.org/jira/browse/HADOOP-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aaron Kimball resolved HADOOP-6708.
---
Resolution: Won't Fix
After thinking more about this, I don't think this issue i
Reporter: Aaron Kimball
Attachments: CompressionBug.java
DefaultCodec.createOutputStream() creates a new Compressor instance in each
OutputStream. Even if the OutputStream is closed, this leaks memory.
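As a workaround, callers can borrow a Compressor from CodecPool and pass it
to createOutputStream() explicitly. A minimal sketch, assuming the standard
CodecPool API; the codec choice and helper name are illustrative only:

    import java.io.IOException;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class PooledCompressionExample {
      public static void write(OutputStream out, byte[] data) throws IOException {
        Configuration conf = new Configuration();
        DefaultCodec codec = ReflectionUtils.newInstance(DefaultCodec.class, conf);
        // Borrow a pooled Compressor rather than letting createOutputStream()
        // allocate a fresh one per stream, which is where the leak occurs.
        Compressor compressor = CodecPool.getCompressor(codec);
        try {
          CompressionOutputStream cos = codec.createOutputStream(out, compressor);
          cos.write(data);
          cos.finish();
        } finally {
          // Return the Compressor so its native buffers are reused, not leaked.
          CodecPool.returnCompressor(compressor);
        }
      }
    }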
Reporter: Aaron Kimball
Assignee: Aaron Kimball
A file format that handles multi-gigabyte records efficiently, with lazy disk
access
Reporter: Aaron Kimball
Snapshots of Hadoop trunk downloaded from Ivy have VersionInfo.getVersion()
returning "Unknown"
There are an enormous number of examples of the following line in user-side
code:
Configuration conf = new Configuration();
... This is going to still need to work transparently after any refactoring.
The new Configuration in this case needs to be populated with values from
the appropriate defaults
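For reference, a minimal sketch of that usage pattern; the property name is
illustrative only:

    import org.apache.hadoop.conf.Configuration;

    public class ConfDefaultsExample {
      public static void main(String[] args) {
        // The no-argument constructor implicitly loads the default resources
        // (core-default.xml, core-site.xml) from the classpath; any
        // refactoring must preserve this behavior.
        Configuration conf = new Configuration();
        System.out.println(conf.get("fs.default.name"));
      }
    }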
An Interactive Hadoop FS shell
--
Key: HADOOP-6541
URL: https://issues.apache.org/jira/browse/HADOOP-6541
Project: Hadoop Common
Issue Type: New Feature
Reporter: Aaron Kimball
Assignee
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Priority: Blocker
The *-site.xml files in src/contrib/test are not valid XML: the XML
declaration must appear above the license header.
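For illustration, a valid ordering puts the declaration first, before the
license comment:

    <?xml version="1.0"?>
    <!--
      Licensed to the Apache Software Foundation (ASF) ...
    -->
    <configuration>
    </configuration>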
See http://wiki.apache.org/hadoop/HowToContribute for more step-by-step
instructions.
- Aaron
On Fri, Jan 22, 2010 at 7:36 PM, Kay Kay wrote:
> Start with hadoop-common to start building.
>
> hadoop-hdfs / hadoop-mapred pull the dependencies from apache snapshot
> repository that contains the n
Project: Hadoop Common
Issue Type: New Feature
Components: fs
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Reading data from avro files requires using Avro's SeekableInput interface; we
need to be able to wrap FSDataInputStream in this interface.
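A hedged sketch of such a wrapper, assuming Avro's SeekableInput interface;
the class name is illustrative, and the length comes from a FileStatus the
caller already has:

    import java.io.IOException;
    import org.apache.avro.file.SeekableInput;
    import org.apache.hadoop.fs.FSDataInputStream;

    // Illustrative adapter from FSDataInputStream to Avro's SeekableInput.
    public class SeekableFSInput implements SeekableInput {
      private final FSDataInputStream in;
      private final long len; // e.g., FileSystem.getFileStatus(path).getLen()

      public SeekableFSInput(FSDataInputStream in, long len) {
        this.in = in;
        this.len = len;
      }

      public void seek(long p) throws IOException { in.seek(p); }
      public long tell() throws IOException { return in.getPos(); }
      public long length() throws IOException { return len; }
      public int read(byte[] b, int off, int n) throws IOException {
        return in.read(b, off, n);
      }
      public void close() throws IOException { in.close(); }
    }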
Make avro serialization APIs public
---
Key: HADOOP-6492
URL: https://issues.apache.org/jira/browse/HADOOP-6492
Project: Hadoop Common
Issue Type: Improvement
Reporter: Aaron Kimball
Reporter: Aaron Kimball
Assignee: Aaron Kimball
The {{SerializationBase.accept()}} methods of several serialization
implementations use incorrect metadata when determining whether they are the
correct serializer for the user's metadata.
[ https://issues.apache.org/jira/browse/HADOOP-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aaron Kimball resolved HADOOP-6438.
---
Resolution: Invalid
After discussion here and on MAPREDUCE-1126, the conclusion is that
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Needed for MAPREDUCE-1126, getter and setter methods to inject specific
metadata into configurations to (de)serialize various data types.
Components: conf
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Per MAPREDUCE-1126, we need to be able to take a map of (key, value) pairs and
embed that inside a Configuration object.
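A minimal sketch of one possible shape for this; the helper class and key
prefix are hypothetical, not a committed API:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    public class ConfMapUtil {
      // Hypothetical helper: store each (key, value) pair under a common
      // prefix so the map can be recovered from the Configuration later.
      public static void setMap(Configuration conf, String prefix,
          Map<String, String> map) {
        for (Map.Entry<String, String> e : map.entrySet()) {
          conf.set(prefix + "." + e.getKey(), e.getValue());
        }
      }

      public static Map<String, String> getMap(Configuration conf,
          String prefix) {
        Map<String, String> out = new HashMap<String, String>();
        // Configuration is Iterable over its (key, value) entries.
        for (Map.Entry<String, String> e : conf) {
          if (e.getKey().startsWith(prefix + ".")) {
            out.put(e.getKey().substring(prefix.length() + 1), e.getValue());
          }
        }
        return out;
      }
    }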
When running Hadoop with DEBUG logging on, this IOException was actually
responsible for well over 90% of the lines of text in my logs, making them
unreadable.
We actually removed this on trunk:
https://issues.apache.org/jira/browse/HADOOP-6312
- Aaron
On Thu, Nov 19, 2009 at 5:29 AM, Steve Lou
Thanks for getting that list of examples together. That's a pretty good mix!
I went through these too without looking at Todd's comments first, to avoid
prejudice. Here are my results:
1) ugly dangling ')'
6-7) would prefer 4 spaces before 'throws'
11-12) ok.
16-17) ok. I don't think we should manda
Issue Type: Bug
Components: build
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Priority: Critical
Attachments: HADOOP-6370.patch
Only Hadoop's own library dependencies are promoted to ${build.dir}/lib; any
libraries required by contribs ar
On Thu, Nov 5, 2009 at 2:34 AM, Andrei Dragomir wrote:
> Hello everyone.
> We ran into a bunch of issues with building and deploying hadoop 0.21.
> It would be great to get some answers about how things should work, so
> we can try to fix them.
>
> 1. When checking out the repositories, each of t
Issue Type: Improvement
Components: fs
Reporter: Aaron Kimball
Some operations (e.g., rename and delete) can take a very long time on some
filesystem implementations (e.g., S3). The API should provide the ability to
include progress callbacks during these operations.
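One hedged sketch of what such a callback-aware call could look like,
reusing Hadoop's existing Progressable interface; the interface below is
hypothetical, not an existing FileSystem method:

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.util.Progressable;

    // Hypothetical API extension for long-running filesystem operations.
    public interface ProgressableOperations {
      // An S3-backed implementation, where rename is copy-then-delete per
      // key, would call progress.progress() periodically so long-running
      // renames report liveness instead of appearing hung.
      boolean rename(Path src, Path dst, Progressable progress)
          throws IOException;

      boolean delete(Path path, boolean recursive, Progressable progress)
          throws IOException;
    }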
Issue Type: Bug
Components: io
Reporter: Aaron Kimball
Assignee: Aaron Kimball
It is possible to pollute CodecPool in such a way that Hadoop cannot read
gzip-compressed data.
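For context, a hedged sketch of how the pollution can arise, assuming the
pool is keyed by decompressor class: if two codecs advertise the same
Decompressor class but need differently configured instances (raw zlib vs.
gzip-wrapped), one can hand back an instance the other cannot use.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.Decompressor;
    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class PoolPollutionSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        DefaultCodec zlib = ReflectionUtils.newInstance(DefaultCodec.class, conf);
        GzipCodec gzip = ReflectionUtils.newInstance(GzipCodec.class, conf);

        // Borrow and return a decompressor configured for raw zlib data.
        Decompressor d = CodecPool.getDecompressor(zlib);
        CodecPool.returnDecompressor(d);

        // If both codecs report the same decompressor class, this can hand
        // back the raw-zlib instance, which then fails on gzip stream headers.
        Decompressor d2 = CodecPool.getDecompressor(gzip);
        System.out.println(d2.getClass().getName());
      }
    }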
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Attachments: HADOOP-6312.patch
Configuration objects send a DEBUG-level log message every time they're
instantiated, which includes a full stack trace. This is more appropriate for
TRACE-level logging, as it renders
Look into "typed bytes":
http://dumbotics.com/2009/02/24/hadoop-1722-and-typed-bytes/
On Thu, Aug 20, 2009 at 10:29 AM, Jaliya Ekanayake wrote:
> Hi Stefan,
>
> I am sorry for the late reply. Somehow the response email escaped my
> notice.
>
> Could you explain a bit on how to use Hadoop s
Reporter: Aaron Kimball
Assignee: Aaron Kimball
Priority: Blocker
Attachments: HADOOP-6152.patch
The various Hadoop scripts (bin/hadoop, bin/hdfs, bin/mapred) do not properly
identify the jars needed to run Hadoop. They try to include hadoop-*-hdfs.jar,
etc., rather
[ https://issues.apache.org/jira/browse/HADOOP-5482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Aaron Kimball reopened HADOOP-5482:
---
This issue was marked "Resolved" but it was never applied to the 0.20 branch
or trunk.
Chris,
No operations in git ever require connectivity to an upstream remote, except
for the obvious ones of "pull more down from remote" and "push local refs up
to remote." All history and associated metadata is fully replicated to each
clone.
- Aaron
On Mon, Jun 29, 2009 at 12:01 PM, Chris Doug