On 04/07/11 18:22, Ted Dunning wrote:
One reasonable suggestion that I have heard recently was to do what Google
does and put a DNS front end onto ZooKeeper. Machines would need to have
DNS set up properly, and requests for a special ZK-based domain would have
to be delegated to the fancy DNS server.
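To make the DNS idea concrete: a resolver answering for such a ZK domain would hand back SRV-style records, and the client's only remaining job is turning those answers into a ZooKeeper connect string. Below is a minimal, hypothetical sketch (not from the thread; the class name and the record format "priority weight port target" are assumptions based on standard DNS SRV rdata):

```java
import java.util.ArrayList;
import java.util.List;

public class SrvToConnectString {
    // Turn SRV answer strings like "0 5 2181 zk1.example.com." into the
    // comma-separated host:port list a ZooKeeper client expects.
    static String connectString(List<String> srvAnswers) {
        List<String> hostPorts = new ArrayList<>();
        for (String answer : srvAnswers) {
            // SRV rdata fields: priority weight port target
            String[] fields = answer.trim().split("\\s+");
            String port = fields[2];
            String target = fields[3];
            // Strip the trailing dot of a fully-qualified DNS name.
            if (target.endsWith(".")) {
                target = target.substring(0, target.length() - 1);
            }
            hostPorts.add(target + ":" + port);
        }
        return String.join(",", hostPorts);
    }

    public static void main(String[] args) {
        List<String> answers = List.of(
            "0 5 2181 zk1.example.com.",
            "0 5 2181 zk2.example.com.");
        System.out.println(connectString(answers));
        // prints zk1.example.com:2181,zk2.example.com:2181
    }
}
```

A real client would additionally honor the SRV priority/weight fields when ordering servers; that is omitted here for brevity.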
See https://builds.apache.org/job/Hadoop-Common-trunk/738/
###
## LAST 60 LINES OF THE CONSOLE
###
[...truncated 25123 lines...]
jar:
[tar] Nothing to do:
/grid/0
Improvement for the FsShell -copyFromLocal/-put
-----------------------------------------------

                Key: HADOOP-7441
                URL: https://issues.apache.org/jira/browse/HADOOP-7441
            Project: Hadoop Common
         Issue Type: Improvement
   Affects Versions: 0.
[ https://issues.apache.org/jira/browse/HADOOP-7417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-7417.
---------------------------------
    Resolution: Not A Problem

> Hadoop Management System (Umbrella)
> -----------------------------------
Hello guys,
We would like to know where compression of the map output takes place in the
map phase. We see two possible places in sortAndSpill() in MapTask.java:
Writer.append() or Writer.close().
Which one performs the compression?
Thanks very much for your response~
Docs in core-default.xml still reference deprecated config
"topology.script.file.name"
----------------------------------------------------------

                Key: HADOOP-7442
                URL: https://issues.apache.org/jira/browse/HADOOP-7442
            Project: Hadoop Common
In another project, I implemented a Bonjour beacon (JmDNS) which
sits on the ZooKeeper nodes to advertise the location of the ZooKeeper
servers. When clients start up, they discover the location of
ZooKeeper through multicast DNS. Once the server locations
(ip:port and TXT records) are resolved, the clients connect.
See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/673/
Hello Rui Hou,
If you look at the Writer constructor used here, you'll find your answer very
easily. It takes a codec (a compression codec, to be specific) as an argument.
The codec, if not null (it is null when compression is disabled), is responsible
for compressing the stream of data by wrapping the raw output stream.
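The key point is that compression happens transparently via stream wrapping: once the codec has wrapped the raw output stream, every write goes through the compressor, so neither append() nor close() needs compression logic of its own (close() merely flushes the compressor). A minimal stand-in using java.util.zip rather than Hadoop's CompressionCodec API (class and method names here are illustrative, not Hadoop's):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class StreamWrappingDemo {
    static byte[] compress(byte[] data) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Wrapping the raw stream is what makes every subsequent write
        // compressed; in Hadoop the codec's createOutputStream(...) plays
        // this role for the spill file writer.
        try (DeflaterOutputStream out = new DeflaterOutputStream(sink)) {
            out.write(data);
        } // closing flushes the compressor's final block
        return sink.toByteArray();
    }

    static byte[] decompress(byte[] data) throws Exception {
        try (InflaterInputStream in =
                 new InflaterInputStream(new ByteArrayInputStream(data))) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "key\tvalue\n".repeat(1000).getBytes();
        byte[] packed = compress(original);
        byte[] unpacked = decompress(packed);
        System.out.println(packed.length < original.length);              // true
        System.out.println(Arrays.equals(original, unpacked));            // true
    }
}
```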
See https://builds.apache.org/job/Hadoop-Common-trunk-Commit/674/
Add CRC32C as another DataChecksum implementation
-------------------------------------------------

                Key: HADOOP-7443
                URL: https://issues.apache.org/jira/browse/HADOOP-7443
            Project: Hadoop Common
         Issue Type: New Feature
         Components:
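CRC32C (the Castagnoli polynomial) is attractive because it is hardware-accelerated on SSE4.2-capable CPUs. As a standalone illustration of the checksum itself (modern JDKs ship java.util.zip.CRC32C; the original JIRA work of course predates that class and added its own implementation to DataChecksum):

```java
import java.util.zip.CRC32C;

public class Crc32cDemo {
    static long crc32c(byte[] data) {
        CRC32C crc = new CRC32C();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        // "123456789" is the conventional check input for CRC algorithms;
        // CRC-32C (Castagnoli) yields the well-known check value 0xE3069283.
        long v = crc32c("123456789".getBytes());
        System.out.println(Long.toHexString(v)); // prints e3069283
    }
}
```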
Add Checksum API to verify and calculate checksums "in bulk"
------------------------------------------------------------

                Key: HADOOP-7444
                URL: https://issues.apache.org/jira/browse/HADOOP-7444
            Project: Hadoop Common
         Issue Type: Improvement
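"In bulk" here means checksumming a whole buffer as a sequence of fixed-size chunks in one call, rather than one JNI/method call per chunk. A self-contained sketch of that idea (class and method names are illustrative, not Hadoop's DataChecksum API; 512 is Hadoop's traditional bytes-per-checksum default):

```java
import java.util.Arrays;
import java.util.zip.CRC32C;

public class BulkChecksum {
    static final int BYTES_PER_CHECKSUM = 512;

    // Compute one CRC32C per chunk of the buffer, in a single pass.
    static long[] chunkedSums(byte[] data) {
        int n = (data.length + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        long[] sums = new long[n];
        for (int i = 0; i < n; i++) {
            int off = i * BYTES_PER_CHECKSUM;
            int len = Math.min(BYTES_PER_CHECKSUM, data.length - off);
            CRC32C crc = new CRC32C();
            crc.update(data, off, len);
            sums[i] = crc.getValue();
        }
        return sums;
    }

    // Verify the whole buffer against its stored per-chunk checksums at once.
    static boolean verifyChunkedSums(byte[] data, long[] stored) {
        return Arrays.equals(chunkedSums(data), stored);
    }

    public static void main(String[] args) {
        byte[] data = new byte[2000];
        Arrays.fill(data, (byte) 'x');
        long[] sums = chunkedSums(data);
        System.out.println(verifyChunkedSums(data, sums)); // true
        data[700] ^= 1; // corrupt one byte in the second chunk
        System.out.println(verifyChunkedSums(data, sums)); // false
    }
}
```

Batching like this is what makes the later native (HADOOP-7445) implementation pay off: the per-call overhead is amortized over the whole buffer.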
Implement bulk checksum verification using efficient native code
----------------------------------------------------------------

                Key: HADOOP-7445
                URL: https://issues.apache.org/jira/browse/HADOOP-7445
            Project: Hadoop Common
         Issue Type: Improvement
Implement CRC32C native code using SSE4.2 instructions
------------------------------------------------------

                Key: HADOOP-7446
                URL: https://issues.apache.org/jira/browse/HADOOP-7446
            Project: Hadoop Common
         Issue Type: Improvement
         Components:
Add a warning message for FsShell -getmerge when the src path is not a directory
--------------------------------------------------------------------------------

                Key: HADOOP-7447
                URL: https://issues.apache.org/jira/browse/HADOOP-7447
            Project: Hadoop Common
Hi All,
I have been trying to set up an Eclipse project with the trunk code, but I
keep getting the following error:

[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::
[ivy:resolve] ::          UNRESOLVED DEPENDENCIES          ::
[ivy:resolve] ::::::::::::::::::::::::::::::::::::::::::::::