Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-22 Thread Brian Demers
Anyone else getting timeout errors with the MIT keypool? The Ubuntu keypool seems OK. On Tue, Jan 22, 2019 at 1:28 AM Wangda Tan wrote: > It seems there's no useful information from the log :(. Maybe I should > change my key and try again. In the meantime, Sunil will help me to create > release a

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Wangda Tan
It seems there's no useful information from the log :(. Maybe I should change my key and try again. In the meantime, Sunil will help me to create release and get 3.1.2 out. Thanks everybody for helping with this, really appreciate it! Best, Wangda On Mon, Jan 21, 2019 at 9:55 PM Chris Lambertus

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Chris Lambertus
2019-01-22 05:40:41 INFO [99598137-805273] - com.sonatype.nexus.staging.internal.DefaultStagingManager - Dropping staging repositories [orgapachehadoop-1201] 2019-01-22 05:40:42 INFO [ool-1-thread-14] - com.sonatype.nexus.staging.internal.task.StagingBackgroundTask - STARTED Dropping staging

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Wangda Tan
Hi Chris, Thanks for helping with the issue. The issue still exists, but the process has become much faster: failureMessage Failed to validate the pgp signature of '/org/apache/hadoop/hadoop-project/3.1.2/hadoop-project-3.1.2.pom', check the logs. failureMessage Failed to validate the pgp signature o

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Chris Lambertus
It looks like there are timeouts from some of the keyservers. I’ve trimmed the list again to only servers known to be working (ubuntu and sks-keyservers.net). Can you give it a try again? Brian, there are also a number of timeout errors related to central, but I thi

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Brian Fox
The KEYS file is irrelevant to Nexus. The only thing that matters is that it’s in the MIT pgp key ring. --Brian (mobile) > On Jan 21, 2019, at 3:34 PM, Wangda Tan wrote: > > I just checked on KEYS file, it doesn't show sig part. I updated KEYS file on > Apache https://dist.apache.org/repos/dist/

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Wangda Tan
I just checked the KEYS file, it doesn't show the sig part. I updated the KEYS file on Apache https://dist.apache.org/repos/dist/release/hadoop/common/KEYS and made it ultimately trusted. pub rsa4096 2018-03-20 [SC] 4C899853CDDA4E40C60212B5B3FA653D57300D45 uid [ultimate] Wangda tan si

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Wangda Tan
Hi David, Thanks for helping check this, I can see signatures on my key: pub 4096R/57300D45 2018-03-20 Fingerprint=4C89 9853 CDDA 4E40 C602 12B5 B3FA 653D 5730 0D45 uid Wangda tan sig sig3 57300D45
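(Editor's note: the key id Nexus reports elsewhere in this thread, b3fa653d57300d45, is simply the low 64 bits of the fingerprint above, i.e. its last 16 hex digits. A quick plain-Java sanity check, with a hypothetical helper name:)

```java
public class KeyIdCheck {
    // For OpenPGP v4 keys, the 64-bit key id is the last 16 hex digits
    // of the 160-bit fingerprint.
    static String keyIdFromFingerprint(String fingerprint) {
        String hex = fingerprint.replace(" ", "");
        return hex.substring(hex.length() - 16).toLowerCase();
    }

    public static void main(String[] args) {
        String fpr = "4C89 9853 CDDA 4E40 C602 12B5 B3FA 653D 5730 0D45";
        System.out.println(keyIdFromFingerprint(fpr)); // b3fa653d57300d45
    }
}
```

So the "No public key: Key with id: (b3fa653d57300d45)" failures do refer to this same key.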

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread David Nalley
I wonder if it's because there are no signatures on your key. --David On Mon, Jan 21, 2019 at 11:57 AM Wangda Tan wrote: > > Hi Brian, > > Here're links to my key: > > http://pool.sks-keyservers.net:11371/key/0xB3FA653D57300D45 > > http://pgp.mit.edu/pks/lookup?op=get&search=0xB3FA653D57300D45 >

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Wangda Tan
Hi Brian, Here're links to my key: http://pool.sks-keyservers.net:11371/key/0xB3FA653D57300D45 http://pgp.mit.edu/pks/lookup?op=get&search=0xB3FA653D57300D45 On Apache SVN: https://dist.apache.org/repos/dist/release/hadoop/common/KEYS Thanks, Wangda On Mon, Jan 21, 2019 at 6:51 AM Brian Demer

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-21 Thread Brian Demers
Can you share the link to your key? -Brian > On Jan 20, 2019, at 11:21 PM, Wangda Tan wrote: > > Still couldn't figure out without locating the log on the Nexus machine. With > help from several committers and PMCs, we didn't see anything wrong with my > signing key. > > I don't want to dela

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-20 Thread Wangda Tan
Still couldn't figure out without locating the log on the Nexus machine. With help from several committers and PMCs, we didn't see anything wrong with my signing key. I don't want to delay 3.1.2 more because of this. Is it allowed for me to publish artifacts (like tarball, source package, etc.) on

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-17 Thread Wangda Tan
Spent several more hours trying to figure out the issue, still no luck. I just filed https://issues.sonatype.org/browse/OSSRH-45646; I'd really appreciate it if anybody could add some suggestions. Thanks, Wangda On Tue, Jan 15, 2019 at 9:48 AM Wangda Tan wrote: > It seems the problem still exists for

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-15 Thread Wangda Tan
It seems the problem still exists for me: Now the error message only contains: failureMessage Failed to validate the pgp signature of '/org/apache/hadoop/hadoop-client-check-invariants/3.1.2/hadoop-client-check-invariants-3.1.2.pom', check the logs. failureMessage Failed to validate the pgp sig

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-15 Thread Brian Fox
Good to know. The pool has occasionally had sync issues, but we're talking 3 times in the last 8-9 years. On Tue, Jan 15, 2019 at 10:39 AM Elek, Marton wrote: > My key was pushed to the server with pgp about 1 year ago, and it worked > well with the last Ratis release. So it should be synced bet

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-15 Thread Elek, Marton
My key was pushed to the server with pgp about 1 year ago, and it worked well with the last Ratis release. So it should be synced between the key servers. But it seems that INFRA solved the problem by shuffling the key server order (or it was an intermittent issue): see INFRA-17649 Seems to

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-14 Thread Wangda Tan
Hi Brian, Thanks for responding, could you share how to push keys to the Apache pgp pool? Best, Wangda On Mon, Jan 14, 2019 at 10:44 AM Brian Fox wrote: > Did you push your key up to the pgp pool? That's what Nexus is validating > against. It might take time to propagate if you just pushed it. > >

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-14 Thread Brian Fox
Did you push your key up to the pgp pool? That's what Nexus is validating against. It might take time to propagate if you just pushed it. On Mon, Jan 14, 2019 at 9:59 AM Elek, Marton wrote: > Seems to be an INFRA issue for me: > > 1. I downloaded a sample jar file [1] + the signature from the >

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-14 Thread Elek, Marton
Seems to be an INFRA issue for me: 1. I downloaded a sample jar file [1] + the signature from the repository and it was ok, locally I verified it. 2. I tested it with another Apache project (Ratis) and my key. I got the same problem even though it worked last year during the 0.3.0 release. (I use

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Uploaded sample file and signature. On Sat, Jan 12, 2019 at 9:18 PM Wangda Tan wrote: > Actually, among the hundreds of failed messages, the "No public key" > issues still occurred several times: > > failureMessage No public key: Key with id: (b3fa653d57300d45) was not > able to be located on

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Actually, among the hundreds of failed messages, the "No public key" issues still occurred several times: failureMessage No public key: Key with id: (b3fa653d57300d45) was not able to be located on http://gpg-keyserver.de/. Upload your public key and try the operation again. failureMessage No pu

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Thanks David for the quick response! I just retried, now the "No public key" issue is gone. However, the issue: failureMessage Failed to validate the pgp signature of '/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.1.2/hadoop-mapreduce-client-jobclient-3.1.2-tests.jar', check the logs.

Re: [Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread David Nalley
On Sat, Jan 12, 2019 at 9:09 PM Wangda Tan wrote: > > Hi Devs, > > I'm currently rolling Hadoop 3.1.2 release candidate, however, I saw an issue > when I try to close repo in Nexus. > > Logs of https://repository.apache.org/#stagingRepositories > (orgapachehadoop-1183) shows hundreds of lines of

[Urgent] Question about Nexus repo and Hadoop release

2019-01-12 Thread Wangda Tan
Hi Devs, I'm currently rolling Hadoop 3.1.2 release candidate, however, I saw an issue when I try to close repo in Nexus. Logs of https://repository.apache.org/#stagingRepositories (orgapachehadoop-1183) shows hundreds of lines of the following error: failureMessage No public key: Key with id:

[jira] [Resolved] (HDFS-14173) how to ask a question about hdfs? I don't find the page

2018-12-25 Thread Takanobu Asanuma (JIRA)
[ https://issues.apache.org/jira/browse/HDFS-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma resolved HDFS-14173. - Resolution: Invalid > how to ask a question about hdfs? I don't find

[jira] [Created] (HDFS-14173) how to ask a question about hdfs? I don't find a page

2018-12-25 Thread lpz (JIRA)
lpz created HDFS-14173: -- Summary: how to ask a question about hdfs? I don't find a page Key: HDFS-14173 URL: https://issues.apache.org/jira/browse/HDFS-14173 Project: Hadoop HDFS Issue Type

Question about ReplicaMap's mutex.

2016-08-25 Thread Tiger Hu
Hi all, I am reading the HDFS source code and have one question about FsDatasetImpl. In FsDatasetImpl#FsDatasetImpl(), we create the global ReplicaMap object with the statement volumeMap = new ReplicaMap(this); “this” is passed as input and is assigned to volumeMap.mutex for
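(Editor's note: the pattern the question describes — handing an object to a collection so that the owner and the collection synchronize on the same monitor — can be sketched in plain Java. This is a simplified stand-in, not the actual FsDatasetImpl/ReplicaMap code:)

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for ReplicaMap: every access is guarded by a mutex
// object supplied by the owner. FsDatasetImpl passes "this", so the owner
// and the map share one lock.
class SimpleReplicaMap {
    private final Object mutex;
    private final Map<Long, String> map = new HashMap<>();

    SimpleReplicaMap(Object mutex) {
        if (mutex == null) {
            throw new IllegalArgumentException("mutex must not be null");
        }
        this.mutex = mutex;
    }

    void add(long blockId, String replica) {
        synchronized (mutex) {
            map.put(blockId, replica);
        }
    }

    String get(long blockId) {
        synchronized (mutex) {
            return map.get(blockId);
        }
    }
}

class Dataset {
    // "this" becomes the map's mutex, mirroring volumeMap = new ReplicaMap(this)
    final SimpleReplicaMap volumeMap = new SimpleReplicaMap(this);
}
```

Because the owner passes itself, a caller holding synchronized (dataset) also excludes the map's internal operations — that is the point of the design.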

Re: A question about reading one block

2012-10-31 Thread Harsh J
Vivi, There is no inbuilt function to do this, no. You'd have to write the utility yourself. On Thu, Nov 1, 2012 at 3:01 AM, Vivi Lang wrote: > Please ignore the previous email which I did not describe the problem > clearly. > > On Wed, Oct 31, 2012 at 3:05 PM, Vivi Lang wrote: > >> Hi, thks fo

Re: A question about reading one block

2012-10-31 Thread Vivi Lang
Please ignore the previous email which I did not describe the problem clearly. On Wed, Oct 31, 2012 at 3:05 PM, Vivi Lang wrote: > Hi, thks for the replying. > > Is there any function can help me to create a InputStream for a certain > Block which stored in a certain datanode > Thanks, > > On Sa

Re: A question about reading one block

2012-10-31 Thread Vivi Lang
Hi, thanks for replying. Is there any function that can help me create an InputStream for a certain block stored on a certain datanode? Thanks, On Sat, Oct 27, 2012 at 9:41 PM, Harsh J wrote: > Vivi, > > Assuming you know how to get the block info out of a file's metadata, > opening a file

Re: A question about reading one block

2012-10-27 Thread Harsh J
Vivi, Assuming you know how to get the block info out of a file's metadata, opening a file for read (with the FileSystem.open API, etc.) with an offset and length set to match that of a block will open and read only that block. On Sat, Oct 27, 2012 at 2:19 AM, Vivi Lang wrote: > Hi guys, > > I a
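(Editor's note: the seek-then-bounded-read pattern Harsh describes works because Hadoop's FSDataInputStream is seekable. The same idea, sketched here with a local file and the JDK's RandomAccessFile standing in for the Hadoop stream — file name and offsets are made up for illustration:)

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlockRangeRead {
    // Read exactly [offset, offset+length) from a file. With HDFS you would
    // do the same via FileSystem.open(path), FSDataInputStream.seek(offset),
    // and a read bounded by the block's length.
    static byte[] readRange(Path file, long offset, int length) throws IOException {
        byte[] buf = new byte[length];
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(offset);
            raf.readFully(buf);
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("block", ".dat");
        Files.write(tmp, "0123456789".getBytes());
        // Pretend bytes 3..7 are "one block"
        System.out.println(new String(readRange(tmp, 3, 4))); // 3456
        Files.delete(tmp);
    }
}
```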

Re: Question about

2012-09-13 Thread Harsh J
MR does not read the files in the front-end (unless a partitioner such as the TotalOrderPartitioner demands it). The actual block-level read is done via the DFSClient class (its inner classes DFSInputStream and DFSOutputStream - the first one should be where your interest lies.) All MR cares about is scheduling the d

RE: Question about

2012-09-12 Thread Charles Baker
Hi Vivian. Take a look at TextInputFormat and the RecordReader classes. This is set via JobConf.setInputFormat(). -Chuck -Original Message- From: Vivi Lang [mailto:sqlxwei...@gmail.com] Sent: Wednesday, September 12, 2012 5:10 PM To: hdfs-dev@hadoop.apache.org Subject: Question about
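(Editor's note: TextInputFormat's RecordReader turns a byte stream into (byte offset, line) records — the keys a map task sees are the starting offsets of each line. A plain-Java sketch of that record-splitting idea, not the actual Hadoop classes, assuming \n line endings:)

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;

public class LineRecords {
    // Mimics the line-record contract: key = byte offset where the line
    // starts, value = the line's text (newline excluded).
    static Map<Long, String> records(String input) throws IOException {
        Map<Long, String> out = new LinkedHashMap<>();
        long offset = 0;
        try (BufferedReader r = new BufferedReader(new StringReader(input))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.put(offset, line);
                offset += line.getBytes().length + 1; // +1 for '\n' (assumed)
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(records("hello\nworld\n")); // {0=hello, 6=world}
    }
}
```

In the old mapred API this format is selected with JobConf.setInputFormat(TextInputFormat.class), as Chuck notes above.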

Question about

2012-09-12 Thread Vivi Lang
Hi all, Is there anyone who can tell me, when we launch a mapreduce task, for example, wordcount, after the JobClient has obtained the block locations (the related hosts/datanodes are stored in the specified split), which function/class will be called for reading those blocks from the datanode? T

Re: A question about the namenode's decision on the data placement of a new file

2012-08-24 Thread Le Hieu Hanh
For 0.20 version, you should go to ReplicationTargetChooser which is called from FSNamesystem. Le Hieu Hanh On Sat, Aug 25, 2012 at 9:00 AM, Vivi Lang wrote: > Thanks but I found that BlockPlacementPolicy only appear after 0.21.0? Is > there any similar class or function appeared in 0.20. > >

Re: A question about the namenode's decision on the data placement of a new file

2012-08-24 Thread Vivi Lang
Thanks, but I found that BlockPlacementPolicy only appears in 0.21.0 and later. Is there any similar class or function in 0.20? On Fri, Aug 24, 2012 at 6:03 PM, Andy Isaacson wrote: > On Fri, Aug 24, 2012 at 3:33 PM, wei xu wrote: > > I am doing some research on the data placement, but I am no

Re: A question about the namenode's decision on the data placement of a new file

2012-08-24 Thread Andy Isaacson
On Fri, Aug 24, 2012 at 3:33 PM, wei xu wrote: > I am doing some research on the data placement, but I am not quite familiar > with the Hadoop, is there any one who can tell me when a new file be added > into the HDFS, which function will be called by namenode to make a decision > on the allocatio

A question about the namenode's decision on the data placement of a new file

2012-08-24 Thread wei xu
Hi, I am doing some research on data placement, but I am not quite familiar with Hadoop. Is there anyone who can tell me, when a new file is added into HDFS, which function will be called by the namenode to make a decision on the allocation (I mean, if there is a list of datanodes and th

Re: Question about pig & HDFS

2011-08-26 Thread Daniel Dai
Pig by default uses plain text files as input/output, unless you write a custom LoadFunc/StoreFunc. There is no specific Pig storage format. You can copy the file to local using copyToLocal. If you want to export directly to a SQL table, you need to write a StoreFunc. Pig works on tuples rather than K,V

Question about pig & HDFS

2011-08-23 Thread Keren Ouaknine
Hello, Pig generates data to HDFS and I would like to find a way to convert it to a general format by either: 1. flattening the data (would copyToLocal work here?!) 2. exporting the data to SQL tables (or any other non-Hadoop-specific format) 3. generating K,V pairs of data (since Pig code is converted

Re: Question about hadoop namenode -format -clusterid

2011-05-11 Thread Bharath Mundlapudi
on this: https://issues.apache.org/jira/browse/HDFS-1905 -Bharath From: Doug Balog To: hdfs-dev@hadoop.apache.org Sent: Wednesday, May 11, 2011 8:03 PM Subject: Question about hadoop namenode -format -clusterid I'm at the hackathon in SF just tryi

Question about hadoop namenode -format -clusterid

2011-05-11 Thread Doug Balog
I'm at the hackathon in SF just trying to setup a single node cluster from my trunk checkout. I'm at the point where I need to format a new namenode, and the old way of just running "hadoop namenode -format" is failing because I'm not specifying a clusterID. So I started poking around the code

Question about BackupNode

2011-01-20 Thread mac fang
Hi, guys, one question about the Backup Node. The code in the Backup Node shows us the NamespaceID of the BN must be the same as the NN's, but when I start the BackupNode, I have to do Namenode.format - which means the NamespaceID is a new one. Then I need to remove the ID and replace it with the one from the NN. Is this necessary

Re: I am a new comer and have a question about HDFS code check out

2010-09-13 Thread Min Long
therine) Long, IBM China Systems and Technology Lab. Jay Booth wrote on 09/14/2010: Hi Min, look at the unit tests which make use of Min

Re: I am a new comer and have a question about HDFS code check out

2010-09-13 Thread Jay Booth
> You need to use Git (http://git-scm.com). It is a source control tool like > CVS, but better suited to large distribu

Re: I am a new comer and have a question about HDFS code check out

2010-09-13 Thread Min Long
simple questions. Hope to get a reply or any advice for a newcomer to HDFS. Best Regards, Min (Catherine) Long, IBM China Systems and Technology Lab

Re: I am a new comer and have a question about HDFS code check out

2010-09-12 Thread Christopher.Shain
You need to use Git (http://git-scm.com). It is a source control tool like CVS, but better suited to large distributed projects. - Original Message - From: Min Long To: hdfs-dev@hadoop.apache.org Sent: Sun Sep 12 23:13:34 2010 Subject: Re: I am a new comer and have a question about

Re: I am a new comer and have a question about HDFS code check out

2010-09-12 Thread Min Long
gards, Min (Catherine) Long, IBM China Systems and Technology Lab. Ryan Rawson wrote on 09/13/2010: Anyone can check out the code, I

Re: I am a new comer and have a question about HDFS code check out

2010-09-12 Thread Ryan Rawson
Anyone can check out the code, I would recommend the git mirrors: http://git.apache.org/ To submit you might want to read: http://wiki.apache.org/hadoop/HowToContribute -ryan On Sun, Sep 12, 2010 at 7:38 PM, Min Long wrote: > Dears, > >   I just joined HDFS dev mailing list. I am currently res

I am a new comer and have a question about HDFS code check out

2010-09-12 Thread Min Long
Dears, I just joined the HDFS dev mailing list. I am currently researching the HDFS interface for security mechanisms. May I know how to get authorization to check out the HDFS code? Thanks! Best Regards, Min (Catherine) Long IBM China Systems and Technology Lab

Re: Source Code Question about blockReport

2010-06-22 Thread Todd Lipcon
On Mon, Jun 21, 2010 at 10:59 PM, Jeff Zhang wrote: > Hi Hadoop Devs, > > I have one question about the blockReport DataNode send to NameNode. I > think NameNode get the blockReport from DataNode, then it can tell > DataNode which block is invalid and which block should be repl

Source Code Question about blockReport

2010-06-22 Thread Jeff Zhang
Hi Hadoop Devs, I have one question about the blockReport that the DataNode sends to the NameNode. I think the NameNode gets the blockReport from the DataNode, and then it can tell the DataNode which blocks are invalid and which blocks should be replicated. But when I look at the source code of the blockReport method of the NameNode, it