Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/

[Mar 29, 2017 3:12:02 PM] (jeagles) HADOOP-14216. Improve Configuration XML Parsing Performance (jeagles)
[Mar 29, 2017 5:56:36 PM] (templedf) HDFS-11571. Typo in DataStorage exception message (Contributed by Anna
[Mar 29, 2017 6:13:39 PM] (liuml07) HADOOP-14247. FileContextMainOperationsBaseTest should clean up test
[Mar 29, 2017 7:38:11 PM] (templedf) YARN-5685. RM configuration allows all failover methods to disabled when
[Mar 29, 2017 9:37:21 PM] (wang) HADOOP-14223. Extend FileStatus#toString() to include details like
[Mar 29, 2017 11:18:13 PM] (rkanter) YARN-5654. Not be able to run SLS with FairScheduler (yufeigu via
[Mar 30, 2017 12:41:58 AM] (mingma) MAPREDUCE-6862. Fragments are not handled correctly by resource limit
[Mar 30, 2017 5:10:55 AM] (zhz) MAPREDUCE-6873. MR Job Submission Fails if MR framework application path

-1 overall

The following subsystems voted -1:
    asflicense unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :
        hadoop.security.TestRaceWhenRelogin
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
        hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.client.api.impl.TestAMRMClient
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-compile-javac-root.txt [184K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-checkstyle-root.txt [17M]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-patch-pylint.txt [20K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-patch-shellcheck.txt [24K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-patch-shelldocs.txt [12K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/whitespace-eol.txt [12M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/whitespace-tabs.txt [1.2M]

    javadoc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [136K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [284K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [60K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase-tests.txt [24K]

    asflicense:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/361/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org
Re: Can we update protobuf's version on trunk?
On Wed, Mar 29, 2017 at 4:59 PM, Stack wrote:
>> The former; an intermediate handler decoding, [modifying,] and
>> encoding the record without losing unknown fields.
>>
>
> I did not try this. Did you? Otherwise I can.

Yeah, I did. Same format. -C

>> This looks fine. -C
>>
>> > Thanks,
>> > St.Ack
>> >
>> > # Using the protoc v3.0.2 tool
>> > $ protoc --version
>> > libprotoc 3.0.2
>> >
>> > # I have a simple proto definition with two fields in it
>> > $ more pb.proto
>> > message Test {
>> >   optional string one = 1;
>> >   optional string two = 2;
>> > }
>> >
>> > # This is a text-encoded instance of a 'Test' proto message:
>> > $ more pb.txt
>> > one: "one"
>> > two: "two"
>> >
>> > # Now I encode the above as a pb binary
>> > $ protoc --encode=Test pb.proto < pb.txt > pb.bin
>> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
>> >
>> > # Here is a dump of the binary
>> > $ od -xc pb.bin
>> > 000 030a6e6f126574036f77
>> > \n 003 o n e 022 003 t w o
>> > 012
>> >
>> > # Here is a proto definition file that has a Test Message minus the 'two' field.
>> > $ more pb_drops_two.proto
>> > message Test {
>> >   optional string one = 1;
>> > }
>> >
>> > # Use it to decode the bin file:
>> > $ protoc --decode=Test pb_drops_two.proto < pb.bin
>> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb_drops_two.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
>> > one: "one"
>> > 2: "two"
>> >
>> > Note how the second field is preserved (absent a field name). It is not dropped.
>> >
>> > If I change the syntax of pb_drops_two.proto to be proto3, the field IS dropped.
>> >
>> > # Here proto file with proto3 syntax specified (had to drop the 'optional' qualifier -- not allowed in proto3):
>> > $ more pb_drops_two.proto
>> > syntax = "proto3";
>> > message Test {
>> >   string one = 1;
>> > }
>> >
>> > $ protoc --decode=Test pb_drops_two.proto < pb.bin > pb_drops_two.txt
>> > $ more pb_drops_two.txt
>> > one: "one"
>> >
>> > I cannot reencode the text output using pb_drops_two.proto. It complains:
>> >
>> > $ protoc --encode=Test pb_drops_two.proto < pb_drops_two.txt > pb_drops_two.bin
>> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb_drops_two.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
>> > input:2:1: Expected identifier, got: 2
>> >
>> > Proto 2.5 does same:
>> >
>> > $ ~/bin/protobuf-2.5.0/src/protoc --encode=Test pb_drops_two.proto < pb_drops_two.txt > pb_drops_two.bin
>> > input:2:1: Expected identifier.
>> > Failed to parse input.
>> >
>> > St.Ack
>> >
>> > On Wed, Mar 29, 2017 at 10:14 AM, Stack wrote:
>> >>
>> >> On Tue, Mar 28, 2017 at 4:18 PM, Andrew Wang wrote:
>> >>>
>> >>> > >> If unknown fields are dropped, then applications proxying tokens and other
>> >>> > >> data between servers will effectively corrupt those messages, unless we
>> >>> > >> make everything opaque bytes, which- absent the convenient, prenominate
>> >>> > >> semantics managing the conversion- obviate the compatibility machinery that
>> >>> > >> is the whole point of PB. Google is removing the features that justified
>> >>> > >> choosing PB over its alternatives. Since we can't require that our
>> >>> > >> applications compile (or link) against our updated schema, this creates a
>> >>> > >> problem that PB was supposed to solve.
>> >>> > >
>> >>> > > This is scary, and it potentially affects services outside of the Hadoop
>> >>> > > codebase. This makes it difficult to assess the impact.
>> >>> >
>> >>> > Stack mentioned a compatibility mode that uses the proto2 semantics.
>> >>> > If that carries unknown fields through intermediate handlers, then
>> >>> > this objection goes away. -C
>> >>>
>> >>> Did some more googling, found this:
>> >>>
>> >>> https://groups.google.com/d/msg/protobuf/Z6pNo81FiEQ/fHkdcNtdAwAJ
>> >>>
>> >>> Feng Xiao appears to be a Google engineer, and suggests workarounds like
>> >>> packing the fields into a byte type. No mention of a PB2 compatibility
>> >>> mode. Also here:
>> >>>
>> >>> https://groups.google.com/d/msg/protobuf/bO2L6-_t91Q/-zIaJAR9AAAJ
>> >>>
>> >>> Participants say that unknown fields were dropped for automatic JSON
>> >>> encoding, since you can't losslessly convert to JSON without knowing the
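For anyone who wants to reproduce the same round-trip from Java instead of the protoc command line, a minimal sketch follows. It assumes a hypothetical proto2-generated class PbDropsTwo.Test built from the pb_drops_two.proto shown above (only the "one" field declared); the class name and the pb.bin file name are illustrative, not part of the thread.

import com.google.protobuf.UnknownFieldSet;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class UnknownFieldRoundTrip {
  public static void main(String[] args) throws Exception {
    // pb.bin was encoded with both fields 1 ("one") and 2 ("two").
    byte[] wire = Files.readAllBytes(Paths.get("pb.bin"));

    // Parse with the reduced schema. Under proto2 semantics the
    // unrecognized field 2 is retained in the message's UnknownFieldSet
    // instead of being silently dropped.
    PbDropsTwo.Test msg = PbDropsTwo.Test.parseFrom(wire);
    UnknownFieldSet unknown = msg.getUnknownFields();
    System.out.println("unknown fields present: " + !unknown.asMap().isEmpty());

    // Re-serializing writes the unknown field back out, so an intermediate
    // handler that decodes, modifies, and re-encodes does not corrupt data
    // it does not understand.
    byte[] reencoded = msg.toByteArray();
    System.out.println("round-trip preserved bytes: " + Arrays.equals(wire, reencoded));
  }
}

The point of the sketch is the same one the protoc experiment makes: with proto2 semantics the unknown field survives a decode/encode cycle, which is exactly what a proxying service needs in order not to corrupt messages it only partially understands.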
Re: Can we update protobuf's version on trunk?
On Thu, Mar 30, 2017 at 9:16 AM, Chris Douglas wrote:
> On Wed, Mar 29, 2017 at 4:59 PM, Stack wrote:
> >> The former; an intermediate handler decoding, [modifying,] and
> >> encoding the record without losing unknown fields.
> >>
> >
> > I did not try this. Did you? Otherwise I can.
>
> Yeah, I did. Same format. -C
>

Grand.
St.Ack

> >> This looks fine. -C
> >>
> >> > Thanks,
> >> > St.Ack
> >> >
> >> > # Using the protoc v3.0.2 tool
> >> > $ protoc --version
> >> > libprotoc 3.0.2
> >> >
> >> > # I have a simple proto definition with two fields in it
> >> > $ more pb.proto
> >> > message Test {
> >> >   optional string one = 1;
> >> >   optional string two = 2;
> >> > }
> >> >
> >> > # This is a text-encoded instance of a 'Test' proto message:
> >> > $ more pb.txt
> >> > one: "one"
> >> > two: "two"
> >> >
> >> > # Now I encode the above as a pb binary
> >> > $ protoc --encode=Test pb.proto < pb.txt > pb.bin
> >> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
> >> >
> >> > # Here is a dump of the binary
> >> > $ od -xc pb.bin
> >> > 000 030a6e6f126574036f77
> >> > \n 003 o n e 022 003 t w o
> >> > 012
> >> >
> >> > # Here is a proto definition file that has a Test Message minus the 'two' field.
> >> > $ more pb_drops_two.proto
> >> > message Test {
> >> >   optional string one = 1;
> >> > }
> >> >
> >> > # Use it to decode the bin file:
> >> > $ protoc --decode=Test pb_drops_two.proto < pb.bin
> >> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb_drops_two.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
> >> > one: "one"
> >> > 2: "two"
> >> >
> >> > Note how the second field is preserved (absent a field name). It is not dropped.
> >> >
> >> > If I change the syntax of pb_drops_two.proto to be proto3, the field IS dropped.
> >> >
> >> > # Here proto file with proto3 syntax specified (had to drop the 'optional' qualifier -- not allowed in proto3):
> >> > $ more pb_drops_two.proto
> >> > syntax = "proto3";
> >> > message Test {
> >> >   string one = 1;
> >> > }
> >> >
> >> > $ protoc --decode=Test pb_drops_two.proto < pb.bin > pb_drops_two.txt
> >> > $ more pb_drops_two.txt
> >> > one: "one"
> >> >
> >> > I cannot reencode the text output using pb_drops_two.proto. It complains:
> >> >
> >> > $ protoc --encode=Test pb_drops_two.proto < pb_drops_two.txt > pb_drops_two.bin
> >> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb_drops_two.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
> >> > input:2:1: Expected identifier, got: 2
> >> >
> >> > Proto 2.5 does same:
> >> >
> >> > $ ~/bin/protobuf-2.5.0/src/protoc --encode=Test pb_drops_two.proto < pb_drops_two.txt > pb_drops_two.bin
> >> > input:2:1: Expected identifier.
> >> > Failed to parse input.
> >> >
> >> > St.Ack
> >> >
> >> > On Wed, Mar 29, 2017 at 10:14 AM, Stack wrote:
> >> >>
> >> >> On Tue, Mar 28, 2017 at 4:18 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:
> >> >>>
> >> >>> > >> If unknown fields are dropped, then applications proxying tokens and other
> >> >>> > >> data between servers will effectively corrupt those messages, unless we
> >> >>> > >> make everything opaque bytes, which- absent the convenient, prenominate
> >> >>> > >> semantics managing the conversion- obviate the compatibility machinery that
> >> >>> > >> is the whole point of PB. Google is removing the features that justified
> >> >>> > >> choosing PB over its alternatives. Since we can't require that our
> >> >>> > >> applications compile (or link) against our updated schema, this creates a
> >> >>> > >> problem that PB was supposed to solve.
> >> >>> > >
> >> >>> > > This is scary, and it potentially affects services outside of the Hadoop
> >> >>> > > codebase. This makes it difficult to assess the impact.
> >> >>> >
> >> >>> > Stack mentioned a compatibility mode that uses the proto2 semantics.
> >> >>> > If that carries unknown fields through intermediate handlers, then
> >> >>> > this objection goes away. -C
> >> >>>
> >> >>> Did some more googling, found this:
> >> >>>
> >> >>> https://groups.google.com/d/msg/protobuf/Z6pNo81FiEQ/fHkdcNtdAwAJ
> >> >>>
> >> >>> Feng Xiao ap
[jira] [Created] (HDFS-11598) Improve -setrep for Erasure Coded files
Wei-Chiu Chuang created HDFS-11598:
--------------------------------------

             Summary: Improve -setrep for Erasure Coded files
                 Key: HDFS-11598
                 URL: https://issues.apache.org/jira/browse/HDFS-11598
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: shell
    Affects Versions: 3.0.0-alpha1
            Reporter: Wei-Chiu Chuang

Quoting my comments in HDFS-10974:

{quote}
Looking at the code again, I think we can improve the handling of EC files:
* In SetReplication#processPath, can we check if the file is EC before invoking setReplication() on it?
* -setrep also has a command switch -w to wait for the file to be replicated. Can we ignore this if the file is erasure coded?
{quote}

Also, -setrep in fact only makes sense for the HDFS file system...
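As a rough illustration of the check being suggested, the helper below skips setReplication() (and therefore the -w wait) for erasure-coded files. It assumes a Hadoop 3.x client where FileStatus#isErasureCoded() is available; the class and method names are made up for this sketch and are not part of the actual SetReplication shell command.

import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper sketching the proposed behavior: only call
// setReplication() on replicated (non-EC) files.
public final class SetRepHelper {
  private SetRepHelper() {}

  /** Returns true if replication was changed, false if the path was skipped. */
  public static boolean setRepIfReplicated(FileSystem fs, Path p, short newRep)
      throws IOException {
    FileStatus st = fs.getFileStatus(p);
    if (st.isErasureCoded()) {
      // Replication factor (and the -w wait) is meaningless for EC files.
      return false;
    }
    return fs.setReplication(p, newRep);
  }
}

A caller such as the shell command could then print a "skipped: erasure coded" message when false is returned, and only enter the -w wait loop for files where the replication factor actually changed.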
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/

[Mar 29, 2017 3:12:02 PM] (jeagles) HADOOP-14216. Improve Configuration XML Parsing Performance (jeagles)
[Mar 29, 2017 5:56:36 PM] (templedf) HDFS-11571. Typo in DataStorage exception message (Contributed by Anna
[Mar 29, 2017 6:13:39 PM] (liuml07) HADOOP-14247. FileContextMainOperationsBaseTest should clean up test
[Mar 29, 2017 7:38:11 PM] (templedf) YARN-5685. RM configuration allows all failover methods to disabled when
[Mar 29, 2017 9:37:21 PM] (wang) HADOOP-14223. Extend FileStatus#toString() to include details like
[Mar 29, 2017 11:18:13 PM] (rkanter) YARN-5654. Not be able to run SLS with FairScheduler (yufeigu via
[Mar 30, 2017 12:41:58 AM] (mingma) MAPREDUCE-6862. Fragments are not handled correctly by resource limit
[Mar 30, 2017 5:10:55 AM] (zhz) MAPREDUCE-6873. MR Job Submission Fails if MR framework application path
[Mar 30, 2017 9:11:50 AM] (aajisaka) HADOOP-14256. [S3A DOC] Correct the format for "Seoul" example.

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests :
        hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
        hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.timeline.TestRollingLevelDB
        hadoop.yarn.server.timeline.TestTimelineDataManager
        hadoop.yarn.server.timeline.TestLeveldbTimelineStore
        hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
        hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
        hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
        hadoop.yarn.server.resourcemanager.TestRMRestart
        hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
        hadoop.yarn.server.TestContainerManagerSecurity
        hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
        hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
        hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.applications.distributedshell.TestDistributedShell
        hadoop.mapred.TestShuffleHandler
        hadoop.mapreduce.v2.app.TestRuntimeEstimators
        hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService

    Timed out junit tests :
        org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
        org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

    compile:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-compile-root.txt [140K]

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-compile-root.txt [140K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-compile-root.txt [140K]

    unit:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [332K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [16K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [72K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/273/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-plugin
[jira] [Created] (HDFS-11599) distcp interrupt does not kill hadoop job
David Fagnan created HDFS-11599:
-----------------------------------

             Summary: distcp interrupt does not kill hadoop job
                 Key: HDFS-11599
                 URL: https://issues.apache.org/jira/browse/HDFS-11599
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.7.3
            Reporter: David Fagnan

A keyboard interrupt, for example, leaves the hadoop job and the copy still running. Is this intended behavior?
Re: Can we update protobuf's version on trunk?
Great. If y'all are satisfied, I am too. My only other request is that we
shade PB even for the non-client JARs, since empirically there are a lot of
downstream projects pulling in our server-side artifacts.

On Thu, Mar 30, 2017 at 9:55 AM, Stack wrote:
> On Thu, Mar 30, 2017 at 9:16 AM, Chris Douglas wrote:
>> On Wed, Mar 29, 2017 at 4:59 PM, Stack wrote:
>> >> The former; an intermediate handler decoding, [modifying,] and
>> >> encoding the record without losing unknown fields.
>> >>
>> >
>> > I did not try this. Did you? Otherwise I can.
>>
>> Yeah, I did. Same format. -C
>>
>
> Grand.
> St.Ack
>
>> >> This looks fine. -C
>> >>
>> >> > Thanks,
>> >> > St.Ack
>> >> >
>> >> > # Using the protoc v3.0.2 tool
>> >> > $ protoc --version
>> >> > libprotoc 3.0.2
>> >> >
>> >> > # I have a simple proto definition with two fields in it
>> >> > $ more pb.proto
>> >> > message Test {
>> >> >   optional string one = 1;
>> >> >   optional string two = 2;
>> >> > }
>> >> >
>> >> > # This is a text-encoded instance of a 'Test' proto message:
>> >> > $ more pb.txt
>> >> > one: "one"
>> >> > two: "two"
>> >> >
>> >> > # Now I encode the above as a pb binary
>> >> > $ protoc --encode=Test pb.proto < pb.txt > pb.bin
>> >> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
>> >> >
>> >> > # Here is a dump of the binary
>> >> > $ od -xc pb.bin
>> >> > 000 030a6e6f126574036f77
>> >> > \n 003 o n e 022 003 t w o
>> >> > 012
>> >> >
>> >> > # Here is a proto definition file that has a Test Message minus the 'two' field.
>> >> > $ more pb_drops_two.proto
>> >> > message Test {
>> >> >   optional string one = 1;
>> >> > }
>> >> >
>> >> > # Use it to decode the bin file:
>> >> > $ protoc --decode=Test pb_drops_two.proto < pb.bin
>> >> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb_drops_two.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
>> >> > one: "one"
>> >> > 2: "two"
>> >> >
>> >> > Note how the second field is preserved (absent a field name). It is not dropped.
>> >> >
>> >> > If I change the syntax of pb_drops_two.proto to be proto3, the field IS dropped.
>> >> >
>> >> > # Here proto file with proto3 syntax specified (had to drop the 'optional' qualifier -- not allowed in proto3):
>> >> > $ more pb_drops_two.proto
>> >> > syntax = "proto3";
>> >> > message Test {
>> >> >   string one = 1;
>> >> > }
>> >> >
>> >> > $ protoc --decode=Test pb_drops_two.proto < pb.bin > pb_drops_two.txt
>> >> > $ more pb_drops_two.txt
>> >> > one: "one"
>> >> >
>> >> > I cannot reencode the text output using pb_drops_two.proto. It complains:
>> >> >
>> >> > $ protoc --encode=Test pb_drops_two.proto < pb_drops_two.txt > pb_drops_two.bin
>> >> > [libprotobuf WARNING google/protobuf/compiler/parser.cc:546] No syntax specified for the proto file: pb_drops_two.proto. Please use 'syntax = "proto2";' or 'syntax = "proto3";' to specify a syntax version. (Defaulted to proto2 syntax.)
>> >> > input:2:1: Expected identifier, got: 2
>> >> >
>> >> > Proto 2.5 does same:
>> >> >
>> >> > $ ~/bin/protobuf-2.5.0/src/protoc --encode=Test pb_drops_two.proto < pb_drops_two.txt > pb_drops_two.bin
>> >> > input:2:1: Expected identifier.
>> >> > Failed to parse input.
>> >> >
>> >> > St.Ack
>> >> >
>> >> > On Wed, Mar 29, 2017 at 10:14 AM, Stack wrote:
>> >> >>
>> >> >> On Tue, Mar 28, 2017 at 4:18 PM, Andrew Wang <andrew.w...@cloudera.com> wrote:
>> >> >>>
>> >> >>> > >> If unknown fields are dropped, then applications proxying tokens and other
>> >> >>> > >> data between servers will effectively corrupt those messages, unless we
>> >> >>> > >> make everything opaque bytes, which- absent the convenient, prenominate
>> >> >>> > >> semantics managing the conversion- obviate the compatibility machinery that
>> >> >>> > >> is the whole point of PB. Google is removing the features that justified
>> >> >>> > >> choosing PB over its alternatives. Since we can't require that our
>> >> >>> > >> applications compile (or link) against our updated schema, this creates a
>> >> >>> > >> problem that PB was supposed to solve.
>> >> >>> > >
>> >> >>> > > This is scary, and it potentially affects services outside of the Hadoop
>> >> >>> > > codebas
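For reference, shading PB for the server-side artifacts would amount to a maven-shade-plugin relocation along these lines; the shaded package name below is purely illustrative and not something decided in this thread.

<!-- Illustrative only: relocate the bundled protobuf classes inside a shaded
     jar so downstream projects never see com.google.protobuf on our classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <!-- hypothetical target package, for illustration only -->
            <shadedPattern>org.apache.hadoop.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>

With a relocation like this, downstream projects that pull in server-side jars would keep resolving their own protobuf version, while Hadoop's internal wire handling uses the relocated copy.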
[jira] [Created] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes
Andrew Wang created HDFS-11600:
----------------------------------

             Summary: Refactor TestDFSStripedOutputStreamWithFailure test classes
                 Key: HDFS-11600
                 URL: https://issues.apache.org/jira/browse/HDFS-11600
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: test
    Affects Versions: 3.0.0-alpha2
            Reporter: Andrew Wang
            Priority: Minor

TestDFSStripedOutputStreamWithFailure has a great number of subclasses, and the tests are parameterized based on the names of these subclasses. It seems we could parameterize these tests with JUnit instead and then not need all these separate test classes.

Another note: the tests will randomly return instead of running the test body. Using {{Assume}} instead would make it clearer in the test output that these tests were skipped.
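For illustration, the JUnit parameterization suggested above might look roughly like the sketch below. The parameter values are placeholders; the real refactoring would enumerate the striped-output failure scenarios currently encoded in the subclass names.

import static org.junit.Assume.assumeTrue;

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Illustrative-only sketch of a parameterized replacement for the many
// TestDFSStripedOutputStreamWithFailure subclasses.
@RunWith(Parameterized.class)
public class TestStripedOutputParameterized {

  @Parameters(name = "length={0}")
  public static Collection<Object[]> data() {
    // Placeholder parameters; the real list would cover the existing cases.
    return Arrays.asList(new Object[][] {{1}, {64 * 1024}, {1024 * 1024}});
  }

  private final int length;

  public TestStripedOutputParameterized(int length) {
    this.length = length;
  }

  @Test
  public void testWriteWithFailure() {
    // Use Assume rather than silently returning, so skipped cases are
    // reported as skipped in the test output.
    assumeTrue("case not applicable in this environment", length > 0);
    // ... the actual striped-write-with-failure test body would go here ...
  }
}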
[jira] [Created] (HDFS-11601) Ozone: Compact DB should be called on Open Containers.
Anu Engineer created HDFS-11601:
-----------------------------------

             Summary: Ozone: Compact DB should be called on Open Containers.
                 Key: HDFS-11601
                 URL: https://issues.apache.org/jira/browse/HDFS-11601
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: ozone
    Affects Versions: HDFS-7240
            Reporter: Anu Engineer

The discussion in HDFS-11594 pointed to a potential issue we might run into: too many delete-key operations can take place and make a DB slow, and running compactDB in those cases is useful. Currently we run compactDB only when we close a container. This JIRA tracks a potential improvement of running compactDB on open containers as well.
[jira] [Created] (HDFS-11602) Enable HttpFS Tomcat access logging
John Zhuge created HDFS-11602:
---------------------------------

             Summary: Enable HttpFS Tomcat access logging
                 Key: HDFS-11602
                 URL: https://issues.apache.org/jira/browse/HDFS-11602
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: httpfs
    Affects Versions: 2.6.0
            Reporter: John Zhuge
            Assignee: John Zhuge

Use Tomcat {{org.apache.catalina.valves.AccessLogValve}} to enable access logging. Verify that the solution works behind a load balancer or proxy.
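For reference, enabling the valve is a one-line addition to the <Host> element of the HttpFS Tomcat server.xml; the snippet below is a sketch with example directory, prefix, and pattern values rather than a tested configuration, and the server.xml location depends on how HttpFS is packaged.

<!-- Illustrative only: standard Tomcat access logging for the HttpFS webapp. -->
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="logs"
       prefix="httpfs_access_log."
       suffix=".log"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />

When HttpFS sits behind a load balancer or proxy, adding %{X-Forwarded-For}i to the pattern records the original client address, which helps with the verification mentioned above.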
[jira] [Created] (HDFS-11603) Make slow mirror/disk warnings in BlockReceiver more useful
Arpit Agarwal created HDFS-11603:
------------------------------------

             Summary: Make slow mirror/disk warnings in BlockReceiver more useful
                 Key: HDFS-11603
                 URL: https://issues.apache.org/jira/browse/HDFS-11603
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
            Reporter: Arpit Agarwal
            Assignee: Arpit Agarwal

The slow mirror warnings in the DataNode BlockReceiver should include the peer's address. Similarly, the slow disk warnings should include the volume path.