Also note that nothing gets deployed back to the Maven central repo from this
job, so there is no interference with other jobs or nodes either.

-Vinay

On Thu, 28 Nov 2019, 10:55 pm Vinayakumar B, <vinayakum...@apache.org>
wrote:

> Hi all,
>
> As a starter, I created a simple mvn-based job (not Yetus and Docker,
> as the current qbt on trunk on x86 uses) at
>
> https://builds.apache.org/job/Hadoop-qbt-linux-ARM-trunk/7/console
>
> Right now it uses the manually installed tools and the previously
> mentioned workarounds for the various third-party dependencies on the
> ARM architecture. It will be triggered automatically once a day.
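>
> For reference, a job of this shape boils down to a plain shell build
> step roughly like the following (a minimal sketch only; the checkout
> directory is assumed and the exact flags live in the Jenkins job
> configuration linked above):
>
>   # hypothetical shell step for the mvn-based ARM qbt job
>   cd hadoop                       # assumed checkout directory of trunk
>   mvn clean install -DskipTests   # make sure everything builds first
>   mvn test -fae                   # run unit tests, continue past failures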
>
> Going forward, we can make sure that Yetus with Docker works fine on
> this node as well and configure it similarly to the x86 qbt run.
>
> -Vinay
>
> On Thu, 28 Nov 2019, 7:30 am Zhenyu Zheng, <zhengzhenyul...@gmail.com>
> wrote:
>
>> Thanks for the reply, Chris, and I really appreciate everything you have
>> done to make our node work. I'm sending this to the mailing list to
>> announce that the node is ready, and I hope someone from the Hadoop
>> project could help us set up some new jobs/builds. I totally understand
>> your role and opinion; I'm not asking you to add jobs for Hadoop, I'm
>> just trying to make clear what we are looking for.
>>
>> As Chris mentioned in previous emails, there are 3 kinds of CI nodes
>> available in the CI system. The 1st and 2nd types have to use the
>> current Infra management tools to install the tools and software
>> required on the system, and those management tools are not yet ready
>> for the ARM platform. The 3rd kind of CI node is what we have ready
>> now: we manually install all the required tools and software and
>> maintain them in line with Infra's other nodes. We will also try to
>> make the Infra management tools usable on the ARM platform so the node
>> can become type 2 or type 1.
>>
>> As for jobs/builds, a periodic job like
>> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-trunk-Commit/
>> seems to be the most suitable for what we are looking for at the
>> current step. Since we still have some errors and failures (15 errors
>> in Hadoop-YARN, 4 failures and 2 errors in Hadoop-HDFS, and 23 failures
>> in Hadoop-MapReduce, which is quite a small number compared to the
>> total number of tests, and failures/errors in the same sub-project seem
>> to be caused by the same problem) that our team will work on, we want
>> to propose 4 different jobs similar to the mechanism used in
>> Hadoop-trunk-Commit: SCM-triggered periodic jobs that test the build
>> and unit tests of each sub-project, namely
>> Hadoop-YARN-trunk-Commit-Aarch64, Hadoop-HDFS-trunk-Commit-Aarch64,
>> Hadoop-MapReduce-trunk-Commit-Aarch64 and
>> Hadoop-Common-trunk-Commit-Aarch64, so that each project can be tracked
>> more closely. We can also start one by one, of course.
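>>
>> Concretely, each of these jobs would poll trunk and run the unit tests
>> of a single sub-project; a rough sketch of the build step (the module
>> path and flags here are assumptions, not a final job config) would be:
>>
>>   # hypothetical build step for Hadoop-YARN-trunk-Commit-Aarch64; the
>>   # other three jobs would point at their own sub-project directories
>>   mvn clean install -DskipTests
>>   cd hadoop-yarn-project
>>   mvn test -fae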
>>
>> I hope this clears up all the misunderstanding.
>>
>> BR,
>>
>> On Wed, Nov 27, 2019 at 10:28 PM Chris Thistlethwaite <chr...@apache.org>
>> wrote:
>>
>> > If anyone would like to follow along in JIRA, here's the ticket
>> > https://issues.apache.org/jira/browse/INFRA-19369. I've been updating
>> > that ticket with any issues. arm-poc has been moved to a node in
>> > Singapore and will need to be tested again with builds.
>> >
>> > I'm going to mention again that someone from Hadoop should be changing
>> > these builds in order to run against arm-poc. In my reply below, I
>> > thought that the project knew about the ARM nodes and was involved
>> > with setting up new builds, which is why I said I'd be willing to make
>> > simple changes for testing. However, I don't want to change things
>> > without the knowledge of the project. The builds themselves are
>> > created by the project, not Infra, which means I have no idea which
>> > build should run against ARM vs any other CPU.
>> >
>> > -Chris T.
>> > #asfinfra
>> >
>> > On 11/22/19 9:28 AM, Chris Thistlethwaite wrote:
>> >
>> > In order to run builds against arm-poc, someone (me included) will
>> > need to change a build config to only use that label. The node itself
>> > isn't fully built out like our other ASF nodes, due to the fact that
>> > it's ARM and we don't have all the packaged tools built for that
>> > architecture, so it will likely take some time to fix issues.
>> >
>> >
>> > -Chris T.
>> > #asfinfra
>> >
>> > On 11/22/19 3:46 AM, bo zhaobo wrote:
>> >
>> > Thanks. It would be great if a project can use the ARM test worker to
>> > do the specific testing on ARM.
>> >
>> > Also, I think it's better to make @Chris Thistlethwaite <
>> > chr...@apache.org> aware of this email. Could you please give some
>> > advice? Thank you.
>> >
>> > BR
>> >
>> > ZhaoBo
>> >
>> >
>> > Zhenyu Zheng <zhengzhenyul...@gmail.com> wrote on Fri, Nov 22, 2019 at
>> > 4:32 PM:
>> >
>> >> Hi Hadoop,
>> >>
>> >>
>> >>
>> >> First off, I want to thank Wei-Chiu for having me at last week's
>> >> Hadoop community sync to introduce our ideas for ARM support in
>> >> Hadoop, and also all the attendees for listening and providing
>> >> suggestions.
>> >>
>> >>
>> >>
>> >> I want to provide an update on the status:
>> >>
>> >> 1. Our teammate has successfully donated an ARM machine to the
>> >> Apache Infra team, and it is set up for running:
>> >> https://builds.apache.org/computer/arm-poc/. It might be a good idea
>> >> to make use of it, e.g. by running some periodic jobs as an
>> >> experiment; it will also benefit our discussions and requests for
>> >> help on identified problems.
>> >>
>> >>
>> >>
>> >> 2. I've kept trying to test and debug sub-project by sub-project,
>> >> and here is the current status for YARN:
>> >>
>> >> When running the whole set of test suites, some suites get skipped
>> >> due to the rule that if a previous test fails, the suite is skipped.
>> >> So I manually ran those skipped suites again to see if they could
>> >> pass; the full test result is:
>> >>
>> >> Total: 5688; Failures: 0; Errors: 15; Skipped: 60
>> >>
>> >> Among the 15 errors, 13 came from the ``Apache Hadoop YARN
>> >> TimelineService HBase tests`` suite. The other 2 came from the
>> >> ``Apache Hadoop YARN DistributedShell`` suite.
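>> >>
>> >> (For reference, the manual re-runs above were just per-module mvn
>> >> invocations along these lines; the module path below is only an
>> >> example of the pattern, not a specific failing suite:)
>> >>
>> >>   # hypothetical re-run of one skipped suite in its own module,
>> >>   # assuming a prior 'mvn install -DskipTests' at the top level
>> >>   cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
>> >>   mvn test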
>> >>
>> >>
>> >>
>> >> 3. Some workarounds:
>> >>
>> >>
>> >>
>> >> 1) The only workaround needed to build Hadoop on ARM is to pre-build
>> >> grpc-java; my teammates are working with that community to release a
>> >> newer version with ARM support: github.com/grpc/grpc-java/issues/6364
>> >>
>> >> 2) For the YARN tests, the TimelineService HBase suite needs either
>> >> HBase 1.4.8 or 2.0.2, which can only be built with protobuf 2.5.0
>> >> (HBase 1.4.8, HBase 2.0.2 external) and protobuf 3.5.1 (HBase 2.0.2
>> >> internal), so we have to pre-build them. The new cause of the error
>> >> is still being debugged.
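>> >>
>> >> (Roughly, the pre-build is the usual autotools build of each protobuf
>> >> version into its own prefix; the paths below are just to illustrate
>> >> the idea and are not part of our actual setup:)
>> >>
>> >>   # hypothetical pre-build of protoc 2.5.0 from source on aarch64;
>> >>   # the bundled config.guess may need refreshing to recognize aarch64
>> >>   tar xf protobuf-2.5.0.tar.gz && cd protobuf-2.5.0
>> >>   ./configure --prefix=/opt/protobuf-2.5.0
>> >>   make -j"$(nproc)" && make install
>> >>   export PATH=/opt/protobuf-2.5.0/bin:$PATH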
>> >>
>> >> 3) The rest of the known issues and possible workarounds are reported
>> >> in the Hadoop Jira and are now tracked under Wei-Chiu's umbrella JIRA:
>> >> https://issues.apache.org/jira/browse/HADOOP-16723
>> >>
>> >>
>> >>
>> >> I have put all the test logs in the attachment and the error-related
>> >> surefire reports on my GitHub:
>> >> https://github.com/ZhengZhenyu/HadoopTestLogs/issues/1 (the attachment
>> >> size is limited when sending to the mailing list). Please have a look
>> >> if you are interested.
>> >>
>> >>
>> >>
>> >> So, how should we move forward a little and make use of the ARM
>> >> resources in Apache Infra?
>> >>
>> >>
>> >>
>> >> Best Regards,
>> >>
>> >>
>> >>
>> >> Zhenyu
>> >>
>> >
>> >
>> >
>>
>
