Re: [yocto] systemd Version Going Backwards on Warrior

2019-10-29 Thread Ross Burton

On 29/10/2019 04:41, Robert Joslyn wrote:

On Mon, 2019-10-28 at 19:06 +, Ross Burton wrote:

On 28/10/2019 16:25, robert.jos...@redrectangle.org wrote:

I'm using buildhistory in one of my builds that creates a package
feed, and a recent update to systemd on warrior triggered version-
going-backwards errors:

ERROR: systemd-conf-241+AUTOINC+511646b8ac-r0 do_packagedata: QA
Issue: Package version for package systemd-conf-src went backwards
which would break package feeds from (0:241+0+c1f8ff8d0d-r0 to
0:241+0+511646b8ac-r0) [version-going-backwards]

Should PE have been updated at the same time due to the hash making
the version number go backwards? I can send a patch if that's all
that's missing. Or is a PR server enough to prevent this? My debug
builds do not use a PR server, but my production builds do use a PR
server.


If you're using feeds, you need to use a PR server.  This is *exactly*
what they are for.

>


The part I wasn't sure about was whether the PR server helped in the case
where PV went backwards. I know it works when PV stays the same but the
package was rebuilt. But if it keeps the versions going forward no matter
how PV changes, then I should be good. I should probably set up a separate
PR server on my debug builds to avoid this kind of error.


If the PV actually goes backwards then the PR service isn't useful, as 
PV sorts before PR.


However, in this case the problem is that SRCREV should have an 
incrementing counter (the +0+ should be +1+ in the rebuild), which I 
believe comes from the PR service.  I may be wrong...
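
For reference, the PR service is pointed at via PRSERV_HOST in local.conf;
a minimal sketch, where "localhost:0" tells BitBake to start and manage a
local instance itself, and the shared host name and port are placeholders:

  PRSERV_HOST = "localhost:0"
  # or, pointing at a shared server:
  # PRSERV_HOST = "prserv.example.com:8585"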


Ross
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] systemd Version Going Backwards on Warrior

2019-10-29 Thread akuster


On 10/29/19 11:50 AM, Ross Burton wrote:
> On 29/10/2019 04:41, Robert Joslyn wrote:
>> On Mon, 2019-10-28 at 19:06 +, Ross Burton wrote:
>>> On 28/10/2019 16:25, robert.jos...@redrectangle.org wrote:
 I'm using buildhistory in one of my builds that creates a package
 feed, and a recent update to systemd on warrior triggered version-
 going-backwards errors:

 ERROR: systemd-conf-241+AUTOINC+511646b8ac-r0 do_packagedata: QA
 Issue: Package version for package systemd-conf-src went backwards
 which would break package feeds from (0:241+0+c1f8ff8d0d-r0 to
 0:241+0+511646b8ac-r0) [version-going-backwards]

Isn't this due to the hashes being different, so the PR server can't
really tell which one is newer?



 Should PE have been updated at the same time due to the hash making
 the version number go backwards? I can send a patch if that's all
 that's missing. Or is a PR server enough to prevent this? My debug
 builds do not use a PR server, but my production builds do use a PR
 server.
>>>
>>> If you're using feeds, you need to use a PR server.  This is *exactly*
>>> what they are for.
> >
>
>> The part I wasn't sure about was if the PR server helped in the case
>> where
>> PV went backwards. I know it works when PV stays the same but the
>> package
>> was rebuilt. But if it keeps the versions going forward no matter how PV
>> changes, then I should be good. I should probably setup a separate PR
>> server on my debug builds to avoid this kind of error.
>
> If the PV actually goes backwards then the PR service isn't useful, as
> PV sorts before PR.
>
> However, in this case the problem is that SRCREV should have an
> incrementing counter (the +0+ should be +1+ in the rebuild), which I
> believe comes from the PR service.  I may be wrong...
>
> Ross

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Yocto Project Status WW44’19

2019-10-29 Thread Stephen K Jolley
Current Dev Position: YP 3.1 M1

Next Deadline: YP 3.1 M1 build Dec. 2, 2019

SWAT Team Rotation:

   - SWAT lead is currently: Amanda

   - SWAT team rotation: Amanda -> Armin on Nov. 1, 2019

   - SWAT team rotation: Armin -> Anuj on Nov. 8, 2019

   - https://wiki.yoctoproject.org/wiki/Yocto_Build_Failure_Swat_Team


Next Team Meetings:

   - Bug Triage meeting Thursday Nov. 7th at 7:30am PDT
     (https://zoom.us/j/454367603)

   - Monthly Project Meeting Tuesday Nov. 5th at 8am PDT
     (https://zoom.us/j/990892712)

   - Weekly Engineering Sync Tuesday Nov. 12th at 8am PDT
     (https://zoom.us/j/990892712)

   - Twitch - Next event is Tuesday Nov. 12th at 8am PDT
     (https://www.twitch.tv/yocto_project)


Key Status/Updates:

   - Yocto Project “Zeus” 3.0 has been released!  Thank you to everyone who
     contributed patches, bugs, feedback and testing.  Some very rough git
     metrics say that 182 different people have contributed patches to this
     cycle.

   - This week is ELC-E in Lyon, so meetings are limited.  If anyone reading
     this is there, please do visit the Yocto Project booth and say hello!

   - Patches have been flowing fast into master.  Due to ELC-E this will slow
     down this week, but Ross will continue to collect patches for testing in
     ross/mut.

   - There are ongoing intermittent autobuilder failures, particularly in
     selftest but in other areas too. There is a separate email about this,
     and we could do with help in debugging and resolving those issues.

   - YP 2.6.4 was built and has passed QA; it will be released imminently.

   - YP 2.7.2 was held due to an unexplained test failure but will now be
     built in the next few days.

   - Armin and Anuj have volunteered to maintain Zeus, and they plan to work
     out the maintainership between them, thanks!

   - We have begun collecting ideas for YP 3.1 in this document:
     https://docs.google.com/document/d/1UKZIGe88-eq3-pOPtkAvFAegbQDzhy_f4ye64yjnABc/edit?usp=sharing

   - If anyone has any status items for the project they’d like to add to the
     weekly reports, please email Richard and Stephen.


Planned upcoming dot releases:

   - YP 2.7.2 (Warrior) is planned this week.

   - YP 2.6.4 (Thud) is to be released shortly.


Tracking Metrics:

   - WDD 2493 (last week 2498)
     (https://wiki.yoctoproject.org/charts/combo.html)

   - Poky Patch Metrics

      - Total patches found: 1441 (last week 1432)

      - Patches in the Pending State: 579 (40%) [last week 578 (41%)]


The Yocto Project’s technical governance is through its Technical Steering
Committee; more information is available at:

https://wiki.yoctoproject.org/wiki/TSC

The Status reports are now stored on the wiki at:
https://wiki.yoctoproject.org/wiki/Weekly_Status

[If anyone has suggestions for other information you’d like to see on this
weekly status update, let us know!]

-- 

Thanks,



*Stephen K. Jolley*

*Yocto Project Program Manager*

*7867 SW Bayberry Dr., Beaverton, OR 97007*

*Cell*: (208) 244-4460

*Email*: sjolley.yp...@gmail.com
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] systemd Version Going Backwards on Warrior

2019-10-29 Thread Martin Jansa
The PR server never knows which one is really newer (in git).

It just returns max(LOCALCOUNT)+1 when it gets a query for a hash which
isn't stored in the database yet.
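
Conceptually it behaves like this sketch against the PRMAIN_nohist table
shown further down (the real queries inside PRserv may differ):

cache$ sqlite3 prserv.sqlite3 "SELECT max(value) + 1 FROM PRMAIN_nohist \
  WHERE version = 'AUTOINC-systemd-1_241+' AND pkgarch = 'qemux86'"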

Either the build in question didn't use PRserv at all, or PRserv's cache was
deleted between builds, or the builds were using the same buildhistory but
different PRservs, or the first systemd SRCREV was reused from sstate
created on another server which doesn't share the PRserv (so it was never
built locally, and the local PRserv was never queried to store 511646b8ac
as LOCALCOUNT).

e.g. I've just built systemd with the 511646b8ac hash as:
systemd_241+0+511646b8ac-r0.0webos4_qemux86.ipk
but it was reused from sstate created on another Jenkins server, as shown by
buildstats-summary:

NOTE: Build completion summary:
NOTE:   do_populate_sysroot: 100.0% sstate reuse(124 setscene, 0 scratch)
NOTE:   do_package_qa: 100.0% sstate reuse(10 setscene, 0 scratch)
NOTE:   do_packagedata: 100.0% sstate reuse(51 setscene, 0 scratch)
NOTE:   do_package_write_ipk: 100.0% sstate reuse(10 setscene, 0 scratch)
NOTE:   do_populate_lic: 100.0% sstate reuse(17 setscene, 0 scratch)

Then I reverted oe-core commit 8b9703454cb2a8a0aa6b7942498f191935d547ea
to go back to the c1f8ff8d0de7e303b8004b02a0a47d4cc103a7f8 systemd revision.

This time it didn't find a valid sstate archive for it:
NOTE: Build completion summary:
NOTE:   do_populate_sysroot: 0.0% sstate reuse(0 setscene, 16 scratch)
NOTE:   do_package_qa: 0.0% sstate reuse(0 setscene, 19 scratch)
NOTE:   do_package: 15.8% sstate reuse(3 setscene, 16 scratch)
NOTE:   do_packagedata: 0.0% sstate reuse(0 setscene, 16 scratch)
NOTE:   do_package_write_ipk: 0.0% sstate reuse(0 setscene, 19 scratch)
NOTE:   do_populate_lic: 100.0% sstate reuse(2 setscene, 0 scratch)

and the resulting .ipk also has +0:
systemd_241+0+c1f8ff8d0d-r0.0webos4_qemux86.ipk
but no warning is shown, because in this case it went from 511 to c1f.
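
That direction is the point: under the deb/ipk version-comparison rules the
511... suffix sorts lower than c1f..., which can be sanity-checked with
dpkg's comparator (opkg uses essentially the same algorithm):

dpkg --compare-versions 241+0+511646b8ac lt 241+0+c1f8ff8d0d && echo lower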

Removing the revert again doesn't trigger the warning either, because the
result is again reused from sstate (and the QA checks won't get executed):
NOTE: Build completion summary:
NOTE:   do_populate_sysroot: 100.0% sstate reuse(16 setscene, 0 scratch)
NOTE:   do_package_qa: 100.0% sstate reuse(19 setscene, 0 scratch)
NOTE:   do_packagedata: 100.0% sstate reuse(16 setscene, 0 scratch)
NOTE:   do_package_write_ipk: 100.0% sstate reuse(19 setscene, 0 scratch)
NOTE:   do_populate_lic: 100.0% sstate reuse(2 setscene, 0 scratch)

And the local PRserv database still has only the c1f8ff8d0d hash,
because 511646b8ac was never really queried against this local PRserv.

cache$ sqlite3 prserv.sqlite3 "select * from PRMAIN_nohist where version
like 'AUTOINC-systemd-1%'"
AUTOINC-systemd-1_241+|qemux86|AUTOINC+c1f8ff8d0d|0

Also, the systemd recipe is using a strange format with:
PV_append = "+${SRCPV}"

Most recipes use "+git${SRCPV}" or "+gitr${SRCPV}" to make it clearer
where this +0+hash comes from.
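
For illustration, that common pattern in a hypothetical git-based recipe
(the name, URL and base version here are placeholders):

SRC_URI = "git://example.com/foo.git;branch=master"
SRCREV = "${AUTOREV}"
PV = "1.0+git${SRCPV}"

which yields package versions like 1.0+git0+<hash>, making the git origin
of the 0+<hash> part obvious at a glance.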

So, long story short: the change is correct and PRserv should handle this.
There are many cases where it will fail (e.g.
https://bugzilla.yoctoproject.org/show_bug.cgi?id=5399), but that's not a
reason to start PE bumps everywhere.

Cheers,

On Tue, Oct 29, 2019 at 12:57 PM akuster  wrote:

>
>
> On 10/29/19 11:50 AM, Ross Burton wrote:
> > On 29/10/2019 04:41, Robert Joslyn wrote:
> >> On Mon, 2019-10-28 at 19:06 +, Ross Burton wrote:
> >>> On 28/10/2019 16:25, robert.jos...@redrectangle.org wrote:
>  I'm using buildhistory in one of my builds that creates a package
>  feed, and a recent update to systemd on warrior triggered version-
>  going-backwards errors:
> 
>  ERROR: systemd-conf-241+AUTOINC+511646b8ac-r0 do_packagedata: QA
>  Issue: Package version for package systemd-conf-src went backwards
>  which would break package feeds from (0:241+0+c1f8ff8d0d-r0 to
>  0:241+0+511646b8ac-r0) [version-going-backwards]
>
> Isn't this due to the hashes being different, so the PR server can't
> really tell which one is newer?
>
>
> 
>  Should PE have been updated at the same time due to the hash making
>  the version number go backwards? I can send a patch if that's all
>  that's missing. Or is a PR server enough to prevent this? My debug
>  builds do not use a PR server, but my production builds do use a PR
>  server.
> >>>
> >>> If you're using feeds, you need to use a PR server.  This is *exactly*
> >>> what they are for.
> > >
> >
> >> The part I wasn't sure about was if the PR server helped in the case
> >> where
> >> PV went backwards. I know it works when PV stays the same but the
> >> package
> >> was rebuilt. But if it keeps the versions going forward no matter how PV
> >> changes, then I should be good. I should probably setup a separate PR
> >> server on my debug builds to avoid this kind of error.
> >
> > If the PV actually goes backwards then the PR service isn't useful, as
> > PV sorts before PR.
> >
> > However, in this case the problem is that SRCREV should have an
> > incrementing counter (the +0

Re: [yocto] [OE-core] Yocto Project Status WW44’19

2019-10-29 Thread akuster808


On 10/29/19 5:12 PM, Stephen K Jolley wrote:
>
> Current Dev Position: YP 3.1 M1 
>
> Next Deadline: YP 3.1 M1 build Dec. 2, 2019
>

I noticed there is no 3.0.1 schedule.  Can we try for early December?

>
> SWAT Team Rotation:
>
>  * SWAT lead is currently: Amanda
>
>  * SWAT team rotation: Amanda -> Armin on Nov. 1, 2019
>
I will be on vacation next week, so there may be delays in checking the
builds. But that's no worse than what I normally do.
No need to reschedule.

>  * SWAT team rotation: Armin -> Anuj on Nov. 8, 2019
>
>  * https://wiki.yoctoproject.org/wiki/Yocto_Build_Failure_Swat_Team
>

We could use more volunteers to help monitor builds.

>
> Next Team Meetings:
>
>  * Bug Triage meeting Thursday Nov. 7th at 7:30am PDT
>    (https://zoom.us/j/454367603)
>
>  * Monthly Project Meeting Tuesday Nov. 5th at 8am PDT
>    (https://zoom.us/j/990892712)
>
>  * Weekly Engineering Sync Tuesday Nov. 12th at 8am PDT
>    (https://zoom.us/j/990892712)
>
>  * Twitch - Next event is Tuesday Nov. 12th at 8am PDT
>    (https://www.twitch.tv/yocto_project)
>
>
> Key Status/Updates:
>
>  * Yocto Project “Zeus” 3.0 has been released!  Thank you to everyone
>    who contributed patches, bugs, feedback and testing.  Some very
>    rough git metrics say that 182 different people have contributed
>    patches to this cycle.
>
>  * This week is ELC-E in Lyon, so meetings are limited.  If anyone
>    reading this is there please do visit the Yocto Project booth and
>    say hello!
>
>  * Patches have been flowing fast into master.  Due to ELC-E this
>    will slow down this week, but Ross will continue to collect
>    patches for testing in ross/mut.
>
>  * There are ongoing intermittent autobuilder failures, particularly
>    in selftest but in other areas too. There is a separate email
>    about this and we could do with help in debugging and resolving
>    those issues.
>
>  * YP 2.6.4 was built and has passed QA, will be released imminently.
>
>  * YP 2.7.2 was held due to an unexplained test failure but will now
>    be built in the next few days.
>
>  * Armin and Anuj have volunteered to maintain Zeus and they plan to
>    work out the maintainership between them, thanks!
>

I have updated the wiki to reflect that.

I also added in 3.1 and 3.2

Any notion of a code name?

>  * We have begun collecting ideas for YP 3.1 in this document:
>    https://docs.google.com/document/d/1UKZIGe88-eq3-pOPtkAvFAegbQDzhy_f4ye64yjnABc/edit?usp=sharing
>
>  * If anyone has any status items for the project they’d like to add
>    to the weekly reports, please email Richard and Stephen.
>

When are we planning on adding CentOS 8?

>
> Planned upcoming dot releases:
>
>  * YP 2.7.2 (Warrior) is planned this week.
>
>  * YP 2.6.4 (Thud) is to be released shortly.
>

This may be the last Thud update, and I still have patches being queued.
Not sure when it will shift to community support.


I have started marking older builds as "EOL" on the release page.


>
> Tracking Metrics:
>
>  * WDD 2493 (last week 2498)
>    (https://wiki.yoctoproject.org/charts/combo.html)
>
>  * Poky Patch Metrics
>
>    o Total patches found: 1441 (last week 1432)
>
>    o Patches in the Pending State: 579 (40%) [last week 578 (41%)]
>
>
> The Yocto Project’s technical governance is through its Technical
> Steering Committee, more information is available at:
>
> https://wiki.yoctoproject.org/wiki/TSC
>
>
> The Status reports are now stored on the wiki at:
> https://wiki.yoctoproject.org/wiki/Weekly_Status
>
>
> [If anyone has suggestions for other information you’d like to see on
> this weekly status update, let us know!]
>

Do we want a separate stable report, or should it be included in this one?

-armin
>
> -- 
>
> Thanks,
>
>  
>
> */Stephen K. Jolley/*
>
> *Yocto Project Program Manager*
>
> *7867 SW Bayberry Dr., Beaverton, OR 97007*
>
> *Cell*: (208) 244-4460
>
> *Email*: sjolley.yp...@gmail.com
>
>

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] systemd Version Going Backwards on Warrior

2019-10-29 Thread Robert Joslyn
On Tue, 2019-10-29 at 20:30 +0100, Martin Jansa wrote:
> The PR server never knows which one is really newer (in git).
> 
> It just returns max(LOCALCOUNT)+1 when it gets a query for a hash which
> isn't stored in the database yet.
> 
> Either the build in question didn't use PRserv at all, or PRserv's cache
> was deleted between builds, or the builds were using the same
> buildhistory but different PRservs, or the first systemd SRCREV was
> reused from sstate created on another server which doesn't share the
> PRserv (so it was never built locally, and the local PRserv was never
> queried to store 511646b8ac as LOCALCOUNT).
> 
> e.g. I've just built systemd with the 511646b8ac hash as:
> systemd_241+0+511646b8ac-r0.0webos4_qemux86.ipk
> but it was reused from sstate created on another Jenkins server, as shown
> by buildstats-summary:
> 
> NOTE: Build completion summary:
> NOTE:   do_populate_sysroot: 100.0% sstate reuse(124 setscene, 0
> scratch)
> NOTE:   do_package_qa: 100.0% sstate reuse(10 setscene, 0 scratch)
> NOTE:   do_packagedata: 100.0% sstate reuse(51 setscene, 0 scratch)
> NOTE:   do_package_write_ipk: 100.0% sstate reuse(10 setscene, 0
> scratch)
> NOTE:   do_populate_lic: 100.0% sstate reuse(17 setscene, 0 scratch)
> 
> Then I reverted oe-core
> commit 8b9703454cb2a8a0aa6b7942498f191935d547ea to go back
> to the c1f8ff8d0de7e303b8004b02a0a47d4cc103a7f8 systemd revision.
> 
> This time it didn't find a valid sstate archive for it:
> NOTE: Build completion summary:
> NOTE:   do_populate_sysroot: 0.0% sstate reuse(0 setscene, 16 scratch)
> NOTE:   do_package_qa: 0.0% sstate reuse(0 setscene, 19 scratch)
> NOTE:   do_package: 15.8% sstate reuse(3 setscene, 16 scratch)
> NOTE:   do_packagedata: 0.0% sstate reuse(0 setscene, 16 scratch)
> NOTE:   do_package_write_ipk: 0.0% sstate reuse(0 setscene, 19 scratch)
> NOTE:   do_populate_lic: 100.0% sstate reuse(2 setscene, 0 scratch)
> 
> and the resulting .ipk also has +0:
> systemd_241+0+c1f8ff8d0d-r0.0webos4_qemux86.ipk
> but no warning is shown, because in this case it went from 511 to c1f.
> 
> Removing the revert again doesn't trigger the warning either, because the
> result is again reused from sstate (and the QA checks won't get executed):
> NOTE: Build completion summary:
> NOTE:   do_populate_sysroot: 100.0% sstate reuse(16 setscene, 0 scratch)
> NOTE:   do_package_qa: 100.0% sstate reuse(19 setscene, 0 scratch)
> NOTE:   do_packagedata: 100.0% sstate reuse(16 setscene, 0 scratch)
> NOTE:   do_package_write_ipk: 100.0% sstate reuse(19 setscene, 0
> scratch)
> NOTE:   do_populate_lic: 100.0% sstate reuse(2 setscene, 0 scratch)
> 
> And the local PRserv database still has only the c1f8ff8d0d hash,
> because 511646b8ac was never really queried against this local PRserv.
> 
> cache$ sqlite3 prserv.sqlite3 "select * from PRMAIN_nohist where version
> like 'AUTOINC-systemd-1%'"
> AUTOINC-systemd-1_241+|qemux86|AUTOINC+c1f8ff8d0d|0
> 
> Also, the systemd recipe is using a strange format with:
> PV_append = "+${SRCPV}"
> 
> Most recipes use "+git${SRCPV}" or "+gitr${SRCPV}" to make it clearer
> where this +0+hash comes from.
> 
> So, long story short: the change is correct and PRserv should handle
> this. There are many cases where it will fail (e.g.
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=5399), but that's not
> a reason to start PE bumps everywhere.

I think this explains what I'm seeing and matches what Ross said. I set up
the same test and am able to see the version going forward properly
when I have the PR server enabled. The file goes from

systemd_241+0+c1f8ff8d0d-r0.0_core2-64.ipk
to
systemd_241+1+511646b8ac-r0.0_core2-64.ipk

It wasn't obvious to me that the +0 would be incremented by the PR server;
I guess I never noticed it before. I already had the PR server running for
my production builds, but I didn't have it enabled for my test builds
where I got the error. I'll set up another PR server for my test builds to
prevent false alarms like this.
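
A standalone instance for the test builds can be run with bitbake-prserv
and pointed at from local.conf; a sketch, with placeholder host, port and
paths:

bitbake-prserv --start --host 127.0.0.1 --port 8585 \
    --file /path/to/prserv-test.sqlite3 --log /path/to/prserv-test.log

PRSERV_HOST = "127.0.0.1:8585"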

Thanks for the help!

Robert


-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [zeus] icu-native-64.2-r0 do_configure: configure failed

2019-10-29 Thread star
My image build failed, and I got strange, long error messages like:
ERROR: icu-native-64.2-r0 do_configure: configure failed
ERROR: icu-native-64.2-r0 do_configure: Execution of 
'/home/.../tmp/work/x86_64-linux/icu-native/64.2-r0/temp/run.do_configure
...
| configure: WARNING: unrecognized options: --disable-silent-rules, 
--disable-dependency-tracking
| Not rebuilding data/rules.mk, assuming prebuilt data in data/in
| Spawning Python to generate test/testdata/rules.mk...
| Traceback (most recent call last):
| File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
| "__main__", mod_spec)
| File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
| exec(code, run_globals)
| File "/home/.../tmp/work/x86_64-linux/icu-native/64.2-r0/icu/source/data/buildtool/__main__.py", line 19, in <module>
| import BUILDRULES
| File "/home/.../tmp/work/x86_64-linux/icu-native/64.2-r0/icu/source/test/testdata/BUILDRULES.py", line 4, in <module>
| from distutils.sysconfig import parse_makefile
| ModuleNotFoundError: No module named 'distutils.sysconfig'
| configure: error: Python failed to run; see above error.
| WARNING: exit code 1 from a shell command.
|
ERROR: Task (virtual:native:/home/.../poky/meta/recipes-support/icu/icu_64.2.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 1846 tasks of which 1819 didn't need to be rerun 
and 1 failed.

Summary: 1 task failed:
virtual:native:/home/.../poky/meta/recipes-support/icu/icu_64.2.bb:do_configure
Summary: There were 2 ERROR messages shown, returning a non-zero exit code.

-

The messages were so strange (to me) that I couldn't find the trigger for 
building that. Finally I bitbaked all my image packages manually and found 
that strace and valgrind trigger this icu-native. So I have two questions:

a) What goes wrong here, and how can it be avoided?
b) How do I find which recipe is responsible? There is nothing about 
strace/valgrind mentioned in the log. I just had to build all the additional 
packages of my image manually to find the triggers for this icu-native (see 
the sketch below).
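
For reference, a sketch of how this can be tracked down without rebuilding
everything by hand (the apt package applies to Debian/Ubuntu build hosts,
and my-image is a placeholder for the image recipe):

# the traceback points at the build host's python3 missing distutils
sudo apt-get install python3-distutils

# dump the task dependency graph and see what pulls in icu-native
bitbake -g my-image
grep 'icu-native' task-depends.dot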

Thank you
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto