[yocto] QA cycle report for 2.6 M1 RC1

2018-07-03 Thread Yeoh, Ee Peng
Hello All,



This is the full report for 2.6 M1 RC1:

https://wiki.yoctoproject.org/wiki/WW27_-_2018-07-02_-_Full_Test_Cycle_2.6_M1_rc1



=== Summary 



99% of planned tests were executed. The exceptions were the automated SDK test 
for buildgalculator (testrun# 9747-9753) and the automated SDK image tests for 
core-image-sato-sdk-qemumips (testrun# 9713-9714); those tests were excluded 
from the automated test suites, and why they were excluded was still under 
investigation.



There were zero high milestone defects.  The team found 9 new defects; the 
Edgerouter [1] and Beaglebone Black [3] defects have been resolved.  The team 
found that busybox and valgrind ptest cases which passed in the previous 
2.5 M4 rc1 failed in this release [8] [9].  The team also found that the 
existing automated xorg test passed even though it did not catch the matchbox 
window manager issue on qemumips [4], so manual graphics tests were added as a 
temporary solution.



=== QA-Hints



Two medium+ defects were found; 1 was resolved and 1 is under design.



=== Bugs 



New Bugs

[1] Bug 12790 - [2.6 M1] Edgerouter can not boot

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12790



[2] Bug 12804 - [QA 2.6 M1 rc1 ][Build Appliance]: proxy issue in VM ware

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12804



[3] Bug 12795 - [2.6 M1] gcc/g++ doesn't work on beaglebone black

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12795



[4] Bug 12806 - [2.6 M1 rc1 ][BSP][Test Run 9708]: Qemuppc image was not 
booting with graphic even though runtime/xorg.py test passed

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12806



[5] Bug 12832 - [ 2.6 M1 rc1 ][BSP][Test case 267]: audio and video does not 
play in media player[Mturbot x86-64 and NUC7]

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12832



[6] Bug 12802 - [Yocto-2.6_M1.RC1] Crosstap doesn't work on 2.6 M1 RC1

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12802



[7] Bug 12813 - [QA 2.6 M1 rc1 ][Toaster]:Build stopped without error, to 
terminate by ctrl+c

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12813



[8] Bug 12836 - [2.6 M1 RC1] busybox ptest failed

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12836



[9] Bug 12837 - [2.6 M1 RC1] valgrind ptest failed

https://bugzilla.yoctoproject.org/show_bug.cgi?id=12837



Regards

Ee Peng



Re: [yocto] QA cycle report for 2.6 M1 RC1

2018-07-03 Thread Richard Purdie
Thanks Ee Peng and team, this release seemed to have a number of issues
and proved challenging to QA. I appreciate the work that goes into
these (particularly given the other releases also queued); this is a
useful QA report and it has found some real issues.

On Tue, 2018-07-03 at 07:09 +, Yeoh, Ee Peng wrote:
> This is the full report for 2.6 M1 RC1: 
> https://wiki.yoctoproject.org/wiki/WW27_-_2018-07-02_-_Full_Test_Cycle_2.6_M1_rc1
>  
> === Summary 
>  
> 99% of planned tests were executed except automated SDK test for
> buildgalculator (testrun# 9747-9753) and automated SDK image tests
> for core-image-sato-sdk-qemumips (testrun# 9713-9714), where those
> tests were excluded from the automated test suites and it was still
> under investigation why they were excluded.

I think buildgalculator is excluded simply because it takes so long to
run as the mips emulation in qemu is slow. There might be information
in commit logs about why we did that.
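
(For anyone wanting to dig further, a quick way to search those commit logs;
the test-case path below is an assumption about where the SDK buildgalculator
test lives in oe-core:)

  git log --follow --oneline -- meta/lib/oeqa/sdk/cases/buildgalculator.py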
 
> There were zero high milestone defect.  Team had found 9 new defects
> where Edgerouter [1] & Beaglebone Black [3] defects were resolved. 
> Team found that pTest for busybox and valgrind had testcases passed
> in previous 2.5 M4 rc1 but failed during this release [8] [9].  Team
> had also found that existing automated xorg test was passed
> successfully while it was not able to catch matchbox window manager
> issue on qemumips [4], thus team had added manual graphic tests as
> temporary solution.  
>  
> === QA-Hints
>  
> Two medium+ defects where 1 was resolved and 1 under design. 
>  
> === Bugs 
>  
> New Bugs
> [1] Bug 12790 - [2.6 M1] Edgerouter can not boot
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12790

[Fixed in master]
 
> [2] Bug 12804 - [QA 2.6 M1 rc1 ][Build Appliance]: proxy issue in VM
> ware
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12804

[Regression where we understand which change broke things but not fixed
yet]
 
> [3] Bug 12795 - [2.6 M1] gcc/g++ doesn't work on beaglebone black
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12795

[Fixed in master]

> [4] Bug 12806 - [2.6 M1 rc1 ][BSP][Test Run 9708]: Qemuppc image was
> not booting with graphic even though runtime/xorg.py test passed
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12806

[Needs further debugging - Ross' comment about arm64 confuses me in a
qemuppc bug :/]
 
> [5] Bug 12832 - [ 2.6 M1 rc1 ][BSP][Test case 267]: audio and video
> does not play in media player[Mturbot x86-64 and NUC7]
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12832

[Not triaged yet]
 
> [6] Bug 12802 - [Yocto-2.6_M1.RC1] Crosstap doesn't work on 2.6 M1
> RC1
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12802

[Unclear if functionality is broken or just a warning message]

> [7] Bug 12813 - [QA 2.6 M1 rc1 ][Toaster]:Build stopped without
> error, to terminate by ctrl+c
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12813

[Valid issue, not release blocker and scheduled for M3]
 
> [8] Bug 12836 - [2.6 M1 RC1] busybox ptest failed
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12836

[Regression: 4 wget tests that passed now fail]

> [9] Bug 12837 - [2.6 M1 RC1] valgrind ptest failed
> https://bugzilla.yoctoproject.org/show_bug.cgi?id=12837

[Regression: 19 tests started failing, 2 started passing]


My take on this is that we probably should release M1 rc1 and move on to
concentrate on M2, but we need to ensure more of the above issues are
fixed before we can build M2. This QA report tells us very clearly
where we need to focus effort on addressing regressions.

Cheers,

Richard



Re: [yocto] [meta-raspberrypi] Waveshare touchscreen

2018-07-03 Thread Michele Tirinzoni
Thanks Trevor. I don't see the Zero in their supported device list, I will
try to contact them.
Anyway, is the config.txt alone enough to be able to use the screen with
touch functionality? No need to have any driver installed?

On Tue, 3 Jul 2018 at 04:33, Trevor Woerner  wrote:

> On Wed, Jun 27, 2018 at 7:38 AM, Michele Tirinzoni 
> wrote:
>>
>> I saw in the documentation page that the Waveshare touchscreen is
>> supported.
>> Before buying one of those I'd like to know if anyone tried it recently
>> with a rpi zero w or if there's any known issue.
>>
>
> I added the support for the waveshare screen and use it regularly with a
> Raspberry Pi 3 (B and B+) 32-bit. My _latest_ build and test from master
> for RPi3B+-32 was just last week. Check the commit message for the specific
> one I'm using:
>
>
> https://github.com/agherzan/meta-raspberrypi/commit/da32aac453da278e254d37b816602410af85d162#diff-a8b738ce971c646d8c30f0d75c6c45b9
>
> I haven't tried it with the Zero, in fact, I haven't tried a Zero at all.
>


[yocto] [PATCH] [yocto-ab-helper] Extend LAVA buildset JSON to ABHELPER

2018-07-03 Thread Aaron Chan
This patch is an extension to the default config.json, used when the ABHELPER_JSON
environment variable is set. The extension adds a buildset config for the target
MACHINE intel-corei7-64 with the meta-intel layer included.

Signed-off-by: Aaron Chan 
---
 config-x86_64-lava.json | 34 ++
 1 file changed, 34 insertions(+)
 create mode 100644 config-x86_64-lava.json

diff --git a/config-x86_64-lava.json b/config-x86_64-lava.json
new file mode 100644
index 000..81e248d
--- /dev/null
+++ b/config-x86_64-lava.json
@@ -0,0 +1,34 @@
+{
+"overrides" : {
+"nightly-x86-64-bsp" : {
+"NEEDREPOS" : ["poky", "meta-intel", "meta-openembedded"],
+   "step1" : {
+"MACHINE" : "intel-corei7-64",
+"SDKMACHINE" : "x86_64",
+"extravars" : [
+"DISTRO_FEATURES_append = \" systemd\"",
+"IMAGE_INSTALL_append = \" udev util-linux systemd\"",
+"CORE_IMAGE_EXTRA_INSTALL_append += \"python3 python3-pip 
python-pip git socat apt dpkg openssh\"",
+"IMAGE_FSTYPES = \"tar.gz\""
+],
+"ADDLAYER" : [
+"../meta-intel",
+"../meta-openembedded"
+],
+"BBTARGETS" : "core-image-sato-sdk"
+}
+}
+},
+"repo-defaults" : {
+"meta-intel" : {
+"url" : "git://git.yoctoproject.org/meta-intel",
+"branch" : "master",
+"revision" : "HEAD"
+},
+"meta-openembedded" : {
+"url" : "git://git.openembedded.org/meta-openembedded",
+"branch" : "master",
+"revision" : "HEAD"
+}
+}
+}
-- 
2.7.4



Re: [yocto] Fwd: Basehash value changed issue

2018-07-03 Thread techi eth
I don't see any TIME usage in the recipe; however, if this variable needs to
be excluded, could you please share a hint or a reference recipe that does
the same?
I tried the below in the recipe but had no success.
PR[vardepsxeclude]="DATETIME DATE TIME"


On Mon, Jul 2, 2018 at 4:51 PM, Mike Looijmans 
wrote:

> The simplest (and probably preferred) way to fix would be to get rid of
> TIME usage from that recipe. Parsing the recipe twice should yield the same
> result and your trouble would be over.
>
>
> On 02-07-18 07:27, techi eth wrote:
>
>> Hi,
>>
>> Can anybody give me hint over below issue.
>>
>> Thanks
>>
>>
>>
> Kind regards,
>
> Mike Looijmans
> System Expert
>
> TOPIC Products
> Materiaalweg 4, NL-5681 RJ Best
> Postbus 440, NL-5680 AK Best
> Telefoon: +31 (0) 499 33 69 79
> E-mail: mike.looijm...@topicproducts.com
> Website: www.topicproducts.com
>
> Please consider the environment before printing this e-mail
>
>
>
> -- Forwarded message --
>
>> From: *techi eth* mailto:techi...@gmail.com>>
>> Date: Thu, Jun 21, 2018 at 6:30 PM
>> Subject: Basehash value changed issue
>> To: yocto@yoctoproject.org 
>>
>>
>> Hi,
>>
>> I am facing issue with basehash value changed while building image on one
>> of my test board (Ref of beagle bone) on morty branch.
>>
>> Error :
>> gateway.bb.do_rootfs, the basehash value changed from
>> e685a429b8df6dcff60063f087d425ee to 3f98a102f48ea8722835ad0d65bfbc1f.
>> The metadata is not deterministic and this needs to be fixed
>>
>> When i run bitbake-diffsigs -t gateway do_rootfs
>> I found below O/P.
>> basehash changed from 8e6b9498c9704590bd016491efcbf9f9 to
>> 3f98a102f48ea8722835ad0d65bfbc1f
>> Variable TIME value changed from '112854' to '115745'
>>
>> After googling I found that TIME need's to be added in vardepsexclude
>> list.
>> I added below in conf/distro/machine.conf but error persist.
>> do_rootfs[vardepsexclude] = "TIME DATE DATETIME"
>>
>> Please suggest me where & what has to be added to come out of issue.
>>
>>
>>
>>


Re: [yocto] Fwd: Basehash value changed issue

2018-07-03 Thread Richard Purdie
On Mon, 2018-07-02 at 17:37 +0530, techi eth wrote:
> I have not got success of building my first image so it happen's to
> me always. I also tried deleting tmp,cache folder & re-build again
> but problem persist.
> Is it something to do with timestamps ?
> 
> I do see below patch & applied changes but problem persist.
> http://cgit.openembedded.org/openembedded-core/commit/?id=4af13a4855c74cea9cf6c168fd73165d7094bf93

do_rootfs[vardepsexclude] = "TIME DATE DATETIME"

The above means: stop the do_rootfs *function* from depending on those
values. It is not a task-wide exclusion; it's limited to that function
only. I suspect there is some other function in your build depending on
these values and you need:

[vardepsexclude] = "TIME DATE DATETIME"

but which function is including these is something you'll need to
figure out.

Running bitbake-dumpsig on the do_rootfs sigdata file should be able to
tell you that.
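
(As an illustration, a rough sketch of that workflow; the stamps location and
the function name in the last line are hypothetical and depend on the build:)

  # dump the signature data for the failing task and look for TIME/DATE/DATETIME
  bitbake-dumpsig $(find tmp/stamps -path '*gateway*do_rootfs*.sigdata.*' | head -n1) | grep -E -C2 'TIME|DATE'

  # then exclude the variables from whichever function references them, e.g.:
  #   some_offending_function[vardepsexclude] = "TIME DATE DATETIME"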

Cheers,

Richard


Re: [yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-03 Thread Richard Purdie
On Tue, 2018-07-03 at 09:44 +0800, Aaron Chan wrote:
> Updated patch to trigger handlestr() when unicode string is found
> during iteration json.loads(config.json). Unicode and list with data
> expansion were not handled hence adding this patch to handle
> conversion.
> Added a debug message to dump pretty json data populated to
> ourconfig[c].
> 
> e.g "REPO_STASH_DIR" read as ${BASE_HOMEDIR}/git/mirror, where it
> should be
> "REPO_STASH_DIR" as /home/pokybuild/git/mirror
> 
> Signed-off-by: Aaron Chan 
> ---
>  scripts/utils.py | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)

It took me a while to figure out why you were doing this.

We can't expand the data half way through loading the json file as
other pieces of data may later override the values. We therefore have
to defer expansion of variables until the file is completely loaded.

We therefore have to expand the variables later on, when we read them.

I pointed you at this commit:

http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/commit/?id=d6253df2bc21752bc0b53202e491140b0994ff63

which changes direct accesses into ourconfig, e.g.:

ourconfig["REPO_STASH_DIR"]

into accesses using a function:

utils.getconfig("REPO_STASH_DIR", ourconfig)

and that function handles the expansion.

You should therefore be able to fix the clobberdir issue by using the
getconfig() method instead of direct access?
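
(A minimal sketch of what that change looks like in janitor/clobberdir, assuming
utils is imported there the same way as in the other helper scripts:)

  # direct access returns the raw, unexpanded value, e.g. "${BASE_HOMEDIR}/git/mirror"
  trashdir = ourconfig["TRASH_DIR"]

  # getconfig() expands the ${...} references when the value is read,
  # e.g. "/home/pokybuild/git/mirror"
  trashdir = utils.getconfig("TRASH_DIR", ourconfig)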

Cheers,

Richard




Re: [yocto] [PATCH] [yocto-ab-helper] Extend LAVA buildset JSON to ABHELPER

2018-07-03 Thread Richard Purdie
On Tue, 2018-07-03 at 16:40 +0800, Aaron Chan wrote:
> This patch is an extension to default config.json with ABHELPER_JSON
> env set.
> This extension is to support buildset config for target MACHINE
> intel-corei7-64
> with meta-intel layer included.
> 
> Signed-off-by: Aaron Chan 
> ---
>  config-x86_64-lava.json | 34 ++
>  1 file changed, 34 insertions(+)
>  create mode 100644 config-x86_64-lava.json

Thanks, I've merged this but I renamed the file "config-intelqa-x86_64-lava.json"
so we're clear who owns it and what it's for.

Cheers,

Richard


Re: [yocto] Updated invitation: Yocto Project Technical Team Meeting @ Monthly from 8am to 8:30am on the first Tuesday (PDT) (stephen.k.jol...@intel.com)

2018-07-03 Thread Jolley, Stephen K
Attendees: Richard, Nick, Ross, Stephen, Tom, Rob, Joshua, Scott, Tim, Bruce,

YP Status:
•   YP 2.6 M1 is just out of QA.  See 
https://wiki.yoctoproject.org/wiki/WW27_-_2018-07-02_-_Full_Test_Cycle_2.6_M1_rc1
It came back with a number of issues, which we discussed.  We decided to ensure 
the bugs found will be fixed before we release M2, but we will release M1 as is.
•   YP 2.3.4 (Pyro) rc1 is built and in QA; see 
https://wiki.yoctoproject.org/wiki/2.3_QA_Status
It is at 97% complete.
•   YP 2.2.4 (Morty) rc1 is built and in QA; see 
https://wiki.yoctoproject.org/wiki/2.2_QA_Status
It is at 98% complete.
Opens:
Richard – Discussed that he is now employed by YP for the next 3 months.  The 
Advisory Board (AB) will work out the remaining issues over the 3 months, with 
the aim of YP fully employing Richard.
Richard – We now have the new Autobuilder working; some issues still need to be 
fixed to get it fully working.
Richard – Discussed that the 2.6 feature list is not fully committed.  As such it 
is hard to know what will be in YP 2.6.
Joshua – Discussed what is coming and the status of the sstate update he is 
working on.

Thanks,

Stephen


-Original Appointment-
From: theyoctoproj...@gmail.com [mailto:theyoctoproj...@gmail.com]
Sent: Tuesday, May 29, 2018 10:59 AM
To: theyoctoproj...@gmail.com; yocto@yoctoproject.org; Jolley, Stephen K
Subject: Updated invitation: Yocto Project Technical Team Meeting @ Monthly 
from 8am to 8:30am on the first Tuesday (PDT) (stephen.k.jol...@intel.com)
When: Tuesday, July 03, 2018 8:00 AM-8:30 AM America/Los_Angeles.
Where: Zoom Meeting: https://zoom.us/j/990892712


This event has been changed.

Yocto Project Technical Team Meeting
Changed: We encourage people attending the meeting to log on and announce 
themselves on the Yocto Project IRC channel during the meeting (optional):

Yocto IRC: 
http://webchat.freenode.net/?channels=#yocto

Wiki: 
https://www.yoctoproject.org/public-virtual-meetings/

Bridge is with Zoom at: 
https://zoom.us/j/990892712
When
Monthly from 8am to 8:30am on the first Tuesday Pacific Time

Where
Zoom Meeting: https://zoom.us/j/990892712 

Calendar
stephen.k.jol...@intel.com

Who
•
theyoctoproj...@gmail.com - organizer

•
yocto@yoctoproject.org

•
stephen.k.jol...@intel.com





Re: [yocto] QA cycle report for 2.6 M1 RC1

2018-07-03 Thread Jolley, Stephen K
You will need to request access via AGS. See 
https://soco.intel.com/docs/DOC-2009343 for how to do so.
You should go to https://ags.intel.com/ and follow the steps in the above 
document.  The item to search for is "DevTools - JIRA - Yocto Project - User".

Thanks,

Stephen

From: Yeoh, Ee Peng
Sent: Tuesday, July 03, 2018 12:10 AM
To: 'yocto@yoctoproject.org' ; Jolley, Stephen K 
; Eggleton, Paul ; 
'richard.pur...@linuxfoundation.org' 
Cc: Sangal, Apoorv ; Kirkiris, Nectar 

Subject: QA cycle report for 2.6 M1 RC1


Hello All,



This is the full report for 2.6 M1 RC1:

https://wiki.yoctoproject.org/wiki/WW27_-_2018-07-02_-_Full_Test_Cycle_2.6_M1_rc1





[yocto] Canceled: Yocto Project Technical Team Meeting

2018-07-03 Thread Jolley, Stephen K
BEGIN:VCALENDAR
METHOD:CANCEL
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Pacific Standard Time
BEGIN:STANDARD
DTSTART:16010101T02
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=1SU;BYMONTH=11
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T02
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=2SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN="Jolley, Stephen K":MAILTO:stephen.k.jol...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=yocto@yoct
 oproject.org:MAILTO:yocto@yoctoproject.org
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=OTC Embedd
 ed All:MAILTO:otc.embedded@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Cetola, St
 ephano":MAILTO:stephano.cet...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Stewart, D
 avid C":MAILTO:david.c.stew...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Manjukuma
 r Harthikote Matha':MAILTO:manju...@xilinx.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Erway, Tra
 cey M":MAILTO:tracey.m.er...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Mueller, R
 obert":MAILTO:robert.muel...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Jordan, Ro
 bin L":MAILTO:robin.l.jor...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Moses, Fre
 d":MAILTO:fred.mo...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Orling, Ti
 mothy T":MAILTO:timothy.t.orl...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Randy Mac
 Leod':MAILTO:randy.macl...@windriver.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Bodke, Kis
 hore K":MAILTO:kishore.k.bo...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Michael L
 im':MAILTO:youh...@us.ibm.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Ang, Chin 
 Huat":MAILTO:chin.huat@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="Chan, Aaro
 n Chun Yew":MAILTO:aaron.chun.yew.c...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Vignesh R
 ajendran (RBEI/ECF3)':MAILTO:vignesh.rajend...@in.bosch.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='ID - Davi
 d Torres':MAILTO:dtor...@fermax.com
DESCRIPTION;LANGUAGE=en-US:This is the old meeting invite.  I thought I had
  canceled it.  We now have a zoom meeting for this.\n\n
RRULE:FREQ=MONTHLY;INTERVAL=1;BYDAY=1TU
SUMMARY;LANGUAGE=en-US:Canceled: Yocto Project Technical Team Meeting
DTSTART;TZID=Pacific Standard Time:20180605T08
DTEND;TZID=Pacific Standard Time:20180605T083000
UID:04008200E00074C5B7101A82E008E066FB6D4BE8D301000
 010001183840AEA5013459814913AD8DC3311
CLASS:PUBLIC
PRIORITY:1
DTSTAMP:20180703T150833Z
TRANSP:OPAQUE
STATUS:CANCELLED
SEQUENCE:1
LOCATION;LANGUAGE=en-US:Bridge Info Enclosed
X-MICROSOFT-CDO-APPT-SEQUENCE:1
X-MICROSOFT-CDO-OWNERAPPTID:740972514
X-MICROSOFT-CDO-BUSYSTATUS:FREE
X-MICROSOFT-CDO-INTENDEDSTATUS:FREE
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:2
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:Canceled: Yocto Project Technical Team Meeting
DTSTART;TZID=Pacific Standard Time:20180605T08
DTEND;TZID=Pacific Standard Time:20180605T083000
UID:04008200E00074C5B7101A82E008E066FB6D4BE8D301000
 010001183840AEA5013459814913AD8DC3311
RECURRENCE-ID;TZID=Pacific Standard Time:20180605T00
CLASS:PUBLIC
PRIORITY:1
DTSTAMP:20180703T150833Z
TRANSP:OPAQUE
STATUS:CANCELLED
SEQUENCE:1
LOCATION:Bridge Info Enclosed
X-MICROSOFT-CDO-APPT-SEQUENCE:1
X-MICROSOFT-CDO-OWNERAPPTID:740972514
X-MICROSOFT-CDO-BUSYSTATUS:FREE
X-MICROSOFT-CDO-INTENDEDSTATUS:FREE
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:2
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
END:VCALENDAR


[yocto] struggling with initramfs

2018-07-03 Thread Tim Hammer
Can anyone point me to a step-by-step tutorial or simple how-to on creating
and using an initramfs with my kernel for ARM aarch64?


I have tried creating my own:
 - boot-image.bb file with IMAGE_FSTYPES = "cpio.gz".
 - local.conf has INITRAMFS_IMAGE_BUNDLE = "1"
 - linux.bbappend has INITRAMFS_IMAGE = "boot-image"

This all seems to be "correct" to the extent that bitbake linux tries to do
the right thing.

However, I get a failure in do_bundle_initramfs- "mv: cannot stat
'arch/arm64/boot/Image': No such file or directory".

To the best of my (limited) debugging abilities with Yocto, it seems like
the kernel image backup has already been run when it gets to this point and
the Image file in that directory has already been moved to Image.bak. If I
comment out the mv statement in kernel.bbclass causing the failure, the
process continues, but the initramfs does not seem to get populated or
perhaps installed into my kernel image as I get kernel panics that I have
been unable to get past.


I decided to take a different approach and try using the
core-image-minimal-initramfs recipe as INITRAMFS_IMAGE. By commenting out
the COMPATIBLE_HOST entry I am able to build a kernel for ARM aarch64. I
can even seem to boot into this initramfs: it counts down waiting for
removable media and seems to find my primary rootfs on sda3, but there is no
rootfs.img file there, so it says it is dropping to a shell (although I never
get a prompt...).

Thinking I could start with that recipe and work to get rid of the live
stuff and just get to a busybox prompt before trying to run my unique init
commands, I copied core-image-minimal-initramfs.bb to
my-core-image-minimal-initramfs.bb in my layer and changed INITRAMFS_IMAGE to
"my-core-image-minimal-initramfs".
However, I obviously missed something in the configuration as I get an
error in do_bundle_initramfs again:
 kernel-source/scripts/gen_initramfs_list.sh:
 Cannot open
'/.../linux-qoriq/4.14-r0/build/usr/my-core-image-minimal-initramfs-{machine}.cpio'

Any help would be greatly appreciated.
Thank you!
-- 

.Tim


Re: [yocto] struggling with initramfs

2018-07-03 Thread Prakash Ks
Try adding
KERNEL_INITRAMFS = "-initramfs"
to your platform-specific .conf file.


Thanks!
Prakash

On Tue, Jul 3, 2018 at 11:53 AM Tim Hammer  wrote:

>
> Can anyone point me to a step-by-step tutorial or simple how-to on
> creating and using an initramfs with my kernel for ARM aarch64?
>
>
> I have tried creating my own:
>  - boot-image.bb file with IMAGE_FSTYPES = "cpio.gz".
>  - local.conf has INITRAMFS_IMAGE_BUNDLE = "1"
>  - linux.bbappend has INITRAMFS_IMAGE = "boot-image"
>
> This all seems to be "correct" to the extent that bitbake linux tries to
> do the right thing.
>
> However, I get a failure in do_bundle_initramfs- "mv: cannot stat
> 'arch/arm64/boot/Image': No such file or directory".
>
> To the best of my (limited) debugging abilities with Yocto, it seems like
> the kernel image backup has already been run when it gets to this point and
> the Image file in that directory has already been moved to Image.bak. If I
> comment out the mv statement in kernel.bbclass causing the failure, the
> process continues, but the initramfs does not seem to get populated or
> perhaps installed into my kernel image as I get kernel panics that I have
> been unable to get past.
>
>
> I decided to take a different approach and try using the
> core-image-minimal-initramfs recipe as INITRAMFS_IMAGE. By commenting out
> the COMPATIBLE_HOST entry I am able to build a kernel for ARM aarch64. I
> can even seem to boot into this initramfs- it counts down waiting for
> removable media; seems to find my primary rootfs on sda3, but there is no
> rootfs.img file there so says it is dropping to a shell (although I never
> get a prompt...).
>
> Thinking I could start with that recipe and work to get rid of the live
> stuff and just get to a busybox prompt before trying to run my unique init
> commands, I copied  core-image-minimal-initramfs.bb to my-
> core-image-minimal-initramfs.bb in my layer and changed INITRAMFS_IMAGE
> to "my- core-image-minimal-initramfs".
> However, I obviously missed something in the configuration as I get an
> error in go_bundle_initramfs again:
>  kernel-source/scripts/gen_initramfs_list.sh:
>  Cannot open
> '/.../linux-qoriq/4.14-r0/build/usr/my-core-image-minimal-initramfs-{machine}.cpio'
>
> Any help would be greatly appreciated.
> Thank you!
> --
>
> .Tim


-- 
Thanks and Regards,
Prakash K S
+91 9620140303


[yocto] [meta-security][PATCH 1/2] suricata: update postinit

2018-07-03 Thread Armin Kuster
[log_check] WARNING: Intentionally failing postinstall scriptlets of 
['suricata', 'clamav'] to defer them to first boot is deprecated. Please place 
them into pkg_postinst_ontarget_${PN} ()

Signed-off-by: Armin Kuster 
---
 recipes-security/suricata/suricata_4.0.0.bb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/recipes-security/suricata/suricata_4.0.0.bb 
b/recipes-security/suricata/suricata_4.0.0.bb
index 82d134b..e163486 100644
--- a/recipes-security/suricata/suricata_4.0.0.bb
+++ b/recipes-security/suricata/suricata_4.0.0.bb
@@ -46,8 +46,8 @@ do_install_append () {
 install -m 0644 ${WORKDIR}/volatiles.03_suricata  
${D}${sysconfdir}/default/volatiles/volatiles.03_suricata
 }
 
-pkg_postinst_${PN} () {
-if [ -z "$D" ] && [ -e /etc/init.d/populate-volatile.sh ] ; then
+pkg_postinst_ontarget_${PN} () {
+if [ -e /etc/init.d/populate-volatile.sh ] ; then
 ${sysconfdir}/init.d/populate-volatile.sh update
 fi
 ${bindir}/suricata -c ${sysconfdir}/suricata.yaml -i eth0 
-- 
2.7.4



[yocto] [meta-security][PATCH 2/2] clamav: update postinit

2018-07-03 Thread Armin Kuster
[log_check] WARNING: Intentionally failing postinstall scriptlets of 
['suricata', 'clamav'] to defer them to first boot is deprecated. Please place 
them into pkg_postinst_ontarget_${PN} ()

Signed-off-by: Armin Kuster 
---
 recipes-security/clamav/clamav_0.99.3.bb | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/recipes-security/clamav/clamav_0.99.3.bb 
b/recipes-security/clamav/clamav_0.99.3.bb
index 043fa21..688250d 100644
--- a/recipes-security/clamav/clamav_0.99.3.bb
+++ b/recipes-security/clamav/clamav_0.99.3.bb
@@ -93,8 +93,8 @@ do_install_append() {
 fi
 }
 
-pkg_postinst_${PN} () {
-if [ -z "$D" ] && [ -e /etc/init.d/populate-volatile.sh ] ; then
+pkg_postinst_ontarget_${PN} () {
+if [ -e /etc/init.d/populate-volatile.sh ] ; then
 ${sysconfdir}/init.d/populate-volatile.sh update
 fi
 chown ${UID}:${GID} ${localstatedir}/lib/clamav
-- 
2.7.4



Re: [yocto] struggling with initramfs

2018-07-03 Thread Andre McCurdy
On Tue, Jul 3, 2018 at 11:02 AM, Tim Hammer  wrote:
>
> Can anyone point me to a step-by-step tutorial or simple how-to on creating
> and using an initramfs with my kernel for ARM aarch64?
>
> I have tried creating my own:
>  - boot-image.bb file with IMAGE_FSTYPES = "cpio.gz".

Note that the approach taken by kernel.bbclass is that the cpio image
is included uncompressed in the kernel and then (if you build a
compressed kernel image type) the kernel and cpio are compressed
together. What you've done is OK, but just be aware that
copy_initramfs() will uncompress the cpio image for you before it's
included in the kernel.

(If you want a compressed cpio image inside an uncompressed kernel
then kernel.bbclass would need some patching).

>  - local.conf has INITRAMFS_IMAGE_BUNDLE = "1"
>  - linux.bbappend has INITRAMFS_IMAGE = "boot-image"

Is your kernel recipe called linux? If not then the .bbappend might
not be applied when your actual kernel recipe is built. It doesn't
look like that's your issue, as the error below from
do_bundle_initramfs() suggests that you got past the check that both
INITRAMFS_IMAGE and INITRAMFS_IMAGE_BUNDLE are as expected. In
general it's probably safest to keep INITRAMFS_IMAGE_BUNDLE and
INITRAMFS_IMAGE together in a global config file (e.g. local.conf)
rather than making either of them recipe specific.
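
(For example, something along these lines in local.conf, reusing the boot-image
recipe name from your setup:)

  INITRAMFS_IMAGE_BUNDLE = "1"
  INITRAMFS_IMAGE = "boot-image"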

> This all seems to be "correct" to the extent that bitbake linux tries to do
> the right thing.
>
> However, I get a failure in do_bundle_initramfs- "mv: cannot stat
> 'arch/arm64/boot/Image': No such file or directory".

You don't mention what version of OE you are using? The
do_bundle_initramfs() code has historically been quite fragile,
especially around the time support for multiple kernel image types was
merged a year or two back. Everything should be OK now and fixes
should have been backported to releases, but if you're using an older
release there could be a fix which didn't make it.

Are you manually over-riding KERNEL_OUTPUT_DIR? How are you setting
KERNEL_IMAGETYPE, KERNEL_ALT_IMAGETYPE, KERNEL_IMAGETYPES and
KERNEL_IMAGETYPE_FOR_MAKE?

Note that setting a kernel image type of "Image.gz" didn't work until
relatively recently. I'm not sure how far that fix got backported (if
at all).

> To the best of my (limited) debugging abilities with Yocto, it seems like
> the kernel image backup has already been run when it gets to this point and
> the Image file in that directory has already been moved to Image.bak. If I
> comment out the mv statement in kernel.bbclass causing the failure, the
> process continues, but the initramfs does not seem to get populated or
> perhaps installed into my kernel image as I get kernel panics that I have
> been unable to get past.

The error is odd as the mv commands in do_bundle_initramfs() are only
run to backup kernel images (or symlinks to images) as
do_bundle_initramfs() finds them, or to rename a kernel image after a
call to kernel_do_compile() to create it.

You don't mention specifically which mv command in
do_bundle_initramfs() is failing. If it's after the call to
kernel_do_compile() then it suggests that kernel_do_compile() isn't
creating the expected kernel image (or isn't creating it in the
expected directory).

If you add a few "ls" commands to do_bundle_initramfs() it should be
fairly easy to see where files are (and whether or not they are
symlinks) before and after the call to kernel_do_compile() from the
do_bundle_initramfs() log.
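
(For example, a couple of hypothetical lines added to do_bundle_initramfs() in
kernel.bbclass, one placed before the backup mv commands and one after the
kernel_do_compile() call:)

  ls -la ${B}/${KERNEL_OUTPUT_DIR}/   # before the backup mv commands
  ls -la ${B}/${KERNEL_OUTPUT_DIR}/   # after kernel_do_compile(), before the final mv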

> I decided to take a different approach and try using the
> core-image-minimal-initramfs recipe as INITRAMFS_IMAGE. By commenting out
> the COMPATIBLE_HOST entry I am able to build a kernel for ARM aarch64. I can
> even seem to boot into this initramfs- it counts down waiting for removable
> media; seems to find my primary rootfs on sda3, but there is no rootfs.img
> file there so says it is dropping to a shell (although I never get a
> prompt...).
>
> Thinking I could start with that recipe and work to get rid of the live
> stuff and just get to a busybox prompt before trying to run my unique init
> commands, I copied  core-image-minimal-initramfs.bb to my-
> core-image-minimal-initramfs.bb in my layer and changed INITRAMFS_IMAGE to
> "my- core-image-minimal-initramfs".
> However, I obviously missed something in the configuration as I get an error
> in go_bundle_initramfs again:
>  kernel-source/scripts/gen_initramfs_list.sh:
>  Cannot open
> '/.../linux-qoriq/4.14-r0/build/usr/my-core-image-minimal-initramfs-{machine}.cpio'
>
> Any help would be greatly appreciated.
> Thank you!
> --
>
> .Tim
>


[yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-03 Thread Aaron Chan
Patch fix to the utils.py getconfig()/expandresult() functions to handle expansion.
This patch adds a condition to handle unicode entries, since dict and list
entries are already handled during expandresult().

janitor/clobberdir: [line 46]: changes
from : trashdir = ourconfig["TRASH_DIR"]
to   : trashdir = utils.getconfig("TRASH_DIR", ourconfig)

scripts/utils.py:  [line 41-47]: added
getconfig() handles the data expansion for unicode entries only.
This allows ${BUILDDIR} to be expanded; to retain the literal ${BUILDDIR} in
ourconfig[c], we should never invoke utils.getconfig("BUILDDIR", ourconfig) in
our scripts unless we intend to expand the BUILDDIR path.

Signed-off-by: Aaron Chan 
---
 janitor/clobberdir | 5 ++---
 scripts/utils.py   | 8 
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/janitor/clobberdir b/janitor/clobberdir
index 5dab5af..5e04ed7 100755
--- a/janitor/clobberdir
+++ b/janitor/clobberdir
@@ -43,11 +43,10 @@ if "TRASH_DIR" not in ourconfig:
 print("Please set TRASH_DIR in the configuration file")
 sys.exit(1)
 
-trashdir = ourconfig["TRASH_DIR"]
+trashdir = utils.getconfig("TRASH_DIR", ourconfig)
 
 for x in [clobberdir]:
 if os.path.exists(x):
 trashdest = trashdir + "/" + str(int(time.time())) + '-'  + 
str(random.randrange(100, 10, 2))
 mkdir(trashdest)
-subprocess.check_call(['mv', x, trashdest])
-
+subprocess.check_call(['mv', x, trashdest])
\ No newline at end of file
diff --git a/scripts/utils.py b/scripts/utils.py
index db1e3c2..373f8de 100644
--- a/scripts/utils.py
+++ b/scripts/utils.py
@@ -26,6 +26,7 @@ def configtrue(name, config):
 # Handle variable expansion of return values, variables are of the form ${XXX}
 # need to handle expansion in list and dicts
 __expand_re__ = re.compile(r"\${[^{}@\n\t :]+}")
+__expansion__ = re.compile(r"\${(.+)}")
 def expandresult(entry, config):
 if isinstance(entry, list):
 ret = []
@@ -37,6 +38,13 @@ def expandresult(entry, config):
 for k in entry:
 ret[expandresult(k, config)] = expandresult(entry[k], config)
 return ret
+if isinstance(entry, unicode):
+entry = str(entry)
+entryExpand = __expansion__.match(entry).group(1)
+if entryExpand:
+return entry.replace('${' + entryExpand + '}', config[entryExpand])
+else:
+return entry
 if not isinstance(entry, str):
 return entry
 class expander:
-- 
2.7.4



[yocto] How to remove a package from a build

2018-07-03 Thread Raymond Yeung
We have our own non-Yocto openssl that we want to use.  At the moment, we're 
using the "sato" image rather than "minimal", and it includes an openssl that 
is out of date.  What is the best way to exclude it from our image (and from 
the sysroots)?


We have thought about two ideas -


  1.  Use a smaller image like core-image-base or core-image-full-cmdline (but 
not -minimal, which may remove too much functionality).
  2.  Use IMAGE_INSTALL_remove += " openssl"


Would either one work?  Also, how do I follow the .bb files etc (e.g. starting 
from the one for sato) to trace down which sub-package includes openssl?


Thanks,

Raymond


Re: [yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-03 Thread Chan, Aaron Chun Yew
Hello Richard,

This morning I set up the new autobuilder from scratch with the latest patch 
you just checked in, thanks for that, and rolled out the fix below manually. 
Everything else is clean and I did what you asked me to.
However this did not resolve the data expansion when calling 
utils.getconfig("REPO_STASH_DIR", ourconfig); the same applies when you invoke 
ourconfig["REPO_STASH_DIR"]. Both yield the same errors.
We assumed the JSON data is properly handled in ourconfig[c] when we handle 
config[c], but that is not the case. I do see a growing issue with the strategy 
of using nested JSON; we won't be able to handle all of the conditions needed 
when the nested JSON becomes complex. Anyway, I'll leave it to you to decide 
what the best course of action will be.

STDERR logs on autobuilder: (poky-tiny)
--
mv: cannot move '/home/pokybuild/yocto-worker/poky-tiny/' to a subdirectory of 
itself, '${BASE_HOMEDIR}/git/mirror/1530669213-56172/poky-tiny'
Traceback (most recent call last):
  File "/home/pokybuild/yocto-autobuilder-helper/janitor/clobberdir", line 52, 
in 
subprocess.check_call(['mv', x, trashdest])
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['mv', 
'/home/pokybuild/yocto-worker/poky-tiny/', 
u'${BASE_HOMEDIR}/git/mirror/1530669213-56172']' returned non-zero exit status 1

Also, this action causes a directory named ${BASE_HOMEDIR} to be created under 
~/yocto-worker/poky-tiny/build.

This patch 
[https://lists.yoctoproject.org/pipermail/yocto/2018-July/041685.html], which 
was submitted today, resolves the "Step 1: Clobber build dir" step on the 
autobuilder.

Best wishes,
Aaron

From: richard.pur...@linuxfoundation.org [richard.pur...@linuxfoundation.org]
Sent: Tuesday, July 03, 2018 9:25 PM
To: Chan, Aaron Chun Yew; yocto@yoctoproject.org
Subject: Re: [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion


