[yocto] [PATCH] [yocto-autobuilder2] Add extended support on MailNotifier services

2018-06-11 Thread Aaron
From: Aaron Chan 

---
 config.py   | 34 ++
 services.py | 32 
 2 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/config.py b/config.py
index 2568768..9d0807f 100644
--- a/config.py
+++ b/config.py
@@ -80,3 +80,37 @@ builder_to_workers = {
 "nightly-deb-non-deb": [],
 "default": workers
 }
+
+# MailNotifier default settings (refer to schedulers.py)
+#smtpConf = {
+#"fromaddr" : "yocto-bui...@yoctoproject.org",
+#"sendToInterestedUsers": False,
+#"extraRecipients"  : ["yocto-bui...@yoctoproject.org"],
+#"subject"  : "",
+#"mode" : ["failing", "exception", "cancelled"],
+#"builders" : None,
+#"tags" : None,
+#"schedulers"   : None,
+#"branches" : None,
+#"addLogs"  : False, 
+#"addPatch" : True,
+#"buildSetSummary"  : True,
+#"smtpServer"   : "",
+#"useTls"   : False, 
+#"useSmtps" : False,
+#"smtpUser" : None, 
+#"smtpPassword" : None,
+#"lookup"   : None,
+#"extraHeaders" : None,
+#"watchedWorkers"   : None,
+#"missingWorkers"   : None,
+#"template_dir"   : "


[yocto] [PATCH] [yocto-autobuilder2] Add support to enable Manual BSP on LAVA

2018-06-13 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 config.py | 9 +
 schedulers.py | 9 +++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/config.py b/config.py
index 2568768..d21948f 100644
--- a/config.py
+++ b/config.py
@@ -80,3 +80,12 @@ builder_to_workers = {
 "nightly-deb-non-deb": [],
 "default": workers
 }
+
+# LAVA (Linaro Automated Validation Architecture) support on the Yocto Project
+# Enable automated manual (hardware) BSP test case(s)
+enable_hw_test = {
+"enable": False,
+"lava_user"   : "",
+"lava_token"  : "",
+"lava_server" : ":"
+}
diff --git a/schedulers.py b/schedulers.py
index 8f3dbc5..2c1b8e1 100644
--- a/schedulers.py
+++ b/schedulers.py
@@ -63,9 +63,14 @@ def props_for_builder(builder):
 props.append(util.BooleanParameter(
 name="deploy_artifacts",
 label="Do we want to deploy artifacts? ",
-default=Boolean
+default=False
+))
+if builder in ['nightly-x86-64', 'nightly-x86-64-lsb', 'nightly-arm', 'nightly-arm-lsb', 'nightly-arm64']:
+props.append(util.BooleanParameter(
+name="enable_hw_test",
+label="Enable BSP Test case(s) on Hardware?",
+default=config.enable_hw_test['enable']
 ))
-
 props = props + repos_for_builder(builder)
 return props
 
-- 
2.7.4

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto
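The patch above only attaches the hardware-test toggle to the listed nightly builders. A minimal stand-in sketch of that selection logic, with the Buildbot `util.BooleanParameter` objects replaced by plain tuples for illustration (the names mirror the patch; the tuple representation is an assumption, not the real yoctoabb code):

```python
# Builders that get the extra "enable_hw_test" scheduler property,
# mirroring the list in props_for_builder() above.
HW_TEST_BUILDERS = ['nightly-x86-64', 'nightly-x86-64-lsb', 'nightly-arm',
                    'nightly-arm-lsb', 'nightly-arm64']

# stand-in for config.enable_hw_test in config.py
enable_hw_test = {"enable": False}

def props_for_builder(builder):
    """Return (name, default) pairs of scheduler properties for a builder."""
    props = [("deploy_artifacts", False)]
    if builder in HW_TEST_BUILDERS:
        # only hardware-capable builders expose the BSP test toggle
        props.append(("enable_hw_test", enable_hw_test["enable"]))
    return props
```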


[yocto] [PATCH] [yocto-autobuilder2] Add nightly-x86-64-bsp job to enable BSP HW Testcase

2018-06-19 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 config.json | 22 ++
 1 file changed, 22 insertions(+)

diff --git a/config.json b/config.json
index 808fefc..e79fae3 100644
--- a/config.json
+++ b/config.json
@@ -230,6 +230,23 @@
 "MACHINE" : "genericx86-64"
 }
 },
+"nightly-x86-64-bsp" : {
+"MACHINE" : "intel-corei7-64",
+"SDKMACHINE" : "x86_64",
+"BBTARGETS" : "core-image-sato-sdk",
+"extravars" : [
+"DISTRO_FEATURES_append = \" systemd\"",
+"IMAGE_INSTALL_append = \" udev util-linux systemd\"",
+"IMAGE_FSTYPES = \"tar.gz\"",
+"CORE_IMAGE_EXTRA_INSTALL += \"python3 python3-pip python-pip git socat apt dpkg openssh\""
+],
+"NEEDREPOS" : ["poky", "meta-intel", "meta-minnow", "meta-openembedded"],
+"ADDLAYER" : [
+"${BUILDDIR}/../meta-intel",
+"${BUILDDIR}/../meta-minnow",
+"${BUILDDIR}/../meta-openembedded/meta-python"
+]
+},
 "nightly-world" : {
 "MACHINE" : "qemux86",
 "SDKMACHINE" : "x86_64",
@@ -738,6 +755,11 @@
 "url" : "git://git.yoctoproject.org/meta-gplv2",
 "branch" : "master",
 "revision" : "HEAD"
+},
+"meta-minnow" : {
+"url" : "https://github.com/alimhussin2/meta-minnow",
+"branch" : "master",
+"revision" : "HEAD"
 }
 }
 }
-- 
2.7.4



[yocto] [PATCH] [yocto-autobuilder2] Set ABHELPER_JSON on shared-repo-unpack, run-config buildStep

2018-06-25 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 builders.py | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/builders.py b/builders.py
index 4b6ee9e..0ebe562 100644
--- a/builders.py
+++ b/builders.py
@@ -142,6 +142,7 @@ def create_builder_factory():
  util.Property("buildername"),
  util.Property("is_release")],
 haltOnFailure=True,
+env={'ABHELPER_JSON' : 'config.json config-intel-lava.json'},
 name="Unpack shared repositories"))
 
 f.addStep(steps.SetPropertyFromCommand(command=util.Interpolate("cd 
%(prop:sharedrepolocation)s/poky; git rev-parse HEAD"),
@@ -160,6 +161,7 @@ def create_builder_factory():
  get_publish_dest,
  util.URLForBuild],
 name="run-config",
+env={'ABHELPER_JSON' : '../config.json config-intel-lava.json'},
 timeout=16200))  # default of 1200s/20min is too short, use 4.5hrs
 return f
 
-- 
2.7.4


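The `ABHELPER_JSON` value set above is a space-separated list of JSON config files which the helper's `loadconfig()` iterates over (`for f in files.split()`). A simplified sketch of loading and merging such a list, where later files override earlier top-level keys (the merge semantics here are an assumption; the real logic in `scripts/utils.py` also merges nested dicts):

```python
import json

def load_abhelper_json(files_value, read=open):
    """Merge a space-separated list of JSON config files, as named by
    ABHELPER_JSON; later files override top-level keys of earlier ones.
    `read` is injectable so the sketch can be tested without real files."""
    merged = {}
    for path in files_value.split():
        with read(path) as f:
            merged.update(json.load(f))
    return merged
```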

[yocto] [PATCH] [yocto-ab-helper] Add config support intel-corei7-64

2018-06-25 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 scripts/config-intel-lava.json | 129 +
 scripts/utils.py   |   2 +-
 2 files changed, 130 insertions(+), 1 deletion(-)
 create mode 100644 scripts/config-intel-lava.json

diff --git a/scripts/config-intel-lava.json b/scripts/config-intel-lava.json
new file mode 100644
index 000..76bb4f5
--- /dev/null
+++ b/scripts/config-intel-lava.json
@@ -0,0 +1,129 @@
+{
+"BASE_HOMEDIR" : "~",
+"BASE_SHAREDDIR" : "/srv/www/vhosts/autobuilder.yoctoproject.org",
+
+"defaults" : {
+"NEEDREPOS" : ["poky"],
+"DISTRO" : "poky",
+"SDKMACHINE" : "i686",
+"PACKAGE_CLASSES" : "package_rpm package_deb package_ipk",
+"PRSERV" : "PRSERV_HOST = 'localhost:0'",
+"DLDIR" : "DL_DIR = '${BASE_SHAREDDIR}/current_sources'",
+"SSTATEDIR" : ["SSTATE_DIR ?= '${BASE_SHAREDDIR}/pub/sstate'"],
+"SSTATEDIR_RELEASE" : ["SSTATE_MIRRORS += 'file://.* file://${BASE_SHAREDDIR}/pub/sstate/PATH'", "SSTATE_DIR ?= '/srv/www/vhosts/downloads.yoctoproject.org/sstate/@RELEASENUM@'"],
+"SDKEXTRAS" : ["SSTATE_MIRRORS += '\\", "file://.* http://sstate.yoctoproject.org/dev/@RELEASENUM@PATH;downloadfilename=PATH'"],
+"BUILDINFO" : false,
+"BUILDINFOVARS" : ["INHERIT += 'image-buildinfo'", "IMAGE_BUILDINFO_VARS_append = ' IMAGE_BASENAME IMAGE_NAME'"],
+"WRITECONFIG" : true,
+"SENDERRORS" : true,
+"extravars" : [
+"QEMU_USE_KVM = 'True'",
+"INHERIT += 'report-error'",
+"PREMIRRORS = ''",
+"BB_GENERATE_MIRROR_TARBALLS = '1'",
+"BB_NUMBER_THREADS = '16'",
+"PARALLEL_MAKE = '-j 16'",
+"BB_TASK_NICE_LEVEL = '5'",
+"BB_TASK_NICE_LEVEL_task-testimage = '0'",
+"BB_TASK_IONICE_LEVEL = '2.7'",
+"BB_TASK_IONICE_LEVEL_task-testimage = '2.1'",
+"INHERIT += 'testimage'",
+"TEST_QEMUBOOT_TIMEOUT = '1500'",
+"SANITY_TESTED_DISTROS = ''",
+"SDK_EXT_TYPE = 'minimal'",
+"SDK_INCLUDE_TOOLCHAIN = '1'"
+]
+},
+"overrides" : {
+"nightly-x86-64-bsp" : {
+"MACHINE" : "intel-corei7-64",
+"SDKMACHINE" : "x86_64",
+"extravars" : [
+"DISTRO_FEATURES_append = \" systemd\"",
+"IMAGE_INSTALL_append = \" udev util-linux systemd\"",
+"CORE_IMAGE_EXTRA_INSTALL_append += \"python3 python3-pip git socat apt dpkg openssh\"",
+"IMAGE_FSTYPES = \"tar.gz\""
+],
+"NEEDREPOS" : ["poky", "meta-intel", "meta-minnow", "meta-openembedded"],
+"step1" : {
+"ADDLAYER" : [
+"../meta-intel",
+"../meta-minnow"
+],
+"BBTARGETS" : "core-image-sato-sdk"
+}
+}
+},
+"repo-defaults" : {
+"poky" : {
+"url" : "git://git.yoctoproject.org/poky",
+"branch" : "master",
+"revision" : "HEAD",
+"checkout-dirname" : ".",
+"no-layer-add" : true,
+"call-init" : true
+},
+"meta-intel" : {
+"url" : "git://git.yoctoproject.org/meta-intel-contrib",
+"branch" : "anujm/next",
+"revision" : "HEAD"
+},
+"oecore" : {"url" : "git://git.openembedded.org/openembedded-core",
+"branch" : "master",
+"revision" : "HEAD",
+"checkout-dirname" : ".",
+"no-layer-add" : true,
+"call-init" : true
+},
+"bitbake" : {

[yocto] [PATCH] [yocto-ab-helper] Fix syntax load config.json clobber buildStep

2018-06-26 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 config.json| 5 ++---
 janitor/clobberdir | 3 +--
 2 files changed, 3 insertions(+), 5 deletions(-)

diff --git a/config.json b/config.json
index ecfca51..c9dc21e 100644
--- a/config.json
+++ b/config.json
@@ -8,15 +8,14 @@
 "BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro", "poky:rocko", "poky:master"],
 "BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" : "poky:master", "poky:master-next" : "poky:master"},
 
-"REPO_STASH_DIR" : "${BASE_HOMEDIR}/git/mirror",
-"TRASH_DIR" : "${BASE_HOMEDIR}/git/trash",
+"REPO_STASH_DIR" : "/git/mirror",
+"TRASH_DIR" : "/git/trash",
 
 "QAMAIL_TO" : "richard.pur...@linuxfoundation.org",
 "QAMAIL_TO1" : "yocto@yoctoproject.org",
 "QAMAIL_CC1" : "pi...@toganlabs.com, ota...@ossystems.com.br, yi.z...@windriver.com, tracy.gray...@intel.com, joshua.g.l...@intel.com, apoorv.san...@intel.com, ee.peng.y...@intel.com, aaron.chun.yew.c...@intel.com, rebecca.swee.fun.ch...@intel.com, chin.huat@intel.com",
 "WEBPUBLISH_DIR" : "${BASE_SHAREDDIR}/",
 "WEBPUBLISH_URL" : "https://autobuilder.yocto.io/",
-
 "defaults" : {
 "NEEDREPOS" : ["poky"],
 "DISTRO" : "poky",
diff --git a/janitor/clobberdir b/janitor/clobberdir
index 5dab5af..73ec87c 100755
--- a/janitor/clobberdir
+++ b/janitor/clobberdir
@@ -19,7 +19,6 @@ import utils
 
 ourconfig = utils.loadconfig()
 
-
 def mkdir(path):
 try:
 os.makedirs(path)
@@ -43,7 +42,7 @@ if "TRASH_DIR" not in ourconfig:
 print("Please set TRASH_DIR in the configuration file")
 sys.exit(1)
 
-trashdir = ourconfig["TRASH_DIR"]
+trashdir = ourconfig["BASE_HOMEDIR"] + ourconfig["TRASH_DIR"]
 
 for x in [clobberdir]:
 if os.path.exists(x):
-- 
2.7.4

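The clobberdir change above stores `TRASH_DIR` as a path relative to `BASE_HOMEDIR` and concatenates the two at use time. A quick sketch of the resulting lookup (the `BASE_HOMEDIR` value is an assumed example, taken from the /home/pokybuild paths used elsewhere in this thread):

```python
# Simplified mirror of the patched clobberdir line:
#   trashdir = ourconfig["BASE_HOMEDIR"] + ourconfig["TRASH_DIR"]
ourconfig = {
    "BASE_HOMEDIR": "/home/pokybuild",   # assumed example value
    "TRASH_DIR": "/git/trash",           # now stored without ${BASE_HOMEDIR}
}

trashdir = ourconfig["BASE_HOMEDIR"] + ourconfig["TRASH_DIR"]
```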


[yocto] [PATCH] [yocto-ab-helper] Add qemux86, qemux86-64 WIC testimage buildset-config

2018-06-26 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 config.json | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c9dc21e..3c1f989 100644
--- a/config.json
+++ b/config.json
@@ -383,6 +383,36 @@
 ],
 "step1" : {
 "MACHINE" : "qemux86",
+"SDKMACHINE" : "x86_64",
+"DISTRO" : "poky-lsb",
+"BBTARGETS" : "wic-tools core-image-lsb-sdk",
+"EXTRACMDS" : [
+"wic create directdisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/",
+"wic create directdisk-gpt -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/",
+"wic create mkefidisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/"
+],
+"extravars" : [
+"IMAGE_FSTYPES += ' wic'"
+],
+"SANITYTARGETS" : "core-image-lsb-sdk:do_testimage"
+},
+"step2" : {
+"MACHINE" : "qemux86-64",
+"SDKMACHINE" : "x86_64",
+"DISTRO" : "poky-lsb",
+"BBTARGETS" : "wic-tools core-image-lsb-sdk",
+"EXTRACMDS" : [
+"wic create directdisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/",
+"wic create directdisk-gpt -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/",
+"wic create mkefidisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/"
+],
+"extravars" : [
+"IMAGE_FSTYPES += ' wic'"
+],
+"SANITYTARGETS" : "core-image-lsb-sdk:do_testimage"
+},
+"step3" : {
+"MACHINE" : "qemux86",
 "BBTARGETS" : "wic-tools core-image-sato",
 "EXTRACMDS" : [
 "wic create directdisk -e core-image-sato -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-sato/",
@@ -390,7 +420,7 @@
 "wic create mkefidisk -e core-image-sato -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-sato/"
 ]
 },
-"step2" : {
+"step4" : {
 "MACHINE" : "genericx86",
 "BBTARGETS" : "wic-tools core-image-sato",
 "EXTRACMDS" : [
-- 
2.7.4



[yocto] [PATCH 1/2] [yocto-ab-helper] Add qemux86, qemux86-64 WIC testimage buildset-config

2018-07-02 Thread Aaron Chan
Signed-off-by: Aaron Chan 
---
 config.json | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c9dc21e..3c1f989 100644
--- a/config.json
+++ b/config.json
@@ -383,6 +383,36 @@
 ],
 "step1" : {
 "MACHINE" : "qemux86",
+"SDKMACHINE" : "x86_64",
+"DISTRO" : "poky-lsb",
+"BBTARGETS" : "wic-tools core-image-lsb-sdk",
+"EXTRACMDS" : [
+"wic create directdisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/",
+"wic create directdisk-gpt -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/",
+"wic create mkefidisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/"
+],
+"extravars" : [
+"IMAGE_FSTYPES += ' wic'"
+],
+"SANITYTARGETS" : "core-image-lsb-sdk:do_testimage"
+},
+"step2" : {
+"MACHINE" : "qemux86-64",
+"SDKMACHINE" : "x86_64",
+"DISTRO" : "poky-lsb",
+"BBTARGETS" : "wic-tools core-image-lsb-sdk",
+"EXTRACMDS" : [
+"wic create directdisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/",
+"wic create directdisk-gpt -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/",
+"wic create mkefidisk -e core-image-lsb-sdk -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/"
+],
+"extravars" : [
+"IMAGE_FSTYPES += ' wic'"
+],
+"SANITYTARGETS" : "core-image-lsb-sdk:do_testimage"
+},
+"step3" : {
+"MACHINE" : "qemux86",
 "BBTARGETS" : "wic-tools core-image-sato",
 "EXTRACMDS" : [
 "wic create directdisk -e core-image-sato -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-sato/",
@@ -390,7 +420,7 @@
 "wic create mkefidisk -e core-image-sato -o ${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-sato/"
 ]
 },
-"step2" : {
+"step4" : {
 "MACHINE" : "genericx86",
 "BBTARGETS" : "wic-tools core-image-sato",
 "EXTRACMDS" : [
-- 
2.7.4



[yocto] [PATCH 2/2] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-02 Thread Aaron Chan
A patch to fix data expansion of values read from config.json.

Signed-off-by: Aaron Chan 
---
 scripts/utils.py | 16 
 1 file changed, 16 insertions(+)

diff --git a/scripts/utils.py b/scripts/utils.py
index 7c6535c..d26cd0c 100644
--- a/scripts/utils.py
+++ b/scripts/utils.py
@@ -142,6 +142,22 @@ def loadconfig():
 else:
 ourconfig[c][x] = config[c][x]
 
+def resolvexp(pattern, config, c):
+try:
+strMatch = re.compile(pattern)
+expansion = strMatch.match(config[c]).group(1)
+reference = strMatch.match(config[c]).group(2)
+if reference:
+ourconfig[c] = config[c].replace(expansion, config[reference])
+except:
+pass
+
+def handlestr(config, ourconfig, c):
+if not c in ourconfig:
+ourconfig[c] = config[c]
+if isinstance(config[c], str):
+resolvexp(r"(\${(.+)})", config, c)
+
 ourconfig = {}
 for f in files.split():
 p = f
-- 
2.7.4

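The `${VAR}` resolution that `resolvexp()` performs can be sketched with a small stdlib-only expander. This is an illustration of the behaviour described in the commit message (e.g. `${BASE_HOMEDIR}/git/mirror` becoming a concrete path), not the exact helper from utils.py:

```python
import re

# Matches ${KEY} references, capturing the key name.
_VAR_RE = re.compile(r"\$\{([^{}]+)\}")

def expand(value, config):
    """Replace every ${KEY} in a string with config[KEY]; pass
    non-string values through unchanged, like the real helper does."""
    if not isinstance(value, str):
        return value
    return _VAR_RE.sub(lambda m: config[m.group(1)], value)
```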


[yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-02 Thread Aaron Chan
Updated patch to trigger handlestr() when a unicode string is found
while iterating over json.loads(config.json). Unicode strings and lists
containing data expansions were not handled, so this patch adds the
conversion. Also added a debug message to dump pretty-printed JSON data
populated into ourconfig[c].

e.g. "REPO_STASH_DIR" was read as ${BASE_HOMEDIR}/git/mirror, where it should be
"REPO_STASH_DIR" as /home/pokybuild/git/mirror

Signed-off-by: Aaron Chan 
---
 scripts/utils.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/scripts/utils.py b/scripts/utils.py
index d26cd0c..32caa4f 100644
--- a/scripts/utils.py
+++ b/scripts/utils.py
@@ -152,11 +152,13 @@ def loadconfig():
 except:
 pass
 
-def handlestr(config, ourconfig, c):
+def handlestr(config, ourconfig, c, debug=False):
 if not c in ourconfig:
 ourconfig[c] = config[c]
 if isinstance(config[c], str):
 resolvexp(r"(\${(.+)})", config, c)
+if debug:
+print(json.dumps(ourconfig[c], indent=4, sort_keys=True))
 
 ourconfig = {}
 for f in files.split():
@@ -168,6 +170,8 @@ def loadconfig():
 for c in config:
 if isinstance(config[c], dict):
 handledict(config, ourconfig, c)
+elif isinstance(config[c], str):
+handlestr(config, ourconfig, c)
 else:
 ourconfig[c] = config[c]
 
-- 
2.7.4



[yocto] [PATCH] [yocto-ab-helper] Extend LAVA buildset JSON to ABHELPER

2018-07-03 Thread Aaron Chan
This patch is an extension to the default config.json via the ABHELPER_JSON
env variable. It adds buildset config support for the target MACHINE
intel-corei7-64 with the meta-intel layer included.

Signed-off-by: Aaron Chan 
---
 config-x86_64-lava.json | 34 ++
 1 file changed, 34 insertions(+)
 create mode 100644 config-x86_64-lava.json

diff --git a/config-x86_64-lava.json b/config-x86_64-lava.json
new file mode 100644
index 000..81e248d
--- /dev/null
+++ b/config-x86_64-lava.json
@@ -0,0 +1,34 @@
+{
+"overrides" : {
+"nightly-x86-64-bsp" : {
+"NEEDREPOS" : ["poky", "meta-intel", "meta-openembedded"],
+   "step1" : {
+"MACHINE" : "intel-corei7-64",
+"SDKMACHINE" : "x86_64",
+"extravars" : [
+"DISTRO_FEATURES_append = \" systemd\"",
+"IMAGE_INSTALL_append = \" udev util-linux systemd\"",
+"CORE_IMAGE_EXTRA_INSTALL_append += \"python3 python3-pip python-pip git socat apt dpkg openssh\"",
+"IMAGE_FSTYPES = \"tar.gz\""
+],
+"ADDLAYER" : [
+"../meta-intel",
+"../meta-openembedded"
+],
+"BBTARGETS" : "core-image-sato-sdk"
+}
+}
+},
+"repo-defaults" : {
+"meta-intel" : {
+"url" : "git://git.yoctoproject.org/meta-intel",
+"branch" : "master",
+"revision" : "HEAD"
+},
+"meta-openembedded" : {
+"url" : "git://git.openembedded.org/meta-openembedded",
+"branch" : "master",
+"revision" : "HEAD"
+}
+}
+}
-- 
2.7.4



[yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-03 Thread Aaron Chan
Patch fixes the utils:getconfig:expandresult function to handle expansion
of unicode entries; dicts and lists were already handled in expandresult,
so this patch adds a condition for unicode strings.

janitor/clobberdir: [line 46]: changes
from : trashdir = ourconfig["TRASH_DIR"]
to   : trashdir = utils.getconfig("TRASH_DIR", ourconfig)

scripts/utils.py:  [line 41-47]: added
getconfig handles data expansion only for unicode entries, at lookup time.
This retains ${BUILDDIR} in ourconfig[c], so we should never invoke
utils.getconfig("BUILDDIR", ourconfig) in our scripts unless we intend to
change the BUILDDIR paths.

Signed-off-by: Aaron Chan 
---
 janitor/clobberdir | 5 ++---
 scripts/utils.py   | 8 
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/janitor/clobberdir b/janitor/clobberdir
index 5dab5af..5e04ed7 100755
--- a/janitor/clobberdir
+++ b/janitor/clobberdir
@@ -43,11 +43,10 @@ if "TRASH_DIR" not in ourconfig:
 print("Please set TRASH_DIR in the configuration file")
 sys.exit(1)
 
-trashdir = ourconfig["TRASH_DIR"]
+trashdir = utils.getconfig("TRASH_DIR", ourconfig)
 
 for x in [clobberdir]:
 if os.path.exists(x):
 trashdest = trashdir + "/" + str(int(time.time())) + '-'  + str(random.randrange(100, 10, 2))
 mkdir(trashdest)
-subprocess.check_call(['mv', x, trashdest])
-
+subprocess.check_call(['mv', x, trashdest])
\ No newline at end of file
diff --git a/scripts/utils.py b/scripts/utils.py
index db1e3c2..373f8de 100644
--- a/scripts/utils.py
+++ b/scripts/utils.py
@@ -26,6 +26,7 @@ def configtrue(name, config):
 # Handle variable expansion of return values, variables are of the form ${XXX}
 # need to handle expansion in list and dicts
 __expand_re__ = re.compile(r"\${[^{}@\n\t :]+}")
+__expansion__ = re.compile(r"\${(.+)}")
 def expandresult(entry, config):
 if isinstance(entry, list):
 ret = []
@@ -37,6 +38,13 @@ def expandresult(entry, config):
 for k in entry:
 ret[expandresult(k, config)] = expandresult(entry[k], config)
 return ret
+if isinstance(entry, unicode):
+entry = str(entry)
+entryExpand = __expansion__.match(entry).group(1)
+if entryExpand:
+return entry.replace('${' + entryExpand + '}', config[entryExpand])
+else:
+return entry
 if not isinstance(entry, str):
 return entry
 class expander:
-- 
2.7.4

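The behaviour the commit message describes — keep `${...}` placeholders stored in `ourconfig` and expand them only when a value is retrieved through `getconfig()` — could look like the following hypothetical sketch (the real `utils.getconfig` also handles lists and dicts):

```python
import re

_VAR_RE = re.compile(r"\$\{([^{}]+)\}")

def getconfig(name, config):
    """Fetch config[name], expanding ${KEY} references at lookup time
    only, so the value stored in config keeps its placeholders."""
    value = config[name]
    if isinstance(value, str):
        return _VAR_RE.sub(lambda m: config[m.group(1)], value)
    return value
```

This is the reason `${BUILDDIR}` survives in the stored config: nothing is rewritten until a caller explicitly asks for the expanded value.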


[yocto] [PATCH] [yocto-ab-helper] clobberdir: Fix Unicode data expansion with utils API

2018-07-04 Thread Aaron Chan
This fix moves clobberdir from python2 to python3 to resolve unicode data
handling in python2, and changes the data extraction from ourconfig["TRASH_DIR"]
to utils.getconfig("TRASH_DIR", ourconfig) in the "Clobber build dir" BuildStep.

Signed-off-by: Aaron Chan 
---
 janitor/clobberdir | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/janitor/clobberdir b/janitor/clobberdir
index 5e04ed7..b05a876 100755
--- a/janitor/clobberdir
+++ b/janitor/clobberdir
@@ -1,4 +1,4 @@
-#!/usr/bin/env python2
+#!/usr/bin/env python3
 #
 # Delete a directory using the ionice backgrounded command
 #
-- 
2.7.4



[yocto] [PATCH] [yocto-ab-helper] scripts/run-jinja-parser: Add Jinja2 parser extension in autobuilder

2018-07-06 Thread Aaron Chan
This patch is introduced as a feature in 2.6 M2 to support the extension
of the autobuilder to LAVA (Linaro Automated Validation Architecture).
run-jinja-parser loads the lava config module and generates a LAVA job
configuration in YAML format before it triggers the LAVA server to execute
a task.

Signed-off-by: Aaron Chan 
---
 lava/device/bsp-packages.jinja2 | 43 ++
 scripts/lava.py | 76 
 scripts/run-jinja-parser| 97 +
 3 files changed, 216 insertions(+)
 create mode 100644 lava/device/bsp-packages.jinja2
 create mode 100644 scripts/lava.py
 create mode 100755 scripts/run-jinja-parser

diff --git a/lava/device/bsp-packages.jinja2 b/lava/device/bsp-packages.jinja2
new file mode 100644
index 000..61fbcad
--- /dev/null
+++ b/lava/device/bsp-packages.jinja2
@@ -0,0 +1,43 @@
+device_type: {{ device_type }}
+job_name: {{ job_name }}
+timeouts: 
+  job:
+minutes: {{ timeout.job.minutes }}
+  action:
+minutes: {{ timeout.action.minutes }}
+  connection:
+minutes: {{ timeout.connection.minutes }}
+priority: {{ priority }}
+visibility: {{ visibility }}
+actions:
+- deploy:
+timeout:
+  minutes: {{ deploy.timeout }}
+to: {{ deploy.to }}
+kernel:
+  url: {{ deploy.kernel.url }}
+  type: {{ deploy.kernel.type }}
+modules:
+  url: {{ deploy.modules.url }}
+  compression: {{ deploy.modules.compression }}
+nfsrootfs:
+  url: {{ deploy.nfsrootfs.url }}
+  compression: {{ deploy.nfsrootfs.compression }}
+os: {{ deploy.os }}
+- boot:
+timeout:
+  minutes: {{ boot.timeout }}
+method: {{ boot.method }}
+commands: {{ boot.commands }}
+auto_login: { login_prompt: {{ boot.auto_login.login_prompt }}, username: {{ boot.auto_login.username }} }
+prompts:
+  - {{ boot.prompts }}
+- test:
+timeout:
+  minutes: {{ test.timeout }}
+name: {{ test.name }}
+definitions:
+- repository: {{ test.definitions.repository }}
+  from: {{ test.definitions.from }}
+  path: {{ test.definitions.path }}
+  name: {{ test.definitions.name }}
diff --git a/scripts/lava.py b/scripts/lava.py
new file mode 100644
index 000..be18529
--- /dev/null
+++ b/scripts/lava.py
@@ -0,0 +1,76 @@
+# Yocto Project support for LAVA (www.linaro.org), the Linaro Automated
+# Validation Architecture, used for automated BSP test deployments on
+# systems covering IA (x86), ARM, MIPS and PPC architectures.
+
+# Standard LAVA-Server Connection Configurations
+
+lavaConn = {
+'username' : "",
+'token': "",
+'server'   : ":"
+}
+
+# Standard LAVA Job-configuration for each Architectures.
+#
+# Minnowboard Turbot: boot method execute thru NFS network PXIE boot.
+# 
+lavaConf = {
+"minnowboard" : {
+"job_name" : "Minnowboard Turbot with Yocto core-image-sato-sdk (intel-corei7-64)",
+"priority" : "medium",
+"visibility" : "public",
+"timeout" : {
+"job": { "minutes" : 180 },
+"action" : { "minutes" : 60 },
+"connection" : { "minutes" : 60 }
+},
+"deploy" : {
+"timeout" : 60,
+"to" : "tftp",
+"kernel" : {
+"url" : "${DEPLOYDIR}/bzImage",
+"type" : "BzImage"
+},
+"modules" : {
+"url" : "${DEPLOYDIR}/modules-intel-corei7-64.tgz",
+"compression" : "gz"
+},
+"nfsrootfs" : {
+"url" : "${DEPLOYDIR}/core-image-sato-sdk-intel-corei7-64.tar.gz",
+"compression" : "gz"
+},
+"os" : "oe"
+},
+"boot" : {
+"timeout": 60,
+"method" : "grub",
+"commands"   : "nfs",
+"auto_login" : {
+"login_prompt" : "'intel-corei7-64 login:'",
+"username" : "root"
+},
+"prompts" : "'root@intel-corei7-64:~#'"
+},
+"test" : {
+"timeout" : 3600,
+"name" : "yocto-bsp-test",
+"definitions" : {
+"repository" : "git://git.yoctoproject.org/yocto-autobuilder-helper.git",
+"from" : "git"
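The flow above — flatten the nested lavaConf dictionary into the `{{ dotted.path }}` placeholders of the device template — can be sketched with the stdlib only. This is a stand-in for the real Jinja2 rendering that run-jinja-parser performs, kept minimal (no loops or filters) purely to show the substitution step:

```python
import re

# Matches Jinja2-style {{ dotted.path }} placeholders.
_PLACEHOLDER = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")

def lookup(ctx, dotted):
    """Walk a nested dict by a dotted path, e.g. 'deploy.kernel.url'."""
    for key in dotted.split("."):
        ctx = ctx[key]
    return ctx

def render(template, ctx):
    """Substitute every {{ dotted.path }} placeholder from a nested
    context dict, like lavaConf, into the template text."""
    return _PLACEHOLDER.sub(lambda m: str(lookup(ctx, m.group(1))), template)
```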

[yocto] [PATCH] [yocto-autobuilder] Add Manual BSP job-config into autobuilder

2018-07-09 Thread Aaron Chan
This patch adds/updates the configuration needed to support several
hardware platforms (ARM64, x32, x86/x86-64, MIPS64, PPC) on the
autobuilder, as a new feature in 2.6 M2, and to support automated
manual BSP test case(s) for future QA releases.

Signed-off-by: Aaron Chan 
---
 config.py | 17 ++---
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/config.py b/config.py
index 2568768..ada76ac 100644
--- a/config.py
+++ b/config.py
@@ -11,6 +11,7 @@ buildertorepos = {
 "nightly-oecore": ["oecore", "bitbake"],
 "nightly-checkuri": ["poky", "meta-qt4", "meta-qt3"],
 "nightly-check-layer": ["poky", "meta-mingw", "meta-gplv2"],
+"nightly-x86-64-bsp": ["poky", "meta-intel", "meta-security", "meta-openembedded"],
 "default": ["poky"]
 }
 
@@ -32,16 +33,18 @@ repos = {
 "meta-qt4": ["git://git.yoctoproject.org/meta-qt4", "master"],
 "meta-qt3": ["git://git.yoctoproject.org/meta-qt3", "master"],
 "meta-mingw": ["git://git.yoctoproject.org/meta-mingw", "master"],
-"meta-gplv2": ["git://git.yoctoproject.org/meta-gplv2", "master"]
+"meta-gplv2": ["git://git.yoctoproject.org/meta-gplv2", "master"],
+"meta-security": ["git://git.yoctoproject.org/meta-security", "master"],
+"meta-openembedded": ["git://git.yoctoproject.org/meta-openembedded", "master"]
 }
 
 trigger_builders_wait = [
-"nightly-arm", "nightly-arm-lsb", "nightly-arm64",
-"nightly-mips", "nightly-mips-lsb", "nightly-mips64",
-"nightly-multilib", "nightly-x32",
-"nightly-ppc", "nightly-ppc-lsb",
-"nightly-x86-64", "nightly-x86-64-lsb",
-"nightly-x86", "nightly-x86-lsb",
+"nightly-arm", "nightly-arm-lsb", "nightly-arm64", "nightly-arm-bsp", "nightly-arm64-bsp",
+"nightly-mips", "nightly-mips-lsb", "nightly-mips64", "nightly-mips-bsp", "nightly-mips64-bsp",
+"nightly-multilib", "nightly-x32", "nightly-x32-bsp",
+"nightly-ppc", "nightly-ppc-lsb", "nightly-ppc-bsp",
+"nightly-x86-64", "nightly-x86-64-lsb", "nightly-x86-64-bsp",
+"nightly-x86", "nightly-x86-lsb", "nightly-x86-bsp",
 "nightly-packagemanagers",
 "nightly-rpm-non-rpm", "nightly-deb-non-deb",
 "build-appliance", "buildtools", "eclipse-plugin-neon",
-- 
2.7.4



[yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults autobuilder URL based on FQDN

2018-07-09 Thread Aaron Chan
This patch enables auto-assignment of the buildbot URL based on the host's
FQDN. The socket module retrieves the FQDN, from which the entire URL is
constructed by default; this default can still be overwritten in
c['buildbotURL'] based on local administrator preferences.

Signed-off-by: Aaron Chan 
---
 master.cfg | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/master.cfg b/master.cfg
index fca80d2..49ddeb4 100644
--- a/master.cfg
+++ b/master.cfg
@@ -4,6 +4,7 @@
 import os
 import imp
 import pkg_resources
+import socket
 
 from buildbot.plugins import *
 from buildbot.plugins import db
@@ -55,6 +56,7 @@ imp.reload(services)
 imp.reload(www)
 
 c = BuildmasterConfig = {}
+url = os.path.join('http://', socket.getfqdn() + ':' + str(config.web_port) + '/')
 
 # Disable usage reporting
 c['buildbotNetUsageData'] = None
@@ -76,6 +78,7 @@ c['www'] = www.www
 c['workers'] = workers.workers
 
 c['title'] = "Yocto Autobuilder"
-c['titleURL'] = "https://autobuilder.yoctoproject.org/main/"
+c['titleURL'] = url
 # visible location for internal web server
-c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
+# - Default c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
+c['buildbotURL'] = url
-- 
2.7.4

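The URL construction in the patch above can be sketched as below; a fixed hostname and port stand in for `socket.getfqdn()` and `config.web_port` to keep the example deterministic (`os.path.join` works here only because the `'http://'` prefix already ends with a slash):

```python
import os

def buildbot_url(fqdn, web_port):
    """Mirror of the master.cfg construction:
    os.path.join('http://', fqdn + ':' + str(web_port) + '/')"""
    return os.path.join('http://', fqdn + ':' + str(web_port) + '/')
```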


[yocto] [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & yocto_console_view

2018-07-09 Thread Aaron Chan
This patch fixes the inconsistency in loading the custom modules
yoctoabb and yocto_console_view during Buildbot master startup.

Signed-off-by: Aaron Chan 
---
 __init__.py| 0
 yocto_console_view/__init__.py | 0
 2 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 __init__.py
 create mode 100644 yocto_console_view/__init__.py

diff --git a/__init__.py b/__init__.py
new file mode 100644
index 000..e69de29
diff --git a/yocto_console_view/__init__.py b/yocto_console_view/__init__.py
new file mode 100644
index 000..e69de29
-- 
2.7.4

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [PATCH] [yocto-ab-helper] config-intelqa-x86_64-lava.json: Update job-config to enable BSP on Minnowboard (x86_64)

2018-07-13 Thread Aaron Chan
This patch updates the nightly-x86-64-bsp job configuration to
include TEST_TARGET_IP, TEST_SERVER_IP, and TEST_SUITES, enabling
OEQA automated BSP test case(s) over a server-client connection.

Signed-off-by: Aaron Chan 
---
 config-intelqa-x86_64-lava.json | 17 ++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/config-intelqa-x86_64-lava.json b/config-intelqa-x86_64-lava.json
index 81e248d..9713b47 100644
--- a/config-intelqa-x86_64-lava.json
+++ b/config-intelqa-x86_64-lava.json
@@ -1,7 +1,10 @@
 {
 "overrides" : {
 "nightly-x86-64-bsp" : {
-"NEEDREPOS" : ["poky", "meta-intel", "meta-openembedded"],
+"MACHINE" : "intel-corei7-64",
+"DEPLOY_DIR" : "/srv/data/builds/",
+"DEPLOY_DIR_IMAGE" : "${DEPLOY_DIR}/images/${MACHINE}/",
+"NEEDREPOS" : ["poky", "meta-intel"],
"step1" : {
 "MACHINE" : "intel-corei7-64",
 "SDKMACHINE" : "x86_64",
@@ -9,11 +12,19 @@
 "DISTRO_FEATURES_append = \" systemd\"",
 "IMAGE_INSTALL_append = \" udev util-linux systemd\"",
"CORE_IMAGE_EXTRA_INSTALL_append += \"python3 python3-pip python-pip git socat apt dpkg openssh\"",
-"IMAGE_FSTYPES = \"tar.gz\""
+"IMAGE_FSTYPES = \"tar.gz\"",
+"TEST_SUITES_append = \" manualbsp\"",
+"TEST_TARGET = \"simpleremote\"",
+"TEST_SERVER_IP = \"${SERVER_IP}\"",
+"TEST_TARGET_IP = \"${TARGET_IP}\""
 ],
 "ADDLAYER" : [
 "../meta-intel",
-"../meta-openembedded"
+"../meta-openembedded/meta-oe",
+"../meta-openembedded/meta-python",
+"../meta-openembedded/meta-perl",
+"../meta-openembedded/meta-networking",
+"../meta-security"
 ],
 "BBTARGETS" : "core-image-sato-sdk"
 }
-- 
2.7.4

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [PATCH] [yocto-ab-helper] scripts/run-jinja-parser: Add parser to generate LAVA job-config

2018-07-13 Thread Aaron Chan
This patch adds a parser that generates the LAVA job config from
config-intelqa-x86_64-lava.json, the lava.py module, and a Jinja2
template, constructing the job definition in YAML from the autobuilder.
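The ${VAR} expansion this parser performs on values such as DEPLOY_DIR_IMAGE can be sketched with a plain regex substitution (the variable values below are illustrative, not taken from a real build):

```python
import re

def expand(value, variables):
    """Replace ${NAME} references in value using the variables mapping."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  value)

config = {"DEPLOY_DIR": "/srv/data/builds", "MACHINE": "intel-corei7-64"}
deploydir = expand("${DEPLOY_DIR}/images/${MACHINE}/", config)
print(deploydir)  # /srv/data/builds/images/intel-corei7-64/
```

Unknown variable names are left untouched rather than raising, which matches the tolerant behaviour a build-config expander usually wants.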

Signed-off-by: Aaron Chan 
---
 scripts/run-jinja-parser | 121 +++
 1 file changed, 121 insertions(+)
 create mode 100755 scripts/run-jinja-parser

diff --git a/scripts/run-jinja-parser b/scripts/run-jinja-parser
new file mode 100755
index 000..2e9d9f4
--- /dev/null
+++ b/scripts/run-jinja-parser
@@ -0,0 +1,121 @@
+#!/usr/bin/env python3
+#
+# Parser loads the lava.py module and converts a template in Jinja2 format into a job configuration in YAML.
+#
+# Options:
+# $1 - Path loads lava module
+# $2 - Absolute path Jinja2 Template
+# $3 - Define Job name
+# $4 - Define Build num
+# $5 - BSP (Minnowboard, Beaglebone, Edgerouter, x86)
+
+import os
+import sys
+import re
+import json
+import jinja2
+import time
+import utils
+from jinja2 import Template, Environment, FileSystemLoader
+
+def jinja_helper():
+print("USAGE: python3 run-jinja-parser <lava-module-path> <jinja2-template> <job-name> <build-num> <bsp>")
+sys.exit(0)
+
+def jinja_writer(name, data):
+yamlFile=name + ".yaml"
+yamlNewFile= "-".join([name, time.strftime("%d%m%Y-%H%M%S")]) + ".yaml"
+if os.path.isfile(yamlFile):
+os.rename(yamlFile, yamlNewFile)
+print("INFO: Found previous job config [%s] & rename to [%s]" % (yamlFile, yamlNewFile))
+with open(yamlFile, "w+") as fh:
+fh.write(data)
+fh.close()
+
+def getconfig_expand(config, ourconfig, pattern, match, buildname, buildnum):
+newconfig={}
+expansion=re.compile(pattern)
+for items in ourconfig.items():
+if items[0] in match:
+if items[0] in "DEPLOY_DIR":
+if buildnum is None:
+newconfig[items[0]] = "file://" + os.path.join(items[1], buildname)
+else:
+newconfig[items[0]] = "file://" + os.path.join(items[1], buildname, buildnum)
+else:
+newconfig[items[0]] = items[1]
+config=config.replace('${' + items[0] + '}', newconfig[items[0]])
+newconfig['DEPLOY_DIR_IMAGE']=config
+return newconfig['DEPLOY_DIR_IMAGE']
+
+try:
+loadModule=os.path.expanduser(sys.argv[1])
+jinjaTempl=sys.argv[2]
+buildNames=sys.argv[3]
+buildNum=sys.argv[4]
+device=sys.argv[5]
+except:
+jinja_helper()
+
+if not os.path.exists(loadModule):
+print("ERROR: Unable to load LAVA module at [%s]" % loadModule)
+sys.exit(1)
+
+sys.path.insert(0, loadModule)
+
+#
+# Starts here
+#
+from lava import *
+ourconfig = utils.loadconfig()
+ourconfig = ourconfig['overrides'][buildNames]
+deploydir = getconfig_expand(ourconfig['DEPLOY_DIR_IMAGE'], ourconfig, "\${(.+)}/images/\${(.+)}", ['DEPLOY_DIR', 'MACHINE'], buildNames, None)
+lavaconfig = lavaConf[device]
+
+lavaconfig["device_type"] = device
+
+images = ['kernel', 'modules', 'nfsrootfs']
+for img in images:
+if img in lavaconfig['deploy'].keys():
+try:
+#data = getconfig_expand(lavaconfig['deploy'][img]['url'], ourconfig, "\${(.+)}", ['url'], None)
+#print(data)
+url=lavaconfig['deploy'][img]['url']
+expansion=re.compile("\${(.+)}")
+expansion=expansion.match(url).group(1)
+if expansion:
+#if data:
+url=url.replace('${' + expansion + '}', deploydir)
+lavaconfig['deploy'][img]['url']=url
+else:
+pass
+except:
+print("ERROR: URL is not defined in [%s] images %s" % (img, json.dumps(lavaconfig['deploy'][img])))
+sys.exit(1)
+else: pass
+
+if not os.path.isfile(jinjaTempl):
+print("ERROR: Unable to find Jinja2 Template: [%s]" % jinjaTempl)
+sys.exit(1)
+
+#
+# JSON Dumps
+#
+debug=True
+if debug:
+print(json.dumps(lavaconfig, indent=4, sort_keys=True))
+
+jinjaPath = "/".join(jinjaTempl.split("/")[0:-1])
+jinjaFile = jinjaTempl.split("/")[-1]
+
+templateLoader = jinja2.FileSystemLoader(searchpath=jinjaPath)
+templateEnv= jinja2.Environment(loader=templateLoader)
+templateJinja  = templateEnv.get_template(jinjaFile)
+outText= templateJinja.render(lavaconfig)
+
+jinja_writer(buildNames, outText)
+
+print("INFO: Job configuration [%s] is ready to be triggered in next step" % buildNames)
-- 
2.7.4

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] do_configure for socketcand package failed

2018-07-30 Thread Aaron Cohen
That configure script is looking for libconfig.

You need DEPENDS = "libconfig"

On Mon, Jul 30, 2018 at 4:20 AM Zoran Stojsavljevic <
zoran.stojsavlje...@gmail.com> wrote:

> Seems that this does work?!
> ___
>
> SUMMARY = "Socketcand ..."
> SECTION = "socketcan"
> LICENSE = "GPLv2"
> LIC_FILES_CHKSUM =
> "file://${COMMON_LICENSE_DIR}/GPL-2.0;md5=801f80980d171dd6425610833a22dbe6"
> PR = "r0"
>
> RDEPENDS_${PN}-dev += "${PN}-staticdev"
>
> SRCREV = "df7fb4ff8a4439d7737fe2df3540e1ab7465721a"
> SRC_URI = "git://github.com/dschanoeh/socketcand.git;protocol=http"
>
> S = "${WORKDIR}/git"
>
> EXTRA_OECONF = "
> --without-config
> "
>
> inherit autotools update-alternatives
> inherit autotools-brokensep
> ___
>
> Any additional comments?
>
> Thank you,
> Zoran
>
> On Mon, Jul 30, 2018 at 8:40 AM, Zoran Stojsavljevic
>  wrote:
> > Hello,
> >
> > I am writing the recipe for the socketcand package. It looks so far very
> simple:
> >
> > PR = "r0"
> >
> > RDEPENDS_${PN}-dev += "${PN}-staticdev"
> >
> > SRCREV = "df7fb4ff8a4439d7737fe2df3540e1ab7465721a"
> >
> > SRC_URI = "git://github.com/dschanoeh/socketcand.git;protocol=http"
> >
> > S = "${WORKDIR}/git"
> >
> > inherit autotools
> > ___
> >
> > I did install the following:
> > sudo apt-get install autoconf
> > sudo apt-get install libconfig-dev
> >
> > The error is in do_config():
> >
> > | checking for config_init in -lconfig... no
> > | configure: error: in
> >
> `/home/netmodule.intranet/stojsavljevic/projects/beaglebone-black/yocto-rocko/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/socketcand/1.0-r0/build':
> > | configure: error: config test failed (--without-config to disable)
> >
> > What should I include in the recipe to make it work (both autoconf
> >  and ./configure work normally for a normal installation)???
> >
> > Thank you,
> > Zoran
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
>
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [PATCH] run-config: Reverse the order of removing layers

2018-08-08 Thread Aaron Chan
This patch fixes the "Collection Error during parsing layer conf"
that occurs when a parent layer is added before the layers it
depends on. When removing layers from bblayers, we should not
follow the same sequence used when adding them; removal should be
done in reverse order. This assumes that dependency layers are
added before the parent layers that require them. In general, a
parent layer must be removed before its dependency layers.
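The ordering rule can be sketched as follows (layer names are illustrative):

```python
# Layers are added dependency-first, so removal must happen in reverse:
# parents come off before the layers they depend on.
layers = ["meta-oe", "meta-python", "meta-parent"]  # illustrative add order

added = list(layers)              # add order: dependencies first
removed = list(reversed(layers))  # remove order: parents first
print(removed)
```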

Signed-off-by: Aaron Chan 
---
 scripts/run-config | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/run-config b/scripts/run-config
index ce40249..9fede1e 100755
--- a/scripts/run-config
+++ b/scripts/run-config
@@ -152,8 +152,8 @@ for stepnum in range(1, maxsteps + 1):
 utils.printheader("Step %s/%s: Running 'plain' command %s" % (stepnum, maxsteps, cmd))
 bitbakecmd(builddir, cmd, report, stepnum, oeenv=False)
 
-# Remove any layers we added
-for layer in layers:
+# Remove any layers we added in a reverse order
+for layer in reversed(layers):
 bitbakecmd(builddir, "bitbake-layers remove-layer %s" % layer, report, stepnum)
 
 if publish:
-- 
2.16.2.windows.1

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] oe-run-native uses host python3 instead of sysroot

2018-08-10 Thread Aaron Cohen
Trying to run bmaptool on CentOS 7 with oe-run-native, the command
"oe-run-native bmap-tools-native bmaptool copy -h" gives the following
error:

Traceback (most recent call last):
  File
"/home/joel-cohen/code/yocto-2.5/xilinx-build/tmp/work/x86_64-linux/bmap-tools-native/3.4-r0/recipe-sysroot-native/usr/bin/bmaptool",
line 6, in 
from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 3007, in

working_set.require(__requires__)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 728, in
require
needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 626, in
resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: bmap-tools==3.4

This happens because the python3 it uses is my host's old
version (3.4.8) rather than the one from the recipe sysroot.

The following patch fixes it for me, but I wonder if there's a better
solution. I do think that finding the sysroot python3 is in general
preferable to using whatever happens to be on the host...
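The sysroot-only lookup can be sketched with shutil.which against a restricted PATH (the sysroot path below is a placeholder, so the lookup finds nothing here):

```python
import os
import shutil

# Mimic oe-run-native: search only the native sysroot's bin directories.
sysroot = "/opt/recipe-sysroot-native"  # placeholder for OECORE_NATIVE_SYSROOT
search_path = os.pathsep.join(
    os.path.join(sysroot, d) for d in ("usr/bin", "bin", "usr/sbin", "sbin"))
tool = shutil.which("bmaptool", path=search_path)
print(tool)  # None, since the placeholder sysroot does not exist
```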

A side note: the bmaptool currently in oe throws a weird error when given
no arguments rather than a usage message. This has been fixed upstream.

--Aaron

--- a/scripts/oe-run-native
+++ b/scripts/oe-run-native
@@ -55,7 +55,7 @@ fi
 OLD_PATH=$PATH

 # look for a tool only in native sysroot
-PATH=$OECORE_NATIVE_SYSROOT/usr/bin:$OECORE_NATIVE_SYSROOT/bin:$OECORE_NATIVE_SYSROOT/usr/sbin:$OECORE_NATIVE_SYSROOT/sbin
+PATH=$OECORE_NATIVE_SYSROOT/usr/bin:$OECORE_NATIVE_SYSROOT/bin:$OECORE_NATIVE_SYSROOT/usr/sbin:$OECORE_NATIVE_SYSROOT/sbin:$OECORE_NATIVE_SYSROOT/usr/bin/python3-native
 tool_find=`/usr/bin/which $tool 2>/dev/null`

 if [ -n "$tool_find" ] ; then
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [yocto-autobuilder-helper][PATCH 1/6] lava-templates: Add Jinja2 LAVA job-template on BSP x86_64

2018-08-29 Thread Aaron Chan
Include a reference LAVA job template for x86_64 in Jinja2 format.
This template will be parsed and converted into a YAML configuration
before the job is triggered on the LAVA server end through the Yocto
autobuilder.

Signed-off-by: Aaron Chan 
---
 lava-templates/generate-jobconfig.jinja2 | 43 
 1 file changed, 43 insertions(+)
 create mode 100644 lava-templates/generate-jobconfig.jinja2

diff --git a/lava-templates/generate-jobconfig.jinja2 
b/lava-templates/generate-jobconfig.jinja2
new file mode 100644
index 000..61fbcad
--- /dev/null
+++ b/lava-templates/generate-jobconfig.jinja2
@@ -0,0 +1,43 @@
+device_type: {{ device_type }}
+job_name: {{ job_name }}
+timeouts: 
+  job:
+minutes: {{ timeout.job.minutes }}
+  action:
+minutes: {{ timeout.action.minutes }}
+  connection:
+minutes: {{ timeout.connection.minutes }}
+priority: {{ priority }}
+visibility: {{ visibility }}
+actions:
+- deploy:
+timeout:
+  minutes: {{ deploy.timeout }}
+to: {{ deploy.to }}
+kernel:
+  url: {{ deploy.kernel.url }}
+  type: {{ deploy.kernel.type }}
+modules:
+  url: {{ deploy.modules.url }}
+  compression: {{ deploy.modules.compression }}
+nfsrootfs:
+  url: {{ deploy.nfsrootfs.url }}
+  compression: {{ deploy.nfsrootfs.compression }}
+os: {{ deploy.os }}
+- boot:
+timeout:
+  minutes: {{ boot.timeout }}
+method: {{ boot.method }}
+commands: {{ boot.commands }}
+auto_login: { login_prompt: {{ boot.auto_login.login_prompt }}, username: {{ boot.auto_login.username }} }
+prompts:
+  - {{ boot.prompts }}
+- test:
+timeout:
+  minutes: {{ test.timeout }}
+name: {{ test.name }}
+definitions:
+- repository: {{ test.definitions.repository }}
+  from: {{ test.definitions.from }}
+  path: {{ test.definitions.path }}
+  name: {{ test.definitions.name }}
-- 
2.11.0

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [yocto-autobuilder-helper][PATCH 2/6] run-jinja-parser: Add converter Jinja2 template to YAML parser

2018-08-29 Thread Aaron Chan
run-jinja-parser converts a Jinja2 template from the lava-templates
folder into the YAML pipeline job configuration used by LAVA.
Jinja2 provides a standard template that can be modified/updated to
support other architectures.
The lava-templates/generate-jobconfig.jinja2 template is meant to be
coupled with the JSON file config-intelqa-x86_64-lava.json when
defining your architecture and hardware configuration on the LAVA end.
Signed-off-by: Aaron Chan 
---
 lava/run-jinja-parser | 96 +++
 1 file changed, 96 insertions(+)
 create mode 100755 lava/run-jinja-parser

diff --git a/lava/run-jinja-parser b/lava/run-jinja-parser
new file mode 100755
index 000..65f47af
--- /dev/null
+++ b/lava/run-jinja-parser
@@ -0,0 +1,96 @@
+#!/usr/bin/env python3
+#
+# Parser loads a JSON file (e.g. config-intelqa-x86_64-lava) and converts a Jinja2
+# template into a LAVA job configuration in YAML format.
+#
+# Parameters:
+# $1 - Define the absolute path of Jinja2 template stored
+# $2 - Inherits the Job name in autobuilder (e.g. nightly-x86)
+# $3 - Inherits the Build number in autobuilder (Defaults to None)
+# $4 - Device type definition on LAVA Dispatcher
+#
+import os
+import sys
+import re
+import json
+import jinja2
+import time
+from jinja2 import Template, Environment, FileSystemLoader
+
+sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))),"scripts"))
+import utils
+
+# Enable this section on manual run
+# os.environ["ABHELPER_JSON"]="config.json /home/ab/yocto-autobuilder-helper/config-intelqa-x86_64-lava.json"
+
+def jinja_helper():
+print("USAGE: python3 run-jinja-parser <jinja2-template> <target> <build-num> <device>")
+print("   python3 scripts/run-jinja-parser lava/device/bsp-packages.jinja2 nightly-x86-64-bsp None minnowboard")
+sys.exit(0)
+
+# Create Job definition in YAML based on autobuilder job name
+def jinja_writer(name, data):
+yamlFile=name + ".yaml"
+yamlNewFile= "-".join([name, time.strftime("%d%m%Y-%H%M%S")]) + ".yaml"
+if os.path.isfile(yamlFile):
+os.rename(yamlFile, yamlNewFile)
+print("INFO: Found previous job config [%s] & rename to [%s]" % (yamlFile, yamlNewFile))
+with open(yamlFile, "w+") as fh:
+fh.write(data)
+fh.close()
+
+# Handles data expansion based on pattern matching
+def getconfig_expand(config, ourconfig, pattern, match, buildname, buildnum):
+newconfig={}
+expansion=re.compile(pattern)
+for items in ourconfig.items():
+if items[0] in match:
+if items[0] in "DEPLOY_DIR":
+imagedeploy= "file://" + os.path.join(items[1], buildname)
+newconfig[items[0]] = imagedeploy
+if buildnum is not None and buildnum != "None":
+newconfig[items[0]] = os.path.join(imagedeploy, str(buildnum))
+else:
+newconfig[items[0]] = items[1]
+config=config.replace('${' + items[0] + '}', newconfig[items[0]])
+newconfig['DEPLOY_DIR_IMAGE']=config
+return newconfig['DEPLOY_DIR_IMAGE']
+
+try:
+jinjaTempl=sys.argv[1]
+target=sys.argv[2]
+buildnum=sys.argv[3]
+device=sys.argv[4]
+debug=True
+except:
+jinja_helper()
+
+ourconfig  = utils.loadconfig()
+jobconfig  = ourconfig['overrides'][target]
+lavaconfig = ourconfig['lava-devices'][device]
+deploydir  = getconfig_expand(jobconfig['DEPLOY_DIR_IMAGE'], jobconfig, "\${(.+)}/images/\${(.+)}/", ['DEPLOY_DIR', 'MACHINE'], target, buildnum)
+newconfig  = { 'DEPLOY_DIR_IMAGE' : deploydir }
+jinjaTempl = os.path.abspath(jinjaTempl)
+
+for img in ['kernel', 'modules', 'nfsrootfs']:
+lavaconfig['deploy'][img]['url'] = getconfig_expand(lavaconfig['deploy'][img]['url'], newconfig, "\${(.+)}.+", ['DEPLOY_DIR_IMAGE'], target, buildnum)
+lavaconfig['device_type'] = device
+
+if not os.path.isfile(jinjaTempl):
+print("ERROR: Unable to find Jinja2 Template: [%s]" % jinjaTempl)
+sys.exit(1)
+
+# JSON Dumps
+if debug:
+print(json.dumps(lavaconfig, indent=4))
+
+jinjaPath = "/".join(jinjaTempl.split("/")[0:-1])
+jinjaFile = jinjaTempl.split("/")[-1]
+
+templateLoader = jinja2.FileSystemLoader(searchpath=jinjaPath)
+templateEnv= jinja2.Environment(loader=templateLoader)
+templateJinja  = templateEnv.get_template(jinjaFile)
+outText= templateJinja.render(lavaconfig)
+
+jinja_writer(target, outText)
+print("INFO: Job configuration [%s] is ready to be triggered in next step" % target)
-- 
2.11.0

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [yocto-autobuilder-helper][PATCH 3/6] trigger-lava-jobs: Add LAVA RPC trigger pipeline script

2018-08-29 Thread Aaron Chan
trigger-lava-jobs accepts the YAML pipeline lava-job config file
generated by the run-jinja-parser script. It triggers a new job on
the LAVA end through RPC, passing the authentication token and user
credentials to launch/start the hardware automation on the LAVA
dispatcher. The script exits on error when the lava-job returns
either the incomplete or the canceling state.

trigger-lava-jobs uses the lava_scheduler.py Python module, in which
the LAVA classes and library are constructed from the XML-RPC API
defined and supported by Linaro's LAVA.
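The XML-RPC connection that lava_scheduler.py's methods rely on can be sketched as follows; the user, token, and host are placeholders, and no network call is made until a method on the proxy is actually invoked:

```python
import xmlrpc.client

# Embed the LAVA user and token in the endpoint URL for basic auth.
user, token, host = "lava-user", "secret-token", "lava.example.com:443"
endpoint = "https://%s:%s@%s/RPC2" % (user, token, host)
server = xmlrpc.client.ServerProxy(endpoint)  # constructing it performs no I/O
# e.g. server.scheduler.jobs.submit(data) would then submit a job definition
print(endpoint)
```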

Signed-off-by: Aaron Chan 
---
 lava/lava_scheduler.py |  70 
 lava/trigger-lava-jobs | 218 +
 2 files changed, 288 insertions(+)
 create mode 100644 lava/lava_scheduler.py
 create mode 100755 lava/trigger-lava-jobs

diff --git a/lava/lava_scheduler.py b/lava/lava_scheduler.py
new file mode 100644
index 000..10839c7
--- /dev/null
+++ b/lava/lava_scheduler.py
@@ -0,0 +1,70 @@
+#!/usr/bin/env python3
+'''
+__author__ = "Aaron Chan"
+__copyright__ = "Copyright 2018, Intel Corp"
+__credits__ = ["Aaron Chan"]
+__license__ = "GPL"
+__version__ = "1.0"
+__maintainer__ = "Aaron Chan"
+__email__ = "aaron.chun.yew.c...@intel.com"
+'''
+
+import xmlrpc
+
+class scheduler():
+
+def __init__(self, server, user, token, url):
+self.server = server
+self.user = user
+self.token = token
+self.url = url
+
+@classmethod
+# Description: Submit the given job data which is in LAVA
+#  job JSON or YAML format as a new job to
+#  LAVA scheduler.
+# Return: dict 
+def lava_jobs_submit(self, server, data):
+return server.scheduler.jobs.submit(data)
+
+@classmethod
+# Description: Cancel the given job referred by its id
+# Return: Boolean 
+def lava_jobs_cancel(self, server, jobid):
+state = server.scheduler.jobs.cancel(jobid)
+if type(state) is bool: return state
+
+@classmethod
+def lava_jobs_resubmit(self, server, jobid):
+return server.scheduler.jobs.resubmit(jobid)
+
+@classmethod
+# Description: Return the logs for the given job
+# Args: jobid , line  - Show only after the given line
+# Return: tuple 
+def lava_jobs_logs(self, server, jobid, line):
+return server.scheduler.jobs.logs(jobid, line)
+
+@classmethod
+# Description: Show job details
+# Return: Dict 
+def lava_jobs_show(self, server, jobid):
+return server.scheduler.jobs.show(jobid)
+
+@classmethod
+# Description: Return the job definition
+# Return: Instance 
+def lava_jobs_define(self, server, jobid):
+return server.scheduler.jobs.definition(jobid)
+
+@classmethod
+def lava_jobs_status(self, server, jobid):
+return server.scheduler.job_status(jobid)
+
+@classmethod
+def lava_jobs_output(self, server, jobid, offset):
+return server.scheduler.job_output(jobid, offset)
+
+@classmethod
+def lava_jobs_details(self, server, jobid):
+return server.scheduler.job_details(jobid)
diff --git a/lava/trigger-lava-jobs b/lava/trigger-lava-jobs
new file mode 100755
index 000..5b7a6dd
--- /dev/null
+++ b/lava/trigger-lava-jobs
@@ -0,0 +1,218 @@
+#!/usr/bin/env python3
+# 
+# =========================================================================
+# XML-RPC API reference taken from
+# -- https://validation.linaro.org/static/docs/v2/data-export.html#xml-rpc
+# Developed By : Chan, Aaron 
+# Organization : Yocto Project Open Source Technology Center (Intel)
+# Date : 27-Aug-2018 (Initial release)
+# =========================================================================
+#
+# Triggers a job execution defined by a YAML template on the LAVA server end from the autobuilder.
+# This script will monitor the lava-job status until the hardware boots up successfully
+# and returns the IPv4 addr pre-configured over network boot (PXE) on the board.
+# Once the IPv4 addr has been recovered, the script will update auto.conf with
+# TEST_TARGET_IP and TEST_SERVER_IP to establish a client-host connection and prepare to
+# execute automated hardware test case(s) on the hardware in the next step.
+#
+# Options:
+#
+# $1 - Supply lava-job template in a YAML format (e.g. .yaml)
+# $2 - Supply autobuilder buildername (e.g. nightly-x86-64-bsp, 
nightly-arm64-bsp)
+# $3 - By default set to "None", else parse in the buildnumber to create the 
NFS path
+# $4 - Supply device/board name (same as LAVA device type)
+#
+import xmlrpc.client
+import sys
+import os
+import time
+import re
+import json
+import netifaces
+import time
+from shutil import copyfile
+from lava_scheduler import *
+
+sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))),"scripts"))
+import util

[yocto] [yocto-autobuilder-helper][PATCH 4/6] resume-lava-jobs: Add lava-job cleanup script

2018-08-29 Thread Aaron Chan
This script is needed to inform the LAVA server to end the lava-job
process and shut down the board/device gracefully while removing
the lava-overlay tmpfs residing on the LAVA dispatcher end.

Once the lava-job completely ends, it returns a signal to the host
machine to continue with the remaining steps in the autobuilder
before the entire job workflow completes.

Signed-off-by: Aaron Chan 
---
 lava/resume-lava-jobs | 79 +++
 1 file changed, 79 insertions(+)
 create mode 100755 lava/resume-lava-jobs

diff --git a/lava/resume-lava-jobs b/lava/resume-lava-jobs
new file mode 100755
index 000..40f2b77
--- /dev/null
+++ b/lava/resume-lava-jobs
@@ -0,0 +1,79 @@
+#!/usr/bin/env python3
+#
+# 
=
+# Developed By : Chan, Aaron 
+# Organization : Yocto Project Open Source Technology Center (Intel)
+# Date : 27-Aug-2018 (Initial release)
+# 
=
+#
+# This script triggers a signal to the LAVA server to terminate the lava-job once the
+# test cases have been completely executed on the target hardware from the host machine.
+# Once the LAVA server receives the signal to end the job, it will clean up the tmpfs
+# overlay on the LAVA dispatcher and gracefully shut down the target hardware/board/device.
+# In the same way, the autobuilder will receive the handoff signal from the LAVA server,
+# run any remaining post scripts, and end the job in the autobuilder.
+#
+# Options:
+# $1 - Supply the NFS and/or absolute path of the board_info.json generated from hardware.
+# $2 - Supply the command to run on the target hardware; by default "shutdown" to power down
+#  the target hardware/board/device
+#
+import subprocess
+import argparse
+import re
+import os
+import sys
+
+sys.path.append(os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))),"scripts"))
+import utils
+
+parser = argparse.ArgumentParser(description='SSH Client to Target Board.')
+parser.add_argument('--json', action='store', dest='brdinfo', help='Path to the board_info JSON file')
+parser.add_argument('--cmd', action='store', dest='ipcmd', help='Define default command to the board')
+
+results = parser.parse_args()
+
+# Enable this section on manual run
+# os.environ['ABHELPER_JSON'] = "config.json /home/pokybuild/yocto-autobuilder-helper/config-intelqa-x86_64-lava.json"
+
+brdinfo = results.brdinfo
+if os.path.isfile(os.path.expanduser(brdinfo)):
+os.environ['ABHELPER_JSON'] += (" " + brdinfo)
+ourconfig=utils.loadconfig()
+else:
+print("ERROR: Failed to retrieve [%s] thru NFS. Check your NFS mount on the worker/hosts" % brdinfo)
+sys.exit(1)
+
+ipcmd = results.ipcmd
+ipaddr = ourconfig['network']['ipaddr'].strip('\n')
+user = ourconfig['user'].strip('\n')
+
+if user is None or ipaddr is None:
+print("ERROR: Failed to retrieve (e.g username/IP) from hardware. Check network interface on target device.")
+sys.exit(1)
+else:
+if re.match(ipcmd, 'shutdown'):
+ipcmd = 'touch minnow.idle.done'
+else:
+ipcmd = 'echo Completed.'
+shellCommand = ["ssh", "-oStrictHostKeyChecking=no", "%s@%s" % (user, ipaddr), "uname -a;", ipcmd]
+
+ssh = subprocess.Popen(shellCommand, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ssherr = ssh.stderr.read().decode('utf-8')
+
+if re.search('man-in-the-middle\s*attack', ssherr):
+match=True
+elif re.search('Connection\s*timed\s*out', ssherr):
+print("ERROR: Connection to board timed out because the board is unresponsive, check your hardware.")
+match=False
+else:
+match=False
+
+if match:
+ssh_keygen = subprocess.Popen(
+["ssh-keygen", "-f", "\"" + os.path.expanduser("~/.ssh/known_hosts") + "\"", "-R", ipaddr],
+stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+sshout = subprocess.Popen(shellCommand, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+print("INFO: %s" % sshout.stdout.read())
+else:
+print("ERROR: %s" % ssherr)
-- 
2.11.0

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [yocto-autobuilder-helper][PATCH 5/6] publish-artefacts: Add deployment BSP support on x86_64

2018-08-29 Thread Aaron Chan
Add support to publish images into the designated path. BSP
packages from the previous build are cleaned up before new BSP
packages are copied over. This ensures the previous image is not
retained and does not cause conflicts before the image is loaded
onto the x86_64 (MTURBOT64) hardware.
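The clean-before-copy sequence in the shell fragment below corresponds to this sketch (temporary directories stand in for $TMPDIR and $DEST):

```python
import glob
import os
import shutil
import tempfile

# Clean the destination before copying new BSP artefacts, mirroring the
# rm -rf / mkdir -p / cp sequence (paths are temporary stand-ins).
src = tempfile.mkdtemp()
dest = os.path.join(tempfile.mkdtemp(), "images", "intel-corei7-64")

open(os.path.join(src, "bzImage-new"), "w").close()
os.makedirs(dest)
open(os.path.join(dest, "bzImage-old"), "w").close()  # stale artefact

shutil.rmtree(dest)                    # rm -rf $DEST/...
os.makedirs(dest)                      # mkdir -p $DEST/...
for f in glob.glob(os.path.join(src, "bzImage*")):
    shutil.copy2(f, dest)              # cp ... $DEST/...
print(sorted(os.listdir(dest)))        # ['bzImage-new']
```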

Signed-off-by: Aaron Chan 
---
 scripts/publish-artefacts | 8 
 1 file changed, 8 insertions(+)

diff --git a/scripts/publish-artefacts b/scripts/publish-artefacts
index 83a4094..3418de2 100755
--- a/scripts/publish-artefacts
+++ b/scripts/publish-artefacts
@@ -137,6 +137,14 @@ case "$target" in
 md5sums $TMPDIR/deploy/images/genericx86-64
 cp -R --no-dereference --preserve=links $TMPDIR/deploy/images/genericx86-64/*genericx86-64* $DEST/machines/genericx86-64-lsb
 ;;
+"nightly-x86-64-bsp")
+rm -rf $DEST/$target/images/intel-corei7-64/*
+mkdir -p $DEST/$target/images/intel-corei7-64
+md5sums $TMPDIR/deploy/images/intel-corei7-64
+cp -R --no-dereference --preserve=links $TMPDIR/deploy/images/intel-corei7-64/bzImage* $DEST/$target/images/intel-corei7-64
+cp -R --no-dereference --preserve=links $TMPDIR/deploy/images/intel-corei7-64/*core-image-sato-sdk-intel-corei7-64*tar* $DEST/$target/images/intel-corei7-64
+cp -R --no-dereference --preserve=links $TMPDIR/deploy/images/intel-corei7-64/*modules-* $DEST/$target/images/intel-corei7-64
+;;
 "nightly-x86")
 mkdir -p $DEST/machines/qemu/qemux86
 md5sums $TMPDIR/deploy/images/qemux86
-- 
2.11.0

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] [yocto-autobuilder-helper][PATCH 6/6] config-intelqa-x86_64-lava.json: Add extension to config.json to support BSP

2018-08-29 Thread Aaron Chan
config-intelqa on x86-64 is an extension to config.json that contains
the recipes and meta layers used to build core-image-sato-sdk on the
various architectures supported by the Yocto Project.

This is an initial release of the model, specifying the recipes used to
build embedded Linux images, starting with the x86-64 MTURBOT64 (Intel IA).
With this reference, the community can inherit the structure and model
from config-intelqa-<arch>-lava.json to support and build their own
hardware on other architectures (e.g. arm64, mips64, ppc, x86) on the
same common CI infrastructure (the Yocto autobuilder).

The config-intelqa-<arch>-lava.json file consolidates the data the
autobuilder and LAVA (Linaro) need to execute their respective job
configurations. Architecture owners should work with the respective
maintainers and review their automated hardware tests to ensure the
common structure is agreed on by the current and/or new community.
Signed-off-by: Aaron Chan 
---
 config-intelqa-x86_64-lava.json | 163 
 1 file changed, 149 insertions(+), 14 deletions(-)

diff --git a/config-intelqa-x86_64-lava.json b/config-intelqa-x86_64-lava.json
index 81e248d..450890c 100644
--- a/config-intelqa-x86_64-lava.json
+++ b/config-intelqa-x86_64-lava.json
@@ -1,23 +1,139 @@
 {
+"lava-defaults" : {
+"username" : "< LAVA user >",
+"token": "< LAVA token >",
+"server"   : "< LAVA server >:< LAVA port >",
+"interface": "< Board network interface >"
+},
+"lava-devices" : {
+"minnowboard" : {
+"job_name" : "Minnowboard Turbot with Yocto core-image-sato-sdk (intel-corei7-64)",
+"priority" : "medium",
+"visibility" : "public",
+"timeout" : {
+"job": { "minutes" : 180 },
+"action" : { "minutes" : 60 },
+"connection" : { "minutes" : 60 }
+},
+"deploy" : {
+  "timeout" : 60,
+  "to" : "tftp",
+  "kernel" : {
+"url"  : "${DEPLOY_DIR_IMAGE}bzImage",
+"type" : "BzImage"
+  },
+  "modules" : {
+"url" : "${DEPLOY_DIR_IMAGE}modules-intel-corei7-64.tgz",
+"compression" : "gz"
+  },
+  "nfsrootfs" : {
+"url" : "${DEPLOY_DIR_IMAGE}core-image-sato-sdk-intel-corei7-64.tar.gz",
+"compression" : "gz"
+  },
+  "os": "oe"
+},
+"boot" : {
+"timeout" : 60,
+"method"  : "grub",
+"commands" : "nfs",
+"auto_login" : {
+"login_prompt" : "'intel-corei7-64 login:'",
+"username" : "root"
+},
+"prompts" : "'root@intel-corei7-64:~#'"
+},
+"test" : {
+"timeout" : 3600,
+"name" : "yocto-bsp-test",
+"definitions" : {
+"repository" : "https://git.yoctoproject.org/git/yocto-autobuilder-helper",
+"from" : "git",
+"path" : "lava-templates/auto-bsp-test.yaml",
+"name" : "yocto-bsp-test"
+}
+}
+},
+"beaglebone-black" : {
+"job_name" : "Beaglebone with Yocto core-image-sato-sdk (ARM Cortex)",
+"priority" : "medium",
+"visibility" : "public",
+"timeout" : {
+"job": { "minutes" : 180 },
+"action" : { "minutes" : 60 },
+"connection" : { "minutes" : 60 }
+}
+},
+"beaglebone-mx" : {},
+"x86" : {},
+"qemu" : {},
+"dragonboard-410c" : {},
+"mustang" : {}
+},
 "overrides" : {
 "nightly-x86-64-bsp" : {
-"NEEDREPOS" : ["poky", "meta-intel", "meta-op

[yocto] Is KCONFIG_MODE test backwards in kernel-yocto.bbclass?

2018-09-25 Thread Aaron Cohen
http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/classes/kernel-yocto.bbclass#n297

With no KCONFIG_MODE specified, we check if the user has copied a defconfig
to workdir, and then use allnoconfig if so. Shouldn't we be using
alldefconfig in this case?

The result currently is that if you use a defconfig file that you add to
SRC_URI, much of it won't work, because it's been set to no by allnoconfig,
and it's not super-obvious, to me at least, why.
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Mercurial fetcher doesn't work for me (patch included)

2018-11-26 Thread Aaron Cohen
I'm trying to write a recipe for a local repo that is kept in mercurial,
and encountering the following problem:


> Traceback (most recent call last):
>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
> line 808, in DataSmart.getVarFlag(var='PV', flag='_content', expand=True,
> noweakdefault=False, parsing=False, retparser=False):
>  if expand or retparser:
> >parser = self.expandWithRefs(value, cachename)
>  if expand:
>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
> line 416, in DataSmart.expandWithRefs(s='0.0.4+git${SRCPV}', varname='PV'):
>  try:
> >s = __expand_var_regexp__.sub(varparse.var_sub, s)
>  try:
>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
> line 108, in VariableParse.var_sub(match=<_sre.SRE_Match object; span=(9,
> 17), match='${SRCPV}'>):
>  raise Exception("variable %s references itself!"
> % self.varname)
> >var = self.d.getVarFlag(key, "_content")
>  self.references.add(key)
>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
> line 808, in DataSmart.getVarFlag(var='SRCPV', flag='_content',
> expand=True, noweakdefault=False, parsing=False, retparser=False):
>  if expand or retparser:
> >parser = self.expandWithRefs(value, cachename)
>  if expand:
>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
> line 430, in DataSmart.expandWithRefs(s='${@bb.fetch2.get_srcrev(d)}',
> varname='SRCPV'):
>  except Exception as exc:
> >raise ExpansionError(varname, s, exc) from exc
>
> bb.data_smart.ExpansionError: Failure expanding variable SRCPV, expression
> was ${@bb.fetch2.get_srcrev(d)} which triggered exception AttributeError:
> 'FetchData' object has no attribute 'moddir'


I've fixed this with the following patch, which moves some code in the Hg
fetcher to later in the urldata_init method, after some necessary variables
have been initialized. I'm not sure if I'm alone in seeing this problem or
not?

Thanks for any help,
Aaron
diff --git a/bitbake/lib/bb/fetch2/hg.py b/bitbake/lib/bb/fetch2/hg.py
index 936d043..9790e1b 100644
--- a/bitbake/lib/bb/fetch2/hg.py
+++ b/bitbake/lib/bb/fetch2/hg.py
@@ -66,13 +66,6 @@ class Hg(FetchMethod):
 else:
 ud.proto = "hg"
 
-ud.setup_revisions(d)
-
-if 'rev' in ud.parm:
-ud.revision = ud.parm['rev']
-elif not ud.revision:
-ud.revision = self.latest_revision(ud, d)
-
 # Create paths to mercurial checkouts
 hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
 ud.host, ud.path.replace('/', '.'))
@@ -86,6 +79,14 @@ class Hg(FetchMethod):
 ud.localfile = ud.moddir
 ud.basecmd = d.getVar("FETCHCMD_hg") or "/usr/bin/env hg"
 
+ud.setup_revisions(d)
+
+if 'rev' in ud.parm:
+ud.revision = ud.parm['rev']
+elif not ud.revision:
+ud.revision = self.latest_revision(ud, d)
+
+
 ud.write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS")
 
 def need_update(self, ud, d):


Re: [yocto] Mercurial fetcher doesn't work for me (patch included)

2018-11-26 Thread Aaron Cohen
Also, what is it that is supposed to create the
$(BUILDDIR)/tmp/hosttools/hg symlink?

For some reason, this link is not created for me automatically. It works
fine if I create it manually.
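[Editorial note: the hosttools symlinks are generated from the HOSTTOOLS and HOSTTOOLS_NONFATAL variables in OE-Core, and hg does not appear to be in the default list in this era, which would explain the missing link. A hedged local.conf sketch, assuming hg is installed on the build host:

```
# local.conf -- assuming /usr/bin/hg exists on the build host
HOSTTOOLS += "hg"
```
]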

--Aaron


On Mon, Nov 26, 2018 at 3:34 PM Aaron Cohen  wrote:

> I'm trying to write a recipe for a local repo that is kept in mercurial,
> and encountering the following problem:
>
>
>> Traceback (most recent call last):
>>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
>> line 808, in DataSmart.getVarFlag(var='PV', flag='_content', expand=True,
>> noweakdefault=False, parsing=False, retparser=False):
>>  if expand or retparser:
>> >parser = self.expandWithRefs(value, cachename)
>>  if expand:
>>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
>> line 416, in DataSmart.expandWithRefs(s='0.0.4+git${SRCPV}', varname='PV'):
>>  try:
>> >s = __expand_var_regexp__.sub(varparse.var_sub, s)
>>  try:
>>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
>> line 108, in VariableParse.var_sub(match=<_sre.SRE_Match object; span=(9,
>> 17), match='${SRCPV}'>):
>>  raise Exception("variable %s references itself!"
>> % self.varname)
>> >var = self.d.getVarFlag(key, "_content")
>>  self.references.add(key)
>>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
>> line 808, in DataSmart.getVarFlag(var='SRCPV', flag='_content',
>> expand=True, noweakdefault=False, parsing=False, retparser=False):
>>  if expand or retparser:
>> >parser = self.expandWithRefs(value, cachename)
>>  if expand:
>>   File "/home/joel-cohen/code/waveos2/poky/bitbake/lib/bb/data_smart.py",
>> line 430, in DataSmart.expandWithRefs(s='${@bb.fetch2.get_srcrev(d)}',
>> varname='SRCPV'):
>>  except Exception as exc:
>> >raise ExpansionError(varname, s, exc) from exc
>>
>> bb.data_smart.ExpansionError: Failure expanding variable SRCPV,
>> expression was ${@bb.fetch2.get_srcrev(d)} which triggered exception
>> AttributeError: 'FetchData' object has no attribute 'moddir'
>
>
> I've fixed this with the following patch, which moves some code in the Hg
> fetcher to later in the urldata_init method, after some necessary variables
> have been initialized. I'm not sure if I'm alone in seeing this problem or
> not?
>
> Thanks for any help,
> Aaron
>
>
>


[yocto] How to have Poky automatically login at boot?

2017-07-28 Thread Aaron Schwartz
Hello,

I'm trying to have the busybox getty in Poky login as root automatically
but I can't seem to get it working.

So far I have a sysvinit-inittab_2.%.bbappend:

> PR := "${PR}.1"
> SYSVINIT_ENABLED_GETTYS="1 2 3 4"
> do_install() {
> install -d ${D}${sysconfdir}
> install -m 0644 ${WORKDIR}/inittab ${D}${sysconfdir}/inittab
> install -d ${D}${base_bindir}
> install -m 0755 ${WORKDIR}/start_getty ${D}${base_bindir}/start_getty
> set -x
> tmp="${SERIAL_CONSOLES}"
> for i in $tmp
> do
> j=`echo ${i} | sed s/\;/\ /g`
> l=`echo ${i} | sed -e 's/tty//' -e 's/^.*;//' -e 's/;.*//'`
> label=`echo $l | sed 's/.*\(\)/\1/'`
> echo "$label:12345:respawn:${base_bindir}/start_getty ${j} vt102" >>
> ${D}${sysconfdir}/inittab
> done
> if [ "${USE_VT}" = "1" ]; then
> cat <<EOF >>${D}${sysconfdir}/inittab
> # ${base_sbindir}/getty invocations for the runlevels.
> #
> # The "id" field MUST be the same as the last
> # characters of the device (after "tty").
> #
> # Format:
> #  <id>:<runlevels>:<action>:<process>
> #
> EOF
> for n in ${SYSVINIT_ENABLED_GETTYS}
> do
> echo "$n:12345:respawn:${base_sbindir}/mingetty --autologin
> root 38400 tty$n" >> ${D}${sysconfdir}/inittab
> done
> echo "" >> ${D}${sysconfdir}/inittab
> fi
> }


I have tried a number of flags with the default getty, including using a
shell as login.  I tried adding mingetty to my image to see if that would
work with its "--autologin" flag, but I still got the busybox getty
yelling at me about unrecognized flags.

Is the best approach to adjust the ALTERNATIVE_PRIORITY for either the
busybox getty or mingetty to have update-alternatives select mingetty, or
is there an easier modification to my sysvinit-inittab bbappend above that
will enable autologin?
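[Editorial note: one more option, sketched here under the assumption that you stay with the busybox getty: its -n flag skips the login-name prompt and -l substitutes an alternative login program, so a tiny wrapper script can emulate mingetty's --autologin. The /sbin/autologin path is made up for the example:

```
#!/bin/sh
# /sbin/autologin -- hypothetical wrapper for busybox getty's -l option;
# busybox login's -f flag logs the named user in without authentication
exec /bin/login -f root
```

The corresponding inittab entry would then look like
"1:12345:respawn:${base_sbindir}/getty -n -l /sbin/autologin 38400 tty1".]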

Thanks for the help!
Aaron


Re: [yocto] How to have Poky automatically login at boot?

2017-07-31 Thread Aaron Schwartz
Good tip Andre!

Just adding 'auto-serial-console' to IMAGE_INSTALL results in openvt
complaining that VT1 is already in use and then the user is presented with
the regular login.  It looks like I'll be able to make this work once I
figure out how to disable whatever's claiming that vty, though.

Thanks for the help

On Fri, Jul 28, 2017 at 8:32 PM, Andre McCurdy  wrote:

> On Fri, Jul 28, 2017 at 11:29 AM, Aaron Schwartz
>  wrote:
> > Hello,
> >
> > I'm trying to have the busybox getty in Poky login as root automatically
> but
> > I can't seem to get it working.
>
> This may be a possible solution:
>
>   https://git.linaro.org/openembedded/meta-linaro.git/
> tree/meta-linaro/recipes-linaro/auto-serial-console/
> auto-serial-console_0.1.bb
>
> > So far I have a sysvinit-inittab_2.%.bbappend:
> >>
> >> PR := "${PR}.1"
> >> SYSVINIT_ENABLED_GETTYS="1 2 3 4"
> >> do_install() {
> >> install -d ${D}${sysconfdir}
> >> install -m 0644 ${WORKDIR}/inittab ${D}${sysconfdir}/inittab
> >> install -d ${D}${base_bindir}
> >> install -m 0755 ${WORKDIR}/start_getty
> ${D}${base_bindir}/start_getty
> >> set -x
> >> tmp="${SERIAL_CONSOLES}"
> >> for i in $tmp
> >> do
> >> j=`echo ${i} | sed s/\;/\ /g`
> >> l=`echo ${i} | sed -e 's/tty//' -e 's/^.*;//' -e 's/;.*//'`
> >> label=`echo $l | sed 's/.*\(\)/\1/'`
> >> echo "$label:12345:respawn:${base_bindir}/start_getty ${j} vt102" >>
> >> ${D}${sysconfdir}/inittab
> >> done
> >> if [ "${USE_VT}" = "1" ]; then
> >> cat <>${D}${sysconfdir}/inittab
> >> # ${base_sbindir}/getty invocations for the runlevels.
> >> #
> >> # The "id" field MUST be the same as the last
> >> # characters of the device (after "tty").
> >> #
> >> # Format:
> >> #  :::
> >> #
> >> EOF
> >> for n in ${SYSVINIT_ENABLED_GETTYS}
> >> do
> >> echo "$n:12345:respawn:${base_sbindir}/mingetty --autologin
> >> root 38400 tty$n" >> ${D}${sysconfdir}/inittab
> >> done
> >> echo "" >> ${D}${sysconfdir}/inittab
> >> fi
> >> }
> >
> >
> > I have tried a number of flags with the default getty, including using a
> > shell as login.  I tried adding mingetty to my image to see if that would
> > work with it's "--autologin" flag, but I still got the busybox getty
> yelling
> > at me about unrecognized flags.
> >
> > Is the best approach to adjust the ALTERNATIVE_PRIORITY for either the
> > busybox getty or mingetty to have update-alternatives select mingetty,
> or is
> > there an easier modification to my sysvinit-innittab bbappend above that
> > will enable autologin?
> >
> > Thanks for the help!
> > Aaron
> >
> > --
> > ___
> > yocto mailing list
> > yocto@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/yocto
> >
>



-- 

Aaron Schwartz
Production
Logic Supply
Direct: +1 802 861 2300 Ext. 530
Main: +1 802 861 2300
www.logicsupply.com

Google+ <https://plus.google.com/+Logicsupply/posts> | Twitter
<https://twitter.com/logicsupply> | LinkedIn
<https://www.linkedin.com/company/logic-supply> | YouTube
<https://www.youtube.com/user/logicsupply>


[yocto] Predictable network interface names without systemd?

2017-08-18 Thread Aaron Schwartz
Does anybody here know of a way to get predictable network interface names
<https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/>
without systemd?

I realize I could write a script to do it myself, but I was hoping there's
an easier solution that's ready to go (or close to it).
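[Editorial note: if the image runs udev or eudev without systemd, the pre-systemd mechanism still works — a rules file that pins interface names to MAC addresses. A hedged sketch; the MAC address and name below are placeholders:

```
# /etc/udev/rules.d/70-persistent-net.rules (hypothetical example)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="lan0"
```
]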

Thanks in advance!
Aaron


Re: [yocto] Predictable network interface names without systemd?

2017-08-21 Thread Aaron Schwartz
Wow, thanks Chris and Andre!

I'm just getting to play around with this, but Andre's trick seems to have
worked exactly like I was hoping for.  I ended up just adding a bbappend
that removes the file touched in the line of the eudev recipe Andre
referenced.
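[Editorial note: for readers landing here, the file in question is the rules file the eudev recipe touches to force classic naming, so the bbappend can be as small as this sketch — the rule filename is taken from the recipe of that era and may differ in yours:

```
# eudev_%.bbappend -- hypothetical sketch
do_install_append() {
    rm -f ${D}${sysconfdir}/udev/rules.d/80-net-name-slot.rules
}
```
]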

Thanks again,
Aaron

On Fri, Aug 18, 2017 at 4:59 PM, Andre McCurdy  wrote:

> On Fri, Aug 18, 2017 at 9:37 AM, Aaron Schwartz
>  wrote:
> > Does anybody here know of a way to get predictable network interface
> names
> > without systemd?
>
> If you are using eudev, then try commenting out the line below "Use
> classic network interface naming scheme" in the eudev recipe.
>
> > I realize I could write a script to do it myself, but I was hoping
> there's
> > an easier solution that's ready to go (or close to it).
> >
> > Thanks in advance!
> > Aaron
> >
> > --
> > ___
> > yocto mailing list
> > yocto@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/yocto
> >
>


[yocto] Building Custom Python 3 Packages

2017-08-31 Thread Seilis, Aaron
Hi,

I'm building a couple of custom Python 3 scripts+modules as part of the -native 
tools required to build a number of my embedded components. I'm having 
difficulties getting my recipe to build when I've run `devtool modify` but the 
build works fine in a regular build. The Python tools are stored in various Git 
repositories and use the typical setuptools infrastructure.

My recipe is roughly as follows:

> inherit setuptools3
>
> SUMMARY = "My tool"
> DESCRIPTION = "Description"
>
> SECTION = "devel/scripts"
> LICENSE = "CLOSED"
>
> SRC_URI = "my-URL"
> SRCREV = "git hash"
>
> S = "${WORKDIR}/git/"
> BBCLASSEXTEND = "native"

This builds fine when running Bitbake normally; the tool is built and can be 
used to build other components. However, when I run `devtool modify`, the ${S} 
variable gets set to my alternate source location but the ${B} variable remains 
under ${WORKDIR}. This causes the build to fail with the error:

> ERROR: mytool-native-1.3.5-r0 do_install: python3 setup.py install execution 
> failed.
> ERROR: mytool-native-1.3.5-r0 do_install: Function failed: do_install (log 
> file is located at 
> /.../build/tmp/work/x86_64-linux/mytool-native/1.3.5-r0/temp/log.do_install.39172)
> ERROR: Logfile of failure stored in: 
> /.../build/tmp/work/x86_64-linux/mytool-native/1.3.5-r0/temp/log.do_install.39172
> Log data follows:
> | DEBUG: Executing shell function do_install
> | /.../build/tmp/sysroots/x86_64-linux/usr/bin/python3-native/python3: can't 
> open file 'setup.py': [Errno 2] No such file or directory
> | ERROR: python3 setup.py install execution failed.
> ERROR: Function failed: do_install (log file is located at 
> /.../build/tmp/work/x86_64-linux/mytool-native/1.3.5-r0/temp/log.do_install.39172)
> ERROR: Task 0 
> (virtual:native:/.../sources/meta-myproject/recipes-tools/mytool/mytool_1.3.5.bb,
>  do_install) failed with exit code '1'

Looking at the build recipes for Python, shows that the do_compile() step for 
Python modules (with setuptools3) is:

>distutils3_do_compile() {
>if [ x86_64-linux != aarch64-WaveserverOS-linux ]; then
>SYS=wcs
>else
>SYS=aarch64-WaveserverOS-linux
>fi
>
> STAGING_INCDIR=/localdata/projects/waveserver/build/tmp/sysroots/wcs/usr/include
>  \
>
> STAGING_LIBDIR=/localdata/projects/waveserver/build/tmp/sysroots/wcs/usr/lib \
>BUILD_SYS=x86_64-linux HOST_SYS=${SYS} \
>
> /localdata/projects/waveserver/build/tmp/sysroots/x86_64-linux/usr/bin/python3-native/python3
>  setup.py \
>build  || \
>bbfatal_log "python3 setup.py build_ext execution failed."
>}

This clearly indicates that the issue is that the build is looking for setup.py 
in the ${B} location, but it is only present in the ${S} location when `devtool 
modify` has been run. I have tried setting ${B} to ${S} explicitly in the 
recipe, but this doesn't result in ${B} being changed when I run `bitbake -e 
mytool`. I could always copy ${S} to ${B} in the recipe, but that seems a bit 
hack-ish.

Did I miss something or is there another way that Python builds are intended to 
work?

Thanks,
Aaron


Re: [yocto] how to execute bitbake menuconfig from ssh server

2017-09-13 Thread Aaron Schwartz
Tmux [0] also works well for this. I've never tried it with Screen (a
similar utility), so here are instructions using Tmux:
You need to install Tmux on the server you are using SSH to connect to,
then as soon as you SSH into the server run `$ tmux`.  Then when you run `$
bitbake -c menuconfig ...` it will automatically open a second pane on the
bottom half of your screen where you can edit your kernel config.  That
pane will close automatically when you exit the menuconfig application.
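[Editorial note: alternatively, bitbake can be told to open menuconfig in the running multiplexer via OE_TERMINAL; a local.conf sketch, assuming tmux (or screen) is installed on the build host:

```
# local.conf
OE_TERMINAL = "tmux"    # or "screen"
```
]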

I hope that helps!
Aaron


0)  https://github.com/tmux/tmux/wiki

On Wed, Sep 13, 2017 at 7:17 AM, yahia farghaly 
wrote:

>
> can you give some steps on how to do this ?
>
>
> ‌
>
> On 13 September 2017 at 10:54, Yusuke Mitsuki <
> mickey.happygolu...@gmail.com> wrote:
>
>> Hello
>>
>> You can use screen.
>> If your host is ubuntu,you can get via apt as follows.
>>
>> sudo apt install screen.
>>
>> If necessary , you can set auto or screen to OE_TERMINAL environment.
>>
>> 2017/09/13 16:09 "yahia farghaly" :
>>
>>> Hi,
>>>
>>> I am working with yocto from a remote server using ssh. i want to
>>> execute *bitbake -c menuconfig virtual/kernel*  . It fails to open
>>> since it tries to open another shell.
>>> how can i redirect output of menuconfig to my current ssh session ?
>>>
>>> --
>>> Yahia Farghaly
>>> Graduated from Faculty of Engineering - Electronics and Communications
>>> Department at Cairo University.
>>> Linkedin <https://linkedin.com/in/yahiafarghaly> - GitHub
>>> <https://github.com/yahiafarghaly>
>>>
>>>
>>>
>>> ‌
>>>
>>> --
>>> ___
>>> yocto mailing list
>>> yocto@yoctoproject.org
>>> https://lists.yoctoproject.org/listinfo/yocto
>>>
>>>
>
>
> --
> Yahia Farghaly
> Graduated from Faculty of Engineering - Electronics and Communications
> Department at Cairo University.
> Linkedin <https://linkedin.com/in/yahiafarghaly> - GitHub
> <https://github.com/yahiafarghaly>
>
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
>
>




[yocto] Problems when adding custom layer

2017-10-02 Thread Aaron Sarginson
Hello,

I am trying to add a folder and some files to a Raspberry Pi build which does
build successfully with the latest Yocto.
When added, I receive parsing errors at the inherit "systemd" line.
The idea is that the web-files folder should be transferred in its entirety,
along with rules.sh and the executable 'project' to the same folder, plus
ddd.service, which is a systemd service file for 'project'.

meta-ddd
├── conf
│   └── layer.conf
├── files
│   └── ddd.service
├── project
├── README
├── rules.sh
└── web-files
└── sub-folder

SUMMARY = "DDD software installation"
DESCRIPTION = "Transfers files and sets up systemd for DDD."

SRC_URI += "\
file://web-files/* \
file://project \
file://ddd.service \
file://rules.sh \
"
S = "${WORKDIR}"

inherit "systemd"

SYSTEMD_PACKAGES = "${PN}"

SYSTEMD_SERVICE_${PN} = " ddd.service"

FILES_${PN} += " ddd.service \
 /web-files/ \
 /opt/ddd/project \
"


do_install () {
install -d ${D}{sysconfdir}/systemd/system
install -d ${D}/opt/ddd
install -m 0755 ${WORKDIR}/web-files/ ${D}/opt/ddd
install -m 0755 ${WORKDIR}/ddd.service ${D}{sysconfdir}/systemd/system
install -m 0755 ${WORKDIR}/project ${D}/opt/ddd
}


Re: [yocto] RFC: autotooler: generation of "configure.ac" and "Makefile.am" using Kconfig

2017-10-02 Thread Aaron Schwartz
sl   Include OpenSSL],
> [case "${enableval}" in
> yes)use_openssl=true  ;;
> no) use_openssl=false ;;
> *) AC_MSG_ERROR(bad value ${enableval} for
> --enable-openssl) ;;
>  esac
> ],
> [use_openssl=false])
> AM_CONDITIONAL(CONFIG_OPENSSL, test x$openssl = xtrue)
>
> #  Pthread Libraries
> AC_ARG_WITH([pthread-include-path],
> [AS_HELP_STRING([--with-pthread-include-path],[location of the
> PThread headers, defaults to /usr/include])],
> [CFLAGS_PTHREAD="-I$withval"],
> [CFLAGS_PTHREAD="-I/usr/include"])
> AC_SUBST([PTHREAD_CFLAGS])
>
> AC_ARG_WITH([pthread-lib-path],
> [AS_HELP_STRING([--with-pthread-lib-path],[location of the
> PThread libraries, defaults to /usr/include])],
> [PTHREAD_LIBS="-L$withval" -lpthread],
> [PTHREAD_LIBS="-L/usr/include -lpthread"])
> AC_SUBST([PTHREAD_LIBS])
>
> AC_ARG_ENABLE(pthread,
> [--enable-pthread   Include PThreads],
>     [case "${enableval}" in
> yes)use_pthread=true  ;;
> no) use_pthread=false ;;
> *) AC_MSG_ERROR(bad value ${enableval} for
> --enable-pthread) ;;
>  esac
> ],
> [use_pthread=false])
> AM_CONDITIONAL(CONFIG_PTHREAD, test x$pthread = xtrue)
>
> #  Debug
> AC_ARG_ENABLE(debug,
> [--enable-debug Build with DEBUG enabled],
> [case "${enableval}" in
> yes)debug=true  ;;
> no) debug=false ;;
> *) AC_MSG_ERROR(bad value ${enableval} for --enable-debug)
> ;;
>  esac
> ],
> [debug=false])
> AM_CONDITIONAL(CONFIG_DEBUG, test x$debug = xtrue)
>
> AC_ARG_ENABLE(examples,
> [--enable-examples  Build examples],
> [case "${enableval}" in
> yes)examples=true  ;;
> no) examples=false ;;
> *) AC_MSG_ERROR(bad value ${enableval} for
> --enable-examples) ;;
>  esac
> ],
> [examples=false])
> AM_CONDITIONAL(CONFIG_EXAMPLES, test x$examples = xtrue)
>
> AC_SUBST([CFLAGS])
> AC_CONFIG_FILES([Makefile  libyocto.pc])
> AC_OUTPUT
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
>





Re: [yocto] Navigating the layer labyrinth

2017-10-12 Thread Aaron Schwartz
 Hello,

I am not sure if you've found the OpenEmbedded Layer Index [0] yet, but
that's a good resource and an example of what can be done.  I believe the
source code is available [1] and I've toyed with the idea of getting it
working locally (although I've not had the time to do so).

That could be a part of what you're looking for, at least.

Hope that helps,
Aaron



0) https://layers.openembedded.org/layerindex/branch/master/recipes/
1) http://git.yoctoproject.org/cgit/cgit.cgi/layerindex-web/tree/layerindex


On Thu, Oct 12, 2017 at 5:34 AM, Bernd  wrote:

> I am a new user for a few weeks now, trying to make a customized image
> for a toradex colibri-vf module, so far I have succeeded in the
> following disciplines:
>
> * adding the 3rd party layers that I need
> * making my own layers
> * using a .bbappend to patch the device tree
> * using a .bbappend to workaround a bug(?) in one of the freescale layers
> * writing my own recipe to install a python script
> * writing recipes for pulling additional python packages with pypi and
> setuptools3
> * writing my own image recipe
> * making it boot and run on the target platform
>
> During this learning experience I have made the following observations
> of circumstances that made it especially hard for me to get things
> done, I'm not yet really sure if this is a documentation issue or if
> it is really a missing feature but I feel I could have had a much
> *much* easier time learning and understanding the concepts and
> relationships and the inner workings of existing layers upon which I
> want to build my system if the following things were possible (and/or
> if they are already possible they should be documented in the very
> first chapter of the documentation):
>
> * Finding the *file path* of an existing recipe (or append file or
> class) *by its name* and also all existing .bbappends for it, i
> imagine something simple like bitbake --show-paths foo-bar would
> output me the small list of absolute paths of recipe files by the name
> foo-bar and all matching .bbappend files in the order in which they
> would be applied, it would show me only this small list of paths and
> not dump 100kb of unrelated information along with it. This would be
> incredibly helpful when I need to inspect an existing recipe in order
> to understand how I can bbappend it or even just to see and understand
> what it actually does.
>
> * A simple way to track the assignment of a certain variable, to
> inspect its contents and if it refers to other variables then
> recursively show their contents too (and also the path of the bb file
> where this happens), and also show which other recipes will directly
> and indirectrly depend on this variable further down the line, I
> imagine this should output two tree-like structures where one can see
> at one glance how and where all the contents of that variable come
> from and where they are going to be used. Again this should be a
> simple command that formats and outputs that (and only that)
> information in a well formatted and compact tree-like representation.
>
> * The absolute killer application would be an IDE or an editor plugin
> where I open any .bb file and can then just CTRL-click on any include,
> require, inherit, depend, rdepend, or any variable name and it would
> open another editor containing that recipe file where it is defined
> and/or populate a sidebar with a list or a tree of direct and indirect
> references to that name, backward and forward, and I could just click
> on any node of that tree an and it would open the file in the editor
> and jump to that line of code. Such a thing would be an incredibly
> helpful tool, it would make even the most complex and tangled
> labyrinth of recipes navigable with ease.
>
> Please tell me that such a thing already exists and I just have not
> found it yet.
>
> Bernd
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
>





[yocto] python-numpy RDEPENDS not honored?

2019-05-09 Thread Aaron Cohen
python-numpy is being installed on my image as a dependency of one of my
packages.

However, none of python-numpy's RDEPENDS are currently being included on
the image.

When I run bitbake -e python-numpy > numpy.txt, I see that for some reason,
the RDEPENDS is being overridden to a nearly empty value:

# $RDEPENDS_python-numpy [5 operations]
> #   append
> /home/joel-cohen/code/juevos/poky/meta/classes/distutils-base.bbclass:2
> # "${@['', '${PYTHON_PN}-core']['${CLASSOVERRIDE}' == 'class-target']}"
> #   set
> /home/joel-cohen/code/juevos/poky/meta/recipes-devtools/python-numpy/python-numpy.inc:112
> # "${PYTHON_PN}-unittest   ${PYTHON_PN}-difflib
>${PYTHON_PN}-pprint   ${PYTHON_PN}-pickle
>${PYTHON_PN}-shell   ${PYTHON_PN}-nose
>  ${PYTHON_PN}-doctest   ${PYTHON_PN}-datetime
>  ${PYTHON_PN}-distutils   ${PYTHON_PN}-misc
>${PYTHON_PN}-mmap   ${PYTHON_PN}-netclient
>  ${PYTHON_PN}-numbers   ${PYTHON_PN}-pydoc
>  ${PYTHON_PN}-pkgutil   ${PYTHON_PN}-email
>  ${PYTHON_PN}-compression
>  ${PYTHON_PN}-ctypes   ${PYTHON_PN}-threading "
> #   rename from RDEPENDS_${PN} data.py:117 [expandKeys]
> # "${PYTHON_PN}-unittest   ${PYTHON_PN}-difflib
>${PYTHON_PN}-pprint   ${PYTHON_PN}-pickle
>${PYTHON_PN}-shell   ${PYTHON_PN}-nose
>  ${PYTHON_PN}-doctest   ${PYTHON_PN}-datetime
>  ${PYTHON_PN}-distutils   ${PYTHON_PN}-misc
>${PYTHON_PN}-mmap   ${PYTHON_PN}-netclient
>  ${PYTHON_PN}-numbers   ${PYTHON_PN}-pydoc
>  ${PYTHON_PN}-pkgutil   ${PYTHON_PN}-email
>  ${PYTHON_PN}-compression
>  ${PYTHON_PN}-ctypes   ${PYTHON_PN}-threading "
> #   override[class-native]:set
> /home/joel-cohen/code/juevos/poky/meta/recipes-devtools/python-numpy/python-numpy.inc:114
> # ""
> #   override[class-native]:rename from RDEPENDS_${PN}_class-native
> data_smart.py:644 [renameVar]
> # ""
> # pre-expansion value:
> #   " ${PYTHON_PN}-subprocess "
> RDEPENDS_python-numpy=" python-subprocess "



Anyone have any ideas for why this recipe doesn't seem to be behaving
correctly?

I'm using poky 2.6 at the moment.

Thanks for any help,
Aaron


Re: [yocto] python-numpy RDEPENDS not honored?

2019-05-09 Thread Aaron Cohen
I think the problem is in the python-numpy_1.14.5.bb recipe.

It does: RDEPENDS_${PN}_class-target_append = ...

where it seems to need to be: RDEPENDS_${PN}_append_class-target = ...

Note the position of the word append.

If someone wanted to point me in the right direction, I could try to make a
qa task that would discover this sort of error if it seems feasible?
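[Editorial note: not an official QA test, but a standalone sketch of the kind of check described above — scanning recipe lines for an override placed before _append (e.g. _class-target_append), which creates a differently named variable instead of the intended conditional append. The override list is illustrative, not exhaustive:

```python
import re

# Flag assignments where a class override precedes _append, e.g.
# RDEPENDS_${PN}_class-target_append -- the author almost certainly
# meant RDEPENDS_${PN}_append_class-target (old override syntax).
SUSPICIOUS = re.compile(
    r'^[A-Za-z0-9_${}/~+.-]*_(class-(?:target|native|nativesdk))_append\b')

def check_recipe_lines(lines):
    """Return (lineno, line) pairs whose assignments look mis-ordered."""
    hits = []
    for lineno, line in enumerate(lines, 1):
        if SUSPICIOUS.match(line.strip()):
            hits.append((lineno, line.strip()))
    return hits

recipe = [
    'RDEPENDS_${PN}_class-target_append = " ${PYTHON_PN}-subprocess"',
    'RDEPENDS_${PN}_append_class-target = " ${PYTHON_PN}-subprocess"',
]
print(check_recipe_lines(recipe))  # flags line 1 only
```

A real QA task would hook this into the parsed datastore rather than raw text, but the pattern match alone already catches the python-numpy case.]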

--Aaron


On Thu, May 9, 2019 at 4:15 PM Aaron Cohen  wrote:

> python-numpy is being installed on my image as a dependency of one of my
> packages.
>
> However, none of python-numpy's RDEPENDS are currently being included on
> the image.
>
> When I run bitbake -e python-numpy > numpy.txt, I see that for some
> reason, the RDEPENDS is being overridden to a nearly empty value:
>
> # $RDEPENDS_python-numpy [5 operations]
>> #   append
>> /home/joel-cohen/code/juevos/poky/meta/classes/distutils-base.bbclass:2
>> # "${@['', '${PYTHON_PN}-core']['${CLASSOVERRIDE}' ==
>> 'class-target']}"
>> #   set
>> /home/joel-cohen/code/juevos/poky/meta/recipes-devtools/python-numpy/python-numpy.inc:112
>> # "${PYTHON_PN}-unittest   ${PYTHON_PN}-difflib
>>  ${PYTHON_PN}-pprint   ${PYTHON_PN}-pickle
>>  ${PYTHON_PN}-shell   ${PYTHON_PN}-nose
>>${PYTHON_PN}-doctest   ${PYTHON_PN}-datetime
>>${PYTHON_PN}-distutils   ${PYTHON_PN}-misc
>>  ${PYTHON_PN}-mmap
>>  ${PYTHON_PN}-netclient   ${PYTHON_PN}-numbers
>>  ${PYTHON_PN}-pydoc   ${PYTHON_PN}-pkgutil
>>  ${PYTHON_PN}-email   ${PYTHON_PN}-compression
>>  ${PYTHON_PN}-ctypes   ${PYTHON_PN}-threading "
>> #   rename from RDEPENDS_${PN} data.py:117 [expandKeys]
>> # "${PYTHON_PN}-unittest   ${PYTHON_PN}-difflib
>>  ${PYTHON_PN}-pprint   ${PYTHON_PN}-pickle
>>  ${PYTHON_PN}-shell   ${PYTHON_PN}-nose
>>${PYTHON_PN}-doctest   ${PYTHON_PN}-datetime
>>${PYTHON_PN}-distutils   ${PYTHON_PN}-misc
>>  ${PYTHON_PN}-mmap
>>  ${PYTHON_PN}-netclient   ${PYTHON_PN}-numbers
>>  ${PYTHON_PN}-pydoc   ${PYTHON_PN}-pkgutil
>>  ${PYTHON_PN}-email   ${PYTHON_PN}-compression
>>  ${PYTHON_PN}-ctypes   ${PYTHON_PN}-threading "
>> #   override[class-native]:set
>> /home/joel-cohen/code/juevos/poky/meta/recipes-devtools/python-numpy/python-numpy.inc:114
>> # ""
>> #   override[class-native]:rename from RDEPENDS_${PN}_class-native
>> data_smart.py:644 [renameVar]
>> # ""
>> # pre-expansion value:
>> #   " ${PYTHON_PN}-subprocess "
>> RDEPENDS_python-numpy=" python-subprocess "
>
>
>
> Anyone have any ideas for why this recipe doesn't seem to be behaving
> correctly?
>
> I'm using poky 2.6 at the moment.
>
> Thanks for any help,
> Aaron
>


[yocto] How do IMAGE_FEATURES+="tools-sdk" and TOOLCHAIN_TARGET_TASK interact?

2019-05-09 Thread Aaron Cohen
Looking through the recipes and classes, it seems that adding "tools-sdk"
to the IMAGE_FEATURES of an image just adds a relatively arbitrary list of
"sdk-ish" packages to the image from packagegroup-core-sdk

I had assumed that tools-sdk would duplicate the logic of "populate_sdk"
somehow, so that what you end up on the target is equivalent to what is in
a generated SDK.

In particular, if I add something to TOOLCHAIN_TARGET_TASK, it doesn't seem
to also be added to anything that tools-sdk cares about.

Am I missing something?
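[Editorial note: if the goal is parity between the on-target toolchain and TOOLCHAIN_TARGET_TASK, one hedged workaround is to install that variable's contents into the image directly. This is an untested assumption, not a documented equivalence:

```
# image recipe sketch (untested assumption)
IMAGE_INSTALL_append = " ${TOOLCHAIN_TARGET_TASK}"
```
]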

Thanks,
Aaron


Re: [yocto] python-numpy RDEPENDS not honored?

2019-05-14 Thread Aaron Cohen
Is there a place I should post this that would get more attention?

On Thu, May 9, 2019 at 4:48 PM Aaron Cohen  wrote:

> I think the problem is in the python-numpy_1.14.5.bb recipe.
>
> It does: RDEPENDS_${PN}_class-target_append = ...
>
> where it seems to need to be: RDEPENDS_${PN}_append_class-target = ...
>
> Note the position of the word append.
>
> If someone wanted to point me in the right direction, I could try to make
> a qa task that would discover this sort of error if it seems feasible?
>
> --Aaron
>
>
> On Thu, May 9, 2019 at 4:15 PM Aaron Cohen  wrote:
>
>> python-numpy is being installed on my image as a dependency of one of my
>> packages.
>>
>> However, none of python-numpy's RDEPENDS are currently being included on
>> the image.
>>
>> When I run bitbake -e python-numpy > numpy.txt, I see that for some
>> reason, the RDEPENDS is being overridden to a nearly empty value:
>>
>> # $RDEPENDS_python-numpy [5 operations]
>>> #   append
>>> /home/joel-cohen/code/juevos/poky/meta/classes/distutils-base.bbclass:2
>>> # "${@['', '${PYTHON_PN}-core']['${CLASSOVERRIDE}' ==
>>> 'class-target']}"
>>> #   set
>>> /home/joel-cohen/code/juevos/poky/meta/recipes-devtools/python-numpy/python-numpy.inc:112
>>> # "${PYTHON_PN}-unittest   ${PYTHON_PN}-difflib
>>>  ${PYTHON_PN}-pprint   ${PYTHON_PN}-pickle
>>>  ${PYTHON_PN}-shell   ${PYTHON_PN}-nose
>>>${PYTHON_PN}-doctest   ${PYTHON_PN}-datetime
>>>${PYTHON_PN}-distutils   ${PYTHON_PN}-misc
>>>  ${PYTHON_PN}-mmap
>>>  ${PYTHON_PN}-netclient   ${PYTHON_PN}-numbers
>>>  ${PYTHON_PN}-pydoc   ${PYTHON_PN}-pkgutil
>>>  ${PYTHON_PN}-email   ${PYTHON_PN}-compression
>>>  ${PYTHON_PN}-ctypes   ${PYTHON_PN}-threading "
>>> #   rename from RDEPENDS_${PN} data.py:117 [expandKeys]
>>> # "${PYTHON_PN}-unittest   ${PYTHON_PN}-difflib
>>>  ${PYTHON_PN}-pprint   ${PYTHON_PN}-pickle
>>>  ${PYTHON_PN}-shell   ${PYTHON_PN}-nose
>>>${PYTHON_PN}-doctest   ${PYTHON_PN}-datetime
>>>${PYTHON_PN}-distutils   ${PYTHON_PN}-misc
>>>  ${PYTHON_PN}-mmap
>>>  ${PYTHON_PN}-netclient   ${PYTHON_PN}-numbers
>>>  ${PYTHON_PN}-pydoc   ${PYTHON_PN}-pkgutil
>>>  ${PYTHON_PN}-email   ${PYTHON_PN}-compression
>>>  ${PYTHON_PN}-ctypes   ${PYTHON_PN}-threading "
>>> #   override[class-native]:set
>>> /home/joel-cohen/code/juevos/poky/meta/recipes-devtools/python-numpy/python-numpy.inc:114
>>> # ""
>>> #   override[class-native]:rename from RDEPENDS_${PN}_class-native
>>> data_smart.py:644 [renameVar]
>>> # ""
>>> # pre-expansion value:
>>> #   " ${PYTHON_PN}-subprocess "
>>> RDEPENDS_python-numpy=" python-subprocess "
>>
>>
>>
>> Anyone have any ideas for why this recipe doesn't seem to be behaving
>> correctly?
>>
>> I'm using poky 2.6 at the moment.
>>
>> Thanks for any help,
>> Aaron
>>
>
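The QA task Aaron proposes above, one that would flag override suffixes written after the variable name but before `_append` (as in `RDEPENDS_${PN}_class-target_append`), could be prototyped as a simple regex scan over recipe text. A minimal sketch, purely hypothetical and not an actual oe-core QA check; the regex only covers the common `class-*` overrides:

```python
import re

# Misordered form: an override suffix (e.g. class-target) written *before*
# _append/_prepend, as in RDEPENDS_${PN}_class-target_append.  The working
# form puts the operation first: RDEPENDS_${PN}_append_class-target.
MISORDERED = re.compile(
    r'^\s*[A-Za-z0-9_${}]+?'                # variable name, e.g. RDEPENDS_${PN}
    r'_class-(?:target|native|nativesdk)'   # override suffix...
    r'_(?:append|prepend)\b'                # ...followed by the operation: wrong order
)

def find_misordered(recipe_text):
    """Return (lineno, line) pairs whose assignments look misordered."""
    return [(n, line.strip())
            for n, line in enumerate(recipe_text.splitlines(), 1)
            if MISORDERED.match(line)]

recipe = '''\
RDEPENDS_${PN}_class-target_append = " ${PYTHON_PN}-subprocess "
RDEPENDS_${PN}_append_class-target = " ${PYTHON_PN}-subprocess "
'''
print(find_misordered(recipe))   # only line 1 is flagged
```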


[yocto] Syntax for zero-padding a number in a recipe?

2019-06-28 Thread Aaron Biver
Is there such a thing as zero-padding a number in a recipe (or vice versa)?
I'd be just as happy starting with a zero-padded number and converting it
to non-zero-padded.
The crux of the dilemma is that I must have zero-padding for file-naming,
but I CAN'T have zero padding in these numbers when I pass them to the C
code (or the C code treats them as octal).

I'd like to start with:
VERSION_MAJOR="1"
VERSION_MINOR="19"

And turn this into "01" and "19" for purposes of naming the output file
from the recipe.

I've tried the bash-style printf -v, but the recipe kicks the printf lines
out with a parse error.

VERSION_MAJOR="01"
VERSION_MINOR="19"

VERSION_MAJOR_STR=""
VERSION_MINOR_STR=""

printf -v VERSION_MAJOR_STR "%02d" VERSION_MAJOR
printf -v VERSION_MINOR_STR "%02d" VERSION_MINOR


[yocto] Creating a file within a recipe

2019-06-28 Thread Aaron Biver
I'd like to be able to create a file using the cat command in a recipe.
The sub-goal is to have the file created somewhere I can actually find it:

do_create_tebf0808() {
cat > tebf0808.bif <<EOF
all:
{
${LATEST_TEBF0808_FW}
}
EOF
}

This fails at the "EOF" line:
ERROR: ParseError at ...firmware.bb:43: unparsed line: 'EOF'

Maybe sed would work better?


Re: [yocto] Creating a file within a recipe

2019-06-28 Thread Aaron Biver
That seems to work.  Thanks!

On Fri, Jun 28, 2019 at 3:53 PM Burton, Ross  wrote:

> The bash parser does have some bugs, and I think you just found one.
> Probably easier to have a template on disk in SRC_URI, and sed in the
> value you want.
>
> Ross
>
> On Fri, 28 Jun 2019 at 19:35, Aaron Biver  wrote:
> >
> > I'd like to be able to create a file using the cat command in a recipe.
> The sub-goal is to have the file created somewhere I can actually find it:
> >
> > do_create_tebf0808() {
> > cat > tebf0808.bif <<EOF
> > all:
> > {
> > ${LATEST_TEBF0808_FW}
> > }
> > EOF
> > }
> >
> > This fails at the "EOF" line:
> > ERROR: ParseError at ...firmware.bb:43: unparsed line: 'EOF'
> >
> > Maybe sed would work better?
> >
> >
> > --
> > ___
> > yocto mailing list
> > yocto@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/yocto
>
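Ross's template-plus-sed suggestion can be sketched in plain Python as well; `string.Template` performs the same substitution a sed command in a task would. The template name and variable below are illustrative, not taken from an actual recipe:

```python
from string import Template

# Hypothetical template that would be shipped via SRC_URI
# (e.g. files/tebf0808.bif.in) instead of a heredoc in the task.
BIF_TEMPLATE = """\
all:
{
$LATEST_TEBF0808_FW
}
"""

def render_bif(fw_name):
    # Roughly what "sed in the value you want" does over the template file.
    return Template(BIF_TEMPLATE).substitute(LATEST_TEBF0808_FW=fw_name)

print(render_bif("myfirmware.elf"))
```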


Re: [yocto] Syntax for zero-padding a number in a recipe?

2019-07-01 Thread Aaron Biver
[Sending again because I initially replied to Claudius only, and not to the
mailing list]

I found using back-ticks with printf also works in a recipe:

NUMBER = "1"
NUMBER_PADDED="FOO_`printf "%02d" ${NUMBER}`_MOARFOO"

On Fri, Jun 28, 2019 at 4:48 PM Claudius Heine  wrote:

> Hi,
>
> Quoting Aaron Biver (2019-06-28 18:44:40)
> > Is there such a thing as zero padding a number in a recipe (or vice
> versa?
> > I'd be just as happy starting with a zero-padded number, and converting
> it
> > to non-zero-padded.
>
> In bitbake you could do this with inline python:
>
> NUMBER = "1"
> NUMBER_PADDED = "${@'{:02d}'.format(int(d.getVar('NUMBER', True)))}"
> NUMBER_PADDED2 = "0012"
> NUMBER2 = "${@str(int(d.getVar('NUMBER_PADDED2', True)))}"
>
> I have not tested this and it could possible be done more elegant, but
> it should work.
>
> regards,
> Claudius
>
> > The crux of the dilemma is that I must have zero-padding for file-naming,
> > but I CAN'T have zero padding in these numbers when I pass them to the C
> > code (or the C code treats them as octal).
> >
> > I'd like to start with:
> > VERSION_MAJOR="1"
> > VERSION_MINOR="19"
> >
> > And turn this into "01" and "19" for purposes of naming the output file
> > from the recipe.
> >
> > I've tried the bash-style printf -v, but the recipe kicks the printf
> lines
> > out with a parse error.
> >
> > VERSION_MAJOR="01"
> > VERSION_MINOR="19"
> >
> > VERSION_MAJOR_STR=""
> > VERSION_MINOR_STR=""
> >
> > printf -v VERSION_MAJOR_STR "%02d" VERSION_MAJOR
> > printf -v VERSION_MINOR_STR "%02d" VERSION_MINOR
> >
> >
> > --
> > ___
> > yocto mailing list
> > yocto@yoctoproject.org
> > https://lists.yoctoproject.org/listinfo/yocto
>
> --
> DENX Software Engineering GmbH,  Managing Director: Wolfgang Denk
> HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
> Phone: (+49)-8142-66989-54 Fax: (+49)-8142-66989-80 Email: c...@denx.de
>
>PGP key: 6FF2 E59F 00C6 BC28 31D8 64C1 1173 CB19 9808 B153
>  Keyserver: hkp://pool.sks-keyservers.net
>
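Both conversions discussed in this thread, padding a number for file naming and stripping the padding before the numbers reach C code (where a leading zero means octal), can be checked in plain Python. Note that bitbake's d.getVar() returns a string, so the inline-python version needs an int() conversion before formatting:

```python
def pad(number_str, width=2):
    # d.getVar() hands back a string, so convert first;
    # "{:02d}".format("1") would raise a ValueError.
    return "{:0{w}d}".format(int(number_str), w=width)

def unpad(padded_str):
    # Parsing explicitly in base 10 ignores leading zeros,
    # sidestepping C's leading-zero-means-octal behaviour.
    return str(int(padded_str, 10))

print(pad("1"), pad("19"), unpad("0012"))   # 01 19 12
```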


[yocto] RDEPENDS in a containerized world

2019-07-26 Thread Aaron Cohen
I'm not sure if this is the proper venue, but I'll send it here hoping for
any insight.

I'm developing a containerized system. Ideally the host will be somewhat
minimal, and most of the functionality of the system will run inside docker
containers.

I have most of this working to some extent now, but am beginning to run
into an issue.

How do I handle runtime dependencies that I "know" are provided by the host?

For example, I have one particular application that requires gpsd at
runtime.

I know that gpsd is installed on the host, so would prefer that it not be
installed again in the docker container for this application.

Do I have to edit the recipe for the application and remove the
RDEPENDS="gpsd" line, or is there some more clever way that I can specify
that the RDEPENDS has been fulfilled for the container that is being built?

Thanks,
Aaron


[yocto] Working with symbolic links in recipes?

2019-09-06 Thread Aaron Biver
In short, I'd like to have my recipe know the name of a file that is a
symbolic link, as with "readlink" at the Linux prompt - preferably before
populating SRC_URI. This is not working (so I guess arbitrary shell
scripts are not doable in recipes):
LINK_TARGET=`readlink -f ${LATEST_VER}`
SRC_URI = "file://${LINK_TARGET}"

Reasoning, if it matters, and in case there is a better way to do what I am
doing:

I have a bitbake recipe that includes a subproject that contains a binary
that I want to "wrap".  the binaries change occasionally, with the revision
reflected in the name.
So, one version might include "myfile-00-00-01.bin" and the subsequent
version is "myfile-00-00-02.bin"

In the past, I have edited my bitbake recipe to contain the substring of
the most recent version of the binary file... so updating the subproject
that has these binaries necessitates updating the substring, say updating a
variable from "00-00-01" to "00-00-02".

I'd prefer to have this done automatically, BUT STILL HAVE THE VERSION
SUBSTRING IN THE BITBAKE RECIPE.  This is because I want my recipe to
create a wrapped file that has the version substring in it.

I'd really like to have the subproject have a "myfile-latest.bin" which is
a link to "myfile-00-00-02.bin", and have the recipe use readlink to get
the target of the link before it populates SRC_URI, so it can know the link
target's "real" name.

Right now, including the name of the link in SRC_URI causes a file that is
not a link to get copied into the build area.
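Since SRC_URI is expanded at parse time, inline Python could resolve the link before the fetcher runs, something like `SRC_URI = "file://${@os.path.basename(os.path.realpath('...'))}"` (an untested assumption, not a verified recipe). The resolution step itself, modelled in plain Python:

```python
import os
import tempfile

# Model the layout from the mail: myfile-latest.bin -> myfile-00-00-02.bin
workdir = tempfile.mkdtemp()
real = os.path.join(workdir, "myfile-00-00-02.bin")
open(real, "w").close()
link = os.path.join(workdir, "myfile-latest.bin")
os.symlink(real, link)

# What the recipe wants to know: the link target's "real" name,
# and the version substring embedded in it.
target_name = os.path.basename(os.path.realpath(link))
version = target_name[len("myfile-"):-len(".bin")]
print(target_name, version)   # myfile-00-00-02.bin 00-00-02
```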


[yocto] visual studio code packages or building instructions?

2019-11-01 Thread Aaron Solochek
I would like to get visual studio code on my NXP i.MX8. If someone is
aware of an aarch64 rpm of it, that would be the easiest. Alternatively,
if anyone knows how to build it using bitbake, I can build it myself.

Thank you.

-Aaron


Re: [yocto] visual studio code packages or building instructions?

2019-11-01 Thread Aaron Solochek
Well I grabbed the .deb that one of those links mentioned and converted it to 
an rpm, but of course there are a ton of unmet dependencies, so I might have to 
build it anyway.

I found these instructions for building it, which are pinned to an older 
version (which is probably fine)

https://github.com/futurejones/code-oss-aarch64

I have gotten most of the dependencies built with bitbake, except for the  x11 
stuff: 

ERROR: Nothing RPROVIDES 'packagegroup-core-x11' (but 
/home/aarons/sri/bullitt/nxp/imx-yocto-bsp/sources/meta-fsl-bsp-release/imx/meta-sdk/dynamic-layers/qt5-layer/recipes-fsl/images/fsl-image-qt5-validation-imx.bb
 RDEPENDS on or otherwise requires it)
packagegroup-core-x11 was skipped: missing required distro feature 'x11' (not 
in DISTRO_FEATURES)
NOTE: Runtime target 'packagegroup-core-x11' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['packagegroup-core-x11']
ERROR: Required build target 'fsl-image-qt5-validation-imx' has no buildable 
providers.
Missing or unbuildable dependency chain was: ['fsl-image-qt5-validation-imx', 
'packagegroup-core-x11']

But this is my local.conf:

MACHINE ??= 'imx8mqevk'
DISTRO ?= 'fsl-imx-wayland'
PACKAGE_CLASSES ?= "package_rpm"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
USER_CLASSES ?= "buildstats image-mklibs image-prelink"
PATCHRESOLVE = "noop"
BB_DISKMON_DIRS ??= "\
STOPTASKS,${TMPDIR},1G,100K \
STOPTASKS,${DL_DIR},1G,100K \
STOPTASKS,${SSTATE_DIR},1G,100K \
STOPTASKS,/tmp,100M,100K \
ABORT,${TMPDIR},100M,1K \
ABORT,${DL_DIR},100M,1K \
ABORT,${SSTATE_DIR},100M,1K \
ABORT,/tmp,10M,1K"
PACKAGECONFIG_append_pn-qemu-native = " sdl"
PACKAGECONFIG_append_pn-nativesdk-qemu = " sdl"
CONF_VERSION = "1"
#IMAGE_FEATURES_append = " package-management tools-sdk  x11-base x11"
IMAGE_FEATURES += "package-management"
IMAGE_FEATURES += "tools-sdk"
IMAGE_FEATURES += "x11-base"
IMAGE_FEATURES += "x11"


As you can see, I tried adding x11 both using IMAGE_FEATURES += as well as 
IMAGE_FEATURES_append (where I would then comment out the += lines)
Why is x11 never getting added to the DISTRO_FEATURES? I also tried putting x11 
in DISTRO_FEATURES and DISTRO_FEATURES_append. 

What is the correct thing here?

I also installed yarn according to those instructions, which worked (it said it
was successfully installed); however, when I tried to use it I got an error
about not finding gulp:

# yarn watch
yarn run v1.19.1
$ gulp watch --max_old_space_size=4095
/bin/sh: gulp: command not found
error Command failed with exit code 127.

I don't know anything about yarn. Is gulp another package I need to install, or 
should it be part of yarn? 

Thank you.

-Aaron

-----Original Message-----
From: Ross Burton  
Sent: Friday, November 1, 2019 12:52 PM
To: Aaron Solochek ; yocto@yoctoproject.org
Subject: Re: [yocto] visual studio code packages or building instructions?

On 01/11/2019 16:35, Aaron Solochek wrote:
> I would like to get visual studio code on my NXP i.MX8. If someone is 
> aware of a aarch64 rpm of it, that would be the easiest. 
> Alternatively, if anyone knows how to build it using bitbake, I can build it 
> myself.

Well Microsoft only make x86 binaries available:

https://code.visualstudio.com/#alt-downloads

So you'll have to follow the instructions at
https://github.com/Microsoft/vscode/wiki/How-to-Contribute#build-and-run

They use Yarn, so you'll have to package that first.

Comments like
https://github.com/microsoft/vscode/issues/6442#issuecomment-509605292
on the bug asking for RPi support isn't exactly encouraging though.

Ross


Re: [yocto] visual studio code packages or building instructions?

2019-11-01 Thread Aaron Solochek



-----Original Message-----
From: Ross Burton  
Sent: Friday, November 1, 2019 2:04 PM
To: Aaron Solochek ; yocto@yoctoproject.org
Subject: Re: [yocto] visual studio code packages or building instructions?

On 01/11/2019 17:51, Aaron Solochek wrote:
> Well I grabbed the .deb that one of those links mentioned and converted it to 
> an rpm, but of course there are a ton of unmet dependencies, so I might have 
> to build it anyway.
> 
> I found these instructions for building it, which are pinned to an 
> older version (which is probably fine)
> 
> https://github.com/futurejones/code-oss-aarch64
> 
> I have gotten most of the dependencies built with bitbake, except for the  
> x11 stuff:
> 
> ERROR: Nothing RPROVIDES 'packagegroup-core-x11' (but 
> /home/aarons/sri/bullitt/nxp/imx-yocto-bsp/sources/meta-fsl-bsp-releas
> e/imx/meta-sdk/dynamic-layers/qt5-layer/recipes-fsl/images/fsl-image-q
> t5-validation-imx.bb RDEPENDS on or otherwise requires it)
> packagegroup-core-x11 was skipped: missing required distro feature 
> 'x11' (not in DISTRO_FEATURES)
> NOTE: Runtime target 'packagegroup-core-x11' is unbuildable, removing...
> Missing or unbuildable dependency chain was: ['packagegroup-core-x11']
> ERROR: Required build target 'fsl-image-qt5-validation-imx' has no buildable 
> providers.
> Missing or unbuildable dependency chain was: 
> ['fsl-image-qt5-validation-imx', 'packagegroup-core-x11']
> 
> But this is my local.conf:
> 
> MACHINE ??= 'imx8mqevk'
> DISTRO ?= 'fsl-imx-wayland'
> PACKAGE_CLASSES ?= "package_rpm"
> EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
> USER_CLASSES ?= "buildstats image-mklibs image-prelink"
> PATCHRESOLVE = "noop"
> BB_DISKMON_DIRS ??= "\
>  STOPTASKS,${TMPDIR},1G,100K \
>  STOPTASKS,${DL_DIR},1G,100K \
>  STOPTASKS,${SSTATE_DIR},1G,100K \
>  STOPTASKS,/tmp,100M,100K \
>  ABORT,${TMPDIR},100M,1K \
>  ABORT,${DL_DIR},100M,1K \
>  ABORT,${SSTATE_DIR},100M,1K \
>  ABORT,/tmp,10M,1K"
> PACKAGECONFIG_append_pn-qemu-native = " sdl"
> PACKAGECONFIG_append_pn-nativesdk-qemu = " sdl"
> CONF_VERSION = "1"
> #IMAGE_FEATURES_append = " package-management tools-sdk  x11-base x11"
> IMAGE_FEATURES += "package-management"
> IMAGE_FEATURES += "tools-sdk"
> IMAGE_FEATURES += "x11-base"
> IMAGE_FEATURES += "x11"
> 
> 
> As you can see, I tried adding x11 both using IMAGE_FEATURES += as 
> well as IMAGE_FEATURES_append (where I would then comment out the += lines) 
> Why is x11 never getting added to the DISTRO_FEATURES? I also tried putting 
> x11 in DISTRO_FEATURES and DISTRO_FEATURES_append.
> 
> What is the correct thing here?

Adding x11 to IMAGE_FEATURES doesn't achieve anything, as x11 isn't an 
IMAGE_FEATURE.  It's a DISTRO_FEATURE.

DISTRO ?= 'fsl-imx-wayland'

https://github.com/Freescale/meta-freescale-distro/blob/master/conf/distro/fsl-wayland.conf
says that this explicitly does DISTRO_FEATURES_remove = "x11" so you can't add 
it back.

I suggest you use a DISTRO that supports X11.

Oh, duh. I need fsl-imx-xwayland. 

Thank you!

-Aaron
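The behaviour Ross describes follows from bitbake's variable semantics: _remove is applied after all appends when the value is expanded, so a distro's DISTRO_FEATURES_remove = "x11" wins no matter what local.conf adds. A deliberately simplified model of that ordering (not bitbake's actual DataSmart code):

```python
def final_value(base, appends=(), removes=()):
    # Simplified model: appends are folded in first, then every
    # item named in a _remove is filtered out of the result.
    items = base.split()
    for a in appends:
        items += a.split()
    removed = {r for chunk in removes for r in chunk.split()}
    return " ".join(i for i in items if i not in removed)

# fsl-imx-wayland removes x11, so appending it in local.conf is futile:
print(final_value("wayland pam", appends=["x11"], removes=["x11"]))   # wayland pam
```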


[yocto] Question about automatic dependencies when native packages are used

2018-04-13 Thread Aaron M. Biver
I'm having some trouble with native dependencies in my recipe, and I'm 
wondering if anyone has any tips.  I'm building with the petalinux toolset on 
an x64 linux for an arm architecture.

I have a recipe myapp, which has a native version, myapp-native.  myapp depends
on myapp-native, as this builds an application used in the build of myapp.
myapp also depends on a kernel module, mymodule.

So, an excerpt from myapp.bb:

DEPENDS += "mymodule myapp-native"
BBCLASSEXTEND = "native"

The problem is that myapp-native is trying to include mymodule-native.  This
complains:

ERROR: Nothing PROVIDES 'mymodule-native' (but 
virtual:native:/path/.../myapp/myapp.bb DEPENDS on or otherwise requires it)

I've tried adding a 'BBCLASSEXTEND = "native"' to mymodule.bb, but that 
generated build errors.

I've tried overriding the dependency in myapp.bb with
DEPENDS-native = ""

And then I tried adding this to myapp.bb
DEPENDS_${PN}-native = ""

I've also tried allowing the mymodule-native package to be empty by adding this 
to mymodule.bb
ALLOW_EMPTY_${PN} = "1"

As well as this:
ALLOW_EMPTY_${PN}-native = "1"

But nothing seems to work... it keeps trying to find mymodule-native.  I'd like 
a way to either override this dependency or make mymodule-native an empty 
package.  I'm all out of random spaghetti to throw against this wall, and I'm 
hoping someone has some experience with this.
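A likely explanation for the mymodule-native error: when a recipe is built through BBCLASSEXTEND = "native", its dependencies are remapped so that each DEPENDS entry points at a -native counterpart, and nothing provides mymodule-native here. A simplified model of that remapping (the usual fix, offered as an untested suggestion, is to scope the dependency to the target variant, e.g. DEPENDS_append_class-target = " mymodule"):

```python
def native_depends(depends):
    # Simplified model of the class-native dependency remapping:
    # each dependency of the virtual:native variant gets a -native
    # suffix unless it already carries one.
    return " ".join(dep if dep.endswith("-native") else dep + "-native"
                    for dep in depends.split())

# myapp.bb has: DEPENDS += "mymodule myapp-native"
print(native_depends("mymodule myapp-native"))
# -> mymodule-native myapp-native  (so bitbake now wants mymodule-native)
```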





[yocto] [PATCH] [yocto-autobuilder2] Fixes on schedulers default build-appliance

2018-06-10 Thread aaron . chun . yew . chan
From: Aaron Chan 

---
 schedulers.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/schedulers.py b/schedulers.py
index 8f3dbc5..02e4340 100644
--- a/schedulers.py
+++ b/schedulers.py
@@ -63,7 +63,7 @@ def props_for_builder(builder):
 props.append(util.BooleanParameter(
 name="deploy_artifacts",
 label="Do we want to deploy artifacts? ",
-default=Boolean
+default=False
 ))
 
 props = props + repos_for_builder(builder)
-- 
2.7.4



Re: [yocto] yocto Digest, Vol 93, Issue 45

2018-06-26 Thread Chan, Aaron Chun Yew
Hi Richard,

Did you manage to take a look at this patch ?

Regards,
Aaron

-Original Message-
From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of yocto-requ...@yoctoproject.org
Sent: Wednesday, June 13, 2018 6:23 PM
To: yocto@yoctoproject.org
Subject: yocto Digest, Vol 93, Issue 45

Send yocto mailing list submissions to
yocto@yoctoproject.org

To subscribe or unsubscribe via the World Wide Web, visit
https://lists.yoctoproject.org/listinfo/yocto
or, via email, send a message with subject or body 'help' to
yocto-requ...@yoctoproject.org

You can reach the person managing the list at
yocto-ow...@yoctoproject.org

When replying, please edit your Subject line so it is more specific than "Re: 
Contents of yocto digest..."


Today's Topics:

   1. [PATCH] [yocto-autobuilder2] Add support to enable Manual BSP
  on LAVA (Aaron Chan)
   2. Re: mono-native is trying to install files into a shared
  area... (Alex Lennon)
   3. Re: Image specific configuration files (Iván Castell)


--

Message: 1
Date: Wed, 13 Jun 2018 15:07:58 +0800
From: Aaron Chan 
To: yocto@yoctoproject.org
Cc: Aaron Chan 
Subject: [yocto] [PATCH] [yocto-autobuilder2] Add support to enable
Manual  BSP on LAVA
Message-ID:
<1528873678-17502-1-git-send-email-aaron.chun.yew.c...@intel.com>

Signed-off-by: Aaron Chan 
---
 config.py | 9 +
 schedulers.py | 9 +++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/config.py b/config.py
index 2568768..d21948f 100644
--- a/config.py
+++ b/config.py
@@ -80,3 +80,12 @@ builder_to_workers = {
 "nightly-deb-non-deb": [],
 "default": workers
 }
+
+# Supported LAVA-Linaro on Yocto Project
+# Enable Automated Manual(Hardware) BSP Test case(s)
+enable_hw_test = {
+"enable": False,
+"lava_user"   : "",
+"lava_token"  : "",
+"lava_server" : ":"
+}
diff --git a/schedulers.py b/schedulers.py
index 8f3dbc5..2c1b8e1 100644
--- a/schedulers.py
+++ b/schedulers.py
@@ -63,9 +63,14 @@ def props_for_builder(builder):
 props.append(util.BooleanParameter(
 name="deploy_artifacts",
 label="Do we want to deploy artifacts? ",
-default=Boolean
+default=False
+))
+if builder in ['nightly-x86-64', 'nightly-x86-64-lsb', 'nightly-arm', 'nightly-arm-lsb', 'nightly-arm64']:
+props.append(util.BooleanParameter(
+name="enable_hw_test",
+label="Enable BSP Test case(s) on Hardware?",
+default=config.enable_hw_test['enable']
 ))
-
 props = props + repos_for_builder(builder)
 return props
 
--
2.7.4



--

Message: 2
Date: Wed, 13 Jun 2018 10:50:49 +0100
From: Alex Lennon 
To: Craig McQueen 
Cc: "yocto@yoctoproject.org" 
Subject: Re: [yocto] mono-native is trying to install files into a
shared area...
Message-ID: 
Content-Type: text/plain; charset="utf-8"; Format="flowed"



On 12/06/2018 05:43, Khem Raj wrote:
>
>
> On Mon, Jun 11, 2018 at 8:36 PM Craig McQueen 
> mailto:craig.mcqu...@innerrange.com>> 
> wrote:
>
> I wrote:
> >
> > I wrote:
> > >
> > > Lately, I'm trying to upgrade to a later version of mono, 5.4.1.6.
> > > When I try to do a build of my Yocto image, bitbake gets to
> the end of
> > > building mono- native, and then gets an error:
> > >
> > >
> > > ERROR: mono-native-5.4.1.6-r0 do_populate_sysroot: The recipe
> mono-
> > > native is trying to install files into a shared area when
> those files already
> > exist.
> > > Those files and their manifest location are:
> > > /home/craigm/yocto/poky/build/tmp/sysroots/x86_64-
> > > linux/usr/lib/mono/lldb/mono.py
> > >  Matched in b''
> > > /home/craigm/yocto/poky/build/tmp/sysroots/x86_64-
> > > linux/usr/lib/mono/4.6.1-api/System.Web.Http.SelfHost.dll
> > >  Matched in b''
> > > ...
> > > /home/craigm/yocto/poky/build/tmp/sysroots/x86_64-
> > >
> >
> linux/usr/lib/mono/xbuild/14.0/bin/MSBuild/Microsoft.Build.CommonTypes.
> > > xsd
> > >  Matched in b''
> > > /home/craigm/yocto/poky/build/tmp/sysroots/x86_64-
> > >
> linux/usr/lib/mono/xbuild/14.0/bin/MSBuild/Microsoft.Build.Core.xsd
> > >  Matched in b'

Re: [yocto] [PATCH] [yocto-ab-helper] Add qemux86, qemux86-64 WIC testimage buildset-config

2018-06-26 Thread Chan, Aaron Chun Yew
Hi Richard

Fundamentally, we are performing the QA release for Yocto with this buildset
config covering qemux86 and qemux86-64 on the core-image-lsb-sdk image.
If the community feels that the existing coverage is sufficient, we can omit
this support from future QA cycles.

If this is still "good to have" I will resend a "better description" in the 
next submission.

Cheers,
Aaron

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Tuesday, June 26, 2018 6:30 PM
To: Chan, Aaron Chun Yew ; 
yocto@yoctoproject.org; Yeoh, Ee Peng 
Subject: Re: [PATCH] [yocto-ab-helper] Add qemux86, qemux86-64 WIC testimage 
buildset-config

On Tue, 2018-06-26 at 16:51 +0800, Aaron Chan wrote:
> Signed-off-by: Aaron Chan 
> ---
>  config.json | 32 +++-
>  1 file changed, 31 insertions(+), 1 deletion(-)

This patch looks like it might be correct. I say "might" as your commit message 
tells me what you did (kind of) but not why.

I say "kind of" as you're using core-image-lsb-sdk and poky-lsb here, probably 
for a good reason but you don't say why.

The key thing I need to know is what extra testing this provides that isn't 
already covered elsewhere on the autobuilder - why do we need to spend the time 
testing this?

Can you resend with a better description of why we need to do this please?

Cheers,

Richard

> 


Re: [yocto] [PATCH 1/2] [yocto-ab-helper] Add qemux86, qemux86-64 WIC testimage buildset-config

2018-07-02 Thread Chan, Aaron Chun Yew
Hi Richard,

Kindly ignore this patch as it contains the old description. I'll send a new
patch explaining why this buildset-config is needed.

Cheers,
Aaron

From: Chan, Aaron Chun Yew
Sent: Monday, July 02, 2018 6:30 PM
To: richard.pur...@linuxfoundation.org; yocto@yoctoproject.org
Cc: Chan, Aaron Chun Yew
Subject: [PATCH 1/2] [yocto-ab-helper] Add qemux86, qemux86-64 WIC testimage 
buildset-config

Signed-off-by: Aaron Chan 
---
 config.json | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/config.json b/config.json
index c9dc21e..3c1f989 100644
--- a/config.json
+++ b/config.json
@@ -383,6 +383,36 @@
 ],
 "step1" : {
 "MACHINE" : "qemux86",
+"SDKMACHINE" : "x86_64",
+"DISTRO" : "poky-lsb",
+"BBTARGETS" : "wic-tools core-image-lsb-sdk",
+"EXTRACMDS" : [
+"wic create directdisk -e core-image-lsb-sdk -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/",
+"wic create directdisk-gpt -e core-image-lsb-sdk -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/",
+"wic create mkefidisk -e core-image-lsb-sdk -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-lsb-sdk/"
+],
+"extravars" : [
+"IMAGES_FSTYPES += ' wic'"
+],
+"SANITYTARGETS" : "core-image-lsb-sdk:do_testimage"
+},
+"step2" : {
+"MACHINE" : "qemux86-64",
+"SDKMACHINE" : "x86_64",
+"DISTRO" : "poky-lsb",
+"BBTARGETS" : "wic-tools core-image-lsb-sdk",
+"EXTRACMDS" : [
+"wic create directdisk -e core-image-lsb-sdk -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/",
+"wic create directdisk-gpt -e core-image-lsb-sdk -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/",
+"wic create mkefdisk -e core-image-lsb-sdk -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86-64/directdisk/core-image-lsb-sdk/"
+],
+"extravars" : [
+"IMAGES_FSTYPES += ' wic'"
+],
+"SANITYTARGETS" : "core-image-lsb-sdk:do_testimage"
+},
+"step3" : {
+"MACHINE" : "qemux86",
 "BBTARGETS" : "wic-tools core-image-sato",
 "EXTRACMDS" : [
 "wic create directdisk -e core-image-sato -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-sato/",
@@ -390,7 +420,7 @@
 "wic create mkefidisk -e core-image-sato -o 
${BUILDDIR}/tmp/deploy/wic_images/qemux86/directdisk/core-image-sato/"
 ]
 },
-"step2" : {
+"step4" : {
 "MACHINE" : "genericx86",
 "BBTARGETS" : "wic-tools core-image-sato",
 "EXTRACMDS" : [
--
2.7.4



Re: [yocto] [PATCH] [yocto-ab-helper] Fix syntax load config.json clobber buildStep

2018-07-02 Thread Chan, Aaron Chun Yew
Hi Richard,

I added a patch fixing this at
https://lists.yoctoproject.org/pipermail/yocto/2018-July/041644.html

This resolves the handling of unicode strings during loadconfig(), apart from
dictionary data expansions.
Let me know what your thoughts are on this. Thanks.

Cheers,
Aaron

From: richard.pur...@linuxfoundation.org [richard.pur...@linuxfoundation.org]
Sent: Tuesday, June 26, 2018 6:20 PM
To: Chan, Aaron Chun Yew; yocto@yoctoproject.org
Subject: Re: [PATCH] [yocto-ab-helper] Fix syntax load config.json clobber 
buildStep

On Tue, 2018-06-26 at 15:05 +0800, Aaron Chan wrote:
> Signed-off-by: Aaron Chan 
> ---
>  config.json| 5 ++---
>  janitor/clobberdir | 3 +--
>  2 files changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/config.json b/config.json
> index ecfca51..c9dc21e 100644
> --- a/config.json
> +++ b/config.json
> @@ -8,15 +8,14 @@
>  "BUILD_HISTORY_DIRECTPUSH" : ["poky:morty", "poky:pyro",
> "poky:rocko", "poky:master"],
>  "BUILD_HISTORY_FORKPUSH" : {"poky-contrib:ross/mut" :
> "poky:master", "poky:master-next" : "poky:master"},
>
> -"REPO_STASH_DIR" : "${BASE_HOMEDIR}/git/mirror",
> -"TRASH_DIR" : "${BASE_HOMEDIR}/git/trash",
> +"REPO_STASH_DIR" : "/git/mirror",
> +"TRASH_DIR" : "/git/trash",
>
>  "QAMAIL_TO" : "richard.pur...@linuxfoundation.org",
>  "QAMAIL_TO1" : "yocto@yoctoproject.org",
>  "QAMAIL_CC1" : "pi...@toganlabs.com, ota...@ossystems.com.br, yi
> .z...@windriver.com, tracy.gray...@intel.com, joshua.g.l...@intel.com
> , apoorv.san...@intel.com, ee.peng.y...@intel.com, aaron.chun.yew.cha
> n...@intel.com, rebecca.swee.fun.ch...@intel.com, chin.huat@intel.co
> m",
>  "WEBPUBLISH_DIR" : "${BASE_SHAREDDIR}/",
>  "WEBPUBLISH_URL" : "https://autobuilder.yocto.io/";,
> -
>  "defaults" : {
>  "NEEDREPOS" : ["poky"],
>  "DISTRO" : "poky",
> diff --git a/janitor/clobberdir b/janitor/clobberdir
> index 5dab5af..73ec87c 100755
> --- a/janitor/clobberdir
> +++ b/janitor/clobberdir
> @@ -19,7 +19,6 @@ import utils
>
>  ourconfig = utils.loadconfig()
>
> -
>  def mkdir(path):
>  try:
>  os.makedirs(path)
> @@ -43,7 +42,7 @@ if "TRASH_DIR" not in ourconfig:
>  print("Please set TRASH_DIR in the configuration file")
>  sys.exit(1)
>
> -trashdir = ourconfig["TRASH_DIR"]
> +trashdir = ourconfig["BASE_HOMEDIR"] + ourconfig["TRASH_DIR"]

You are correct there is a bug here but reverting my change to
config.json is not the way to fix this.

Have a look at this commit:

http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/commit/?id=d6253df2bc21752bc0b53202e491140b0994ff63

and then see if you can send me the patch which fixes this in line with
the commit above.

Cheers,

Richard
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-03 Thread Chan, Aaron Chun Yew
Hello Richard,

This morning I set up the new autobuilder from scratch with the latest patch you
just checked in. Thanks for that; I rolled out the fix below manually.
Everything else is clean and I did what you asked me to.
However, this did not resolve the data expansion when calling
utils.getconfig("REPO_STASH_DIR", ourconfig); the same applies when you invoke
ourconfig["REPO_STASH_DIR"]. Both yield the same errors.
We assumed the JSON dumps are properly handled in ourconfig[c] when we handle
config[c], but that is not the case. I do see a growing issue with the strategy
of using nested JSON; we won't be able to handle all of the conditions needed
when the nested JSON becomes complex. Anyway, I'll leave it to you to decide
the best course of action.

STDERR logs on autobuilder: (poky-tiny)
--
mv: cannot move '/home/pokybuild/yocto-worker/poky-tiny/' to a subdirectory of 
itself, '${BASE_HOMEDIR}/git/mirror/1530669213-56172/poky-tiny'
Traceback (most recent call last):
  File "/home/pokybuild/yocto-autobuilder-helper/janitor/clobberdir", line 52, 
in 
subprocess.check_call(['mv', x, trashdest])
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['mv', 
'/home/pokybuild/yocto-worker/poky-tiny/', 
u'${BASE_HOMEDIR}/git/mirror/1530669213-56172']' returned non-zero exit status 1

Also, this action causes a directory named ${BASE_HOMEDIR} to be created under
~/yocto-worker/poky-tiny/build.

This patch
[https://lists.yoctoproject.org/pipermail/yocto/2018-July/041685.html], which I
submitted today, resolves "Step 1: Clobber build dir" on the autobuilder.

Best wishes,
Aaron

From: richard.pur...@linuxfoundation.org [richard.pur...@linuxfoundation.org]
Sent: Tuesday, July 03, 2018 9:25 PM
To: Chan, Aaron Chun Yew; yocto@yoctoproject.org
Subject: Re: [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

On Tue, 2018-07-03 at 09:44 +0800, Aaron Chan wrote:
> Updated patch to trigger handlestr() when unicode string is found
> during iteration json.loads(config.json). Unicode and list with data
> expansion were not handled hence adding this patch to handle
> conversion.
> Added a debug message to dump pretty json data populated to
> ourconfig[c].
>
> e.g "REPO_STASH_DIR" read as ${BASE_HOMEDIR}/git/mirror, where it
> should be
> "REPO_STASH_DIR" as /home/pokybuild/git/mirror
>
> Signed-off-by: Aaron Chan 
> ---
>  scripts/utils.py | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)

It took me a while to figure out why you were doing this.

We can't expand the data half way through loading the json file as
other pieces of data may later override the values. We therefore have
to defer expansion of variables until the file is completely loaded.

We therefore have to expand the variables later on, when we read them.

I pointed you at this commit:

http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/commit/?id=d6253df2bc21752bc0b53202e491140b0994ff63

which changes direct accesses into ourconfig, e.g.:

ourconfig["REPO_STASH_DIR"]

into accesses using a function:

utils.getconfig("REPO_STASH_DIR", ourconfig)

and that function handles the expansion.

You should therefore be able to fix the clobberdir issue by using the
getconfig() method instead of direct access?
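To picture the difference, here is a hypothetical, simplified sketch of what a
getconfig()-style helper does (the real implementation lives in
yocto-autobuilder-helper's scripts/utils.py and may differ): the ${VAR}
placeholders are only expanded at read time, after the whole JSON config has
been loaded.

```python
import re

def getconfig(name, config):
    """Hypothetical sketch of utils.getconfig(): look up a value and
    expand any ${VAR} references against other config entries at read
    time, after the whole config file has been loaded."""
    value = config[name]
    if isinstance(value, str):
        # Replace each ${X} with the (recursively expanded) value of config["X"]
        for var in re.findall(r"\$\{(\w+)\}", value):
            value = value.replace("${%s}" % var, getconfig(var, config))
    return value

ourconfig = {
    "BASE_HOMEDIR": "/home/pokybuild",
    "TRASH_DIR": "${BASE_HOMEDIR}/git/trash",
}

# Direct access returns the raw, unexpanded value; getconfig() expands it.
print(ourconfig["TRASH_DIR"])             # ${BASE_HOMEDIR}/git/trash
print(getconfig("TRASH_DIR", ourconfig))  # /home/pokybuild/git/trash
```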

Cheers,

Richard




Re: [yocto] [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

2018-07-04 Thread Chan, Aaron Chun Yew
Hi Richard,

This is what I did, as you asked:

~/yocto-autobuilder-helper/janitor/clobberdir

1. Change to use python3:
#!/usr/bin/env python2 to #!/usr/bin/env python3

Retain the direct access and add a print as a debug message:
trashdir = ourconfig["TRASH_DIR"]
print(trashdir)

Results:
mv: cannot move '/home/pokybuild/yocto-worker/nightly-x86-64-bsp/' to a 
subdirectory of itself, 
'${BASE_HOMEDIR}/git/trash/1530767509-20564/nightly-x86-64-bsp'
${BASE_HOMEDIR}/git/trash   <--- debug message
Traceback (most recent call last):
  File "/home/pokybuild/yocto-autobuilder-helper/janitor/clobberdir", line 54, 
in 
subprocess.check_call(['mv', x, trashdest])
  File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['mv', 
'/home/pokybuild/yocto-worker/nightly-x86-64-bsp/', 
'${BASE_HOMEDIR}/git/trash/1530767509-20564']' returned non-zero exit status 1
program finished with exit code 1

2. Change to use python3:
#!/usr/bin/env python2 to #!/usr/bin/env python3

Change to getconfig() and add a print as a debug message:
trashdir = utils.getconfig("TRASH_DIR", ourconfig)
print(trashdir)

Results:
using PTY: False
/home/pokybuild/git/trash  <--- debug message
program finished with exit code 0
elapsedTime=0.050781

I'll update the patch per Item 2 in
https://lists.yoctoproject.org/pipermail/yocto/2018-July/041694.html
Thanks,

Best wishes,
Aaron

From: richard.pur...@linuxfoundation.org [richard.pur...@linuxfoundation.org]
Sent: Wednesday, July 04, 2018 3:51 PM
To: Chan, Aaron Chun Yew; yocto@yoctoproject.org
Subject: Re: [PATCH] [yocto-ab-helper] utils.py: Resolved unicode data expansion

Hi Aaron,

On Wed, 2018-07-04 at 06:42 +, Chan, Aaron Chun Yew wrote:
> This morning I set up the new autobuilder from scratch with the
> latest patch you just checked in. Thanks for that; I rolled out the
> fix below manually. Everything else is clean and I did what you asked
> me to.
> However, this did not resolve the data expansion when calling
> utils.getconfig("REPO_STASH_DIR", ourconfig); the same applies when
> you invoke ourconfig["REPO_STASH_DIR"]. Both yield the same errors.
> We assumed the JSON dumps are properly handled in ourconfig[c] when
> we handle config[c], but that is not the case. I do see a growing
> issue with the strategy of using nested JSON; however,
> we won't be able to handle all of the conditions needed when the
> nested JSON becomes complex. Anyway, I'll leave it to you to decide
> the best course of action.

I had another look at this as we shouldn't be seeing unicode items in
this in python3. I've realised that clobberdir is using python2. If you
change clobberdir to use python3 does this mean the unicode part of the
patch is no longer needed? I think that may be the real problem!
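A quick way to see why the Python 2 vs 3 distinction matters here: under
Python 3, json.loads() decodes every JSON string straight to str, so the
u'${BASE_HOMEDIR}/...' values seen in the tracebacks above simply cannot
appear. A minimal check:

```python
import json

# Under Python 3, every JSON string decodes to str, so the
# u'${BASE_HOMEDIR}/...' unicode values seen in the Python 2
# tracebacks cannot occur; no extra unicode handling is needed.
conf = json.loads('{"TRASH_DIR": "${BASE_HOMEDIR}/git/trash"}')
print(type(conf["TRASH_DIR"]))  # <class 'str'>
```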

Cheers,

Richard


Re: [yocto] [PATCH] [yocto-ab-helper] clobberdir: Fix Unicode data expansion with utils API

2018-07-05 Thread Chan, Aaron Chun Yew
My apologies for that; my local copy contained the fixes from the previous
commits. This commit was therefore built on top of them and only contains the
delta of the current changes, which is why it was not complete.

Thanks again for the merge.

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Thursday, July 5, 2018 5:54 PM
To: Chan, Aaron Chun Yew ; yocto@yoctoproject.org
Subject: Re: [PATCH] [yocto-ab-helper] clobberdir: Fix Unicode data expansion 
with utils API

On Thu, 2018-07-05 at 13:34 +0800, Aaron Chan wrote:
> This fix is to move clobberdir from python2 to python3 to resolve 
> unicode data in python2 and change the data extraction expansion from 
> ourconfig["TRASH_DIR"] to utils.getconfig("TRASH_DIR", ourconfig) on 
> "Clobber build dir"
> BuildStep
> 
> Signed-off-by: Aaron Chan 
> ---
>  janitor/clobberdir | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/janitor/clobberdir b/janitor/clobberdir index 
> 5e04ed7..b05a876 100755
> --- a/janitor/clobberdir
> +++ b/janitor/clobberdir
> @@ -1,4 +1,4 @@
> -#!/usr/bin/env python2
> +#!/usr/bin/env python3
>  #
>  # Delete a directory using the ionice backgrounded command  #

At this point I think we're all getting frustrated with this. Please can you 
give patches a sanity check when you're sending them. You mention in the commit 
message what you need to do but the getconfig() change is missing from the 
patch itself. This has happened with several previous patches too where there 
were pieces missing. I deal with a lot of patches and I can't fix up each one.

The commit message mentions it's fixing something, but not what (a regression
introduced by a previous change).

In the interests of resolving this I've fixed up this commit and merged
it:

http://git.yoctoproject.org/cgit.cgi/yocto-autobuilder-helper/commit/?id=54f848380fc77a9b9523bd85cd1cdce075bfb96e

Cheers,

Richard


Re: [yocto] [PATCH] [yocto-ab-helper] scripts/run-jinja-parser: Add Jinja2 parser extension in autobuilder

2018-07-06 Thread Chan, Aaron Chun Yew
Hi Richard,

Let me quickly explain what I intended this script to do.

1. I don't have much lava knowledge but at a quick glance this may be ok, apart 
from one of the file names which I've commented on below.

[Ans]: Basically, this Jinja template is mapped to the lava.py module, where we
define the job configuration for several architectures (ARM, MIPS, PPC, x86).
Maybe we can rename lava/device/bsp-packages.jinja2 to
"lava-yaml-template.jinja2", since this is the YAML template for LAVA? I'm open
to all suggestions.

2. I'm not sure we want to call this "lava.py". Would something like 
"lava-conf-intelqa-minnowboard.py" be a better description (as that is what it 
appears to be)?

[Ans]: So I've put the LAVA job config into lava.py; the concept is very
similar to our yoctoabb (config.py).
If we change the name from lava.py to lava-conf-intelqa-minnowboard.py,
in the code we need to change "import lava" to
"import lava-conf-intelqa-minnow".
I am fine with the rename, but it will just be long.

3. Should configuration files be places somewhere outside scripts?

[Ans]: So, basically my script is able to load the module/config from outside
(in any file structure), and I have tested it as in the examples below:
$ run-jinja-parser  $2 $3 $4 $5

e.g. It works in this case:
~/yocto-autobuilder/lava.py
$ run-jinja-parser "~/yocto-autobuilder-helper" $2 $3 $4 $5

Or 
~/yocto-autobuilder/scripts/lava.py
$ run-jinja-parser "~/yocto-autobuilder-helper/scripts" $2 $3 $4 $5

Let me know your thoughts about these. Once these changes are added, the
community will benefit from using the Yocto autobuilder to trigger their
automated BSP tests (using LAVA) on their own platforms, starting with
benchmarking their own manual BSP test case(s).
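For readers unfamiliar with the approach under discussion: the core of such a
run-jinja-parser script is just rendering a Jinja2 template with values taken
from a config module. The snippet below is a hypothetical, minimal sketch; the
template text and variable values are illustrative, not the actual
bsp-packages.jinja2 contents.

```python
from jinja2 import Template

# Illustrative fragment of a LAVA job template; the real one is
# lava/device/bsp-packages.jinja2 in the patch.
template = Template(
    "device_type: {{ device_type }}\n"
    "job_name: {{ job_name }}\n"
    "priority: {{ priority }}\n"
)

# These values would normally come from the imported lava.py config module.
job_yaml = template.render(device_type="minnowboard-turbot",
                           job_name="bsp-packages", priority="medium")
print(job_yaml)
```

The rendered YAML would then be submitted to the LAVA server as a job
definition.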

Cheers,
Aaron

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Friday, July 6, 2018 9:30 PM
To: Chan, Aaron Chun Yew ; yocto@yoctoproject.org
Subject: Re: [PATCH] [yocto-ab-helper] scripts/run-jinja-parser: Add Jinja2 
parser extension in autobuilder

On Fri, 2018-07-06 at 17:15 +0800, Aaron Chan wrote:
> This patch is introduced as a feature in 2.6 M2 to support the 
> extension of autobuilder to LAVA (Linaro Automated Validation 
> Architecture).
> run-jinja2-parser loads lava config module and generates LAVA job 
> config in a YAML format before its triggers LAVA server to execute a 
> task.

I don't have much lava knowledge but at a quick glance this may be ok, apart 
from one of the file names which I've commented on below.

> Signed-off-by: Aaron Chan 
> ---
>  lava/device/bsp-packages.jinja2 | 43 ++
>  scripts/lava.py | 76
> 
>  scripts/run-jinja-parser| 97
> +
>  3 files changed, 216 insertions(+)
>  create mode 100644 lava/device/bsp-packages.jinja2
>  create mode 100644 scripts/lava.py
>  create mode 100755 scripts/run-jinja-parser
> 
> diff --git a/lava/device/bsp-packages.jinja2 
> b/lava/device/bsp-packages.jinja2 new file mode 100644 index 
> 000..61fbcad
> --- /dev/null
> +++ b/lava/device/bsp-packages.jinja2
> @@ -0,0 +1,43 @@
> +device_type: {{ device_type }}
> +job_name: {{ job_name }}
> +timeouts: 
> +  job:
> +minutes: {{ timeout.job.minutes }}
> +  action:
> +minutes: {{ timeout.action.minutes }}
> +  connection:
> +minutes: {{ timeout.connection.minutes }}
> +priority: {{ priority }}
> +visibility: {{ visibility }}
> +actions:
> +- deploy:
> +timeout:
> +  minutes: {{ deploy.timeout }}
> +to: {{ deploy.to }}
> +kernel:
> +  url: {{ deploy.kernel.url }}
> +  type: {{ deploy.kernel.type }}
> +modules:
> +  url: {{ deploy.modules.url }}
> +  compression: {{ deploy.modules.compression }}
> +nfsrootfs:
> +  url: {{ deploy.nfsrootfs.url }}
> +  compression: {{ deploy.nfsrootfs.compression }}
> +os: {{ deploy.os }}
> +- boot:
> +timeout:
> +  minutes: {{ boot.timeout }}
> +method: {{ boot.method }}
> +commands: {{ boot.commands }}
> +auto_login: { login_prompt: {{ boot.auto_login.login_prompt }},
> username: {{ boot.auto_login.username }} }
> +prompts:
> +  - {{ boot.prompts }}
> +- test:
> +timeout:
> +  minutes: {{ test.timeout }}
> +name: {{ test.name }}
> +definitions:
> +- repository: {{ test.definitions.repository }}
> +  from: {{ test.definitions.from }}
> +  path: {{ test.definitions.path }}
> +  name: {{ test.definitions.name }}
> diff --git a/scripts/lava.py b/scripts/lava.py new file mode 100644 

Re: [yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults autobuilder URL based on FQDN

2018-07-10 Thread Chan, Aaron Chun Yew
My name is Aaron, not Aron, for a start.

Martin,

Please try this

#!/usr/bin/env python2

import os

a = os.path.join('http://', 'alibaba.com')
b = '/'.join(['http://', 'alibaba.com'])
c = '/'.join(['http:/', 'alibaba.com', ''])

print(a)
print(b)
print(c)

and repeat the same for python3, I got the following results:

http://alibaba.com
http:///alibaba.com
http://alibaba.com/
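The first result follows from os.path.join() treating 'http://' as a component
that already ends in a separator, so it happens to concatenate cleanly in this
one case, while '/'.join() inserts its own separator. Plain string
concatenation (as Martin later suggests) sidesteps the path semantics entirely;
a sketch, where web_port is an assumed stand-in for config.web_port in
master.cfg:

```python
import socket

web_port = 8010  # assumed stand-in for config.web_port in master.cfg

# Plain concatenation avoids os.path.join()'s path-separator
# semantics, which are defined for filesystem paths, not URLs.
url = 'http://' + socket.getfqdn() + ':' + str(web_port) + '/'
print(url)
```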

Cheers,
Aaron

From: Martin Hundebøll [mar...@geanix.com]
Sent: Tuesday, July 10, 2018 2:10 PM
To: Chan, Aaron Chun Yew; richard.pur...@linuxfoundation.org; 
yocto@yoctoproject.org; Burton, Ross; Eggleton, Paul
Subject: Re: [yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults 
autobuilder URL based on FQDN

Hi Aron,

On 2018-07-10 05:18, Aaron Chan wrote:
> This patch is to enable auto-assignments buildbot URL based on Hosts FQDN.
> The socket module allows the retrieval on FQDN and constructs the entire
> URL by default, this default settings can be overwritten in c['buildbotURL']
> based on local administrator preferences.
>
> Signed-off-by: Aaron Chan 
> ---
>   master.cfg | 7 +--
>   1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/master.cfg b/master.cfg
> index fca80d2..49ddeb4 100644
> --- a/master.cfg
> +++ b/master.cfg
> @@ -4,6 +4,7 @@
>   import os
>   import imp
>   import pkg_resources
> +import socket
>
>   from buildbot.plugins import *
>   from buildbot.plugins import db
> @@ -55,6 +56,7 @@ imp.reload(services)
>   imp.reload(www)
>
>   c = BuildmasterConfig = {}
> +url = os.path.join('http://', socket.getfqdn() + ':' + str(config.web_port) 
> + '/')

Why use `os.path.join()` here? It isn't supposed to be used to construct
url's, and is overkill for this case, and you'd end up with "http:///...".

// Martin

>
>   # Disable usage reporting
>   c['buildbotNetUsageData'] = None
> @@ -76,6 +78,7 @@ c['www'] = www.www
>   c['workers'] = workers.workers
>
>   c['title'] = "Yocto Autobuilder"
> -c['titleURL'] = "https://autobuilder.yoctoproject.org/main/"
> +c['titleURL'] = url
>   # visible location for internal web server
> -c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
> +# - Default c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
> +c['buildbotURL'] = url
>

--
Kind regards,
Martin Hundebøll
Embedded Linux Consultant

+45 61 65 54 61
mar...@geanix.com

Geanix IVS
DK39600706


Re: [yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults autobuilder URL based on FQDN

2018-07-10 Thread Chan, Aaron Chun Yew
Hi Martin,

My initial concern was that `os.path.join()` was meant for OS
independent path concatenation, and not constructing URLs.

My two first points still stand, though... Why not just
string-concatenate the separate parts; i.e.

url = 'http://' + socket.getfqdn() + ':' + str(config.web_port) + '/'

[Reply] You have a valid point; however, in this case os.path.join() doesn't
construct a proper URL. It's a bug.
I will send a new patch for this one. Thanks for reviewing it.

Cheers,
Aaron

From: Martin Hundebøll [mar...@geanix.com]
Sent: Tuesday, July 10, 2018 3:14 PM
To: Chan, Aaron Chun Yew; richard.pur...@linuxfoundation.org; 
yocto@yoctoproject.org; Burton, Ross; Eggleton, Paul
Subject: Re: [yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults 
autobuilder URL based on FQDN

Hi Aaron,

On 2018-07-10 08:59, Chan, Aaron Chun Yew wrote:
> My name is Aaron and not Aron for start

Sorry about that.

> Martin,
>
> Please try this
>
> #!/usr/bin/env python2
>
> import os
>
> a = os.path.join('http://', 'alibaba.com')
> b = '/'.join(['http://', 'alibaba.com'])
> c = '/'.join(['http:/', 'alibaba.com', ''])
>
> print(a)
> print(b)
> print(c)
>
> and repeat the same for python3, I got the following results:
>
> http://alibaba.com
> http:///alibaba.com
> http://alibaba.com/

I see, thanks for clarifying.

My initial concern was that `os.path.join()` was meant for OS
independent path concatenation, and not constructing URLs.

My two first points still stand, though... Why not just
string-concatenate the separate parts; i.e.

url = 'http://' + socket.getfqdn() + ':' + str(config.web_port) + '/'

But I won't stand in the way of the patch for such a stylish point, so
feel free to ignore it :)

// Martin

> Cheers,
> Aaron
> 
> From: Martin Hundebøll [mar...@geanix.com]
> Sent: Tuesday, July 10, 2018 2:10 PM
> To: Chan, Aaron Chun Yew; richard.pur...@linuxfoundation.org; 
> yocto@yoctoproject.org; Burton, Ross; Eggleton, Paul
> Subject: Re: [yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults 
> autobuilder URL based on FQDN
>
> Hi Aron,
>
> On 2018-07-10 05:18, Aaron Chan wrote:
>> This patch is to enable auto-assignments buildbot URL based on Hosts FQDN.
>> The socket module allows the retrieval on FQDN and constructs the entire
>> URL by default, this default settings can be overwritten in c['buildbotURL']
>> based on local administrator preferences.
>>
>> Signed-off-by: Aaron Chan 
>> ---
>>master.cfg | 7 +--
>>1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/master.cfg b/master.cfg
>> index fca80d2..49ddeb4 100644
>> --- a/master.cfg
>> +++ b/master.cfg
>> @@ -4,6 +4,7 @@
>>import os
>>import imp
>>import pkg_resources
>> +import socket
>>
>>from buildbot.plugins import *
>>from buildbot.plugins import db
>> @@ -55,6 +56,7 @@ imp.reload(services)
>>imp.reload(www)
>>
>>c = BuildmasterConfig = {}
>> +url = os.path.join('http://', socket.getfqdn() + ':' + str(config.web_port) 
>> + '/')
>
> Why use `os.path.join()` here? It isn't supposed to be used to construct
> url's, and is overkill for this case, and you'd end up with "http:///...".
>
> // Martin
>
>>
>># Disable usage reporting
>>c['buildbotNetUsageData'] = None
>> @@ -76,6 +78,7 @@ c['www'] = www.www
>>c['workers'] = workers.workers
>>
>>c['title'] = "Yocto Autobuilder"
>> -c['titleURL'] = "https://autobuilder.yoctoproject.org/main/"
>> +c['titleURL'] = url
>># visible location for internal web server
>> -c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
>> +# - Default c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
>> +c['buildbotURL'] = url
>>
>
> --
> Kind regards,
> Martin Hundebøll
> Embedded Linux Consultant
>
> +45 61 65 54 61
> mar...@geanix.com
>
> Geanix IVS
> DK39600706
>

--
Kind regards,
Martin Hundebøll
Embedded Linux Consultant

+45 61 65 54 61
mar...@geanix.com

Geanix IVS
DK39600706


Re: [yocto] [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & yocto_console_view

2018-07-10 Thread Chan, Aaron Chun Yew
Hi Richard,

[Richard] I think this means you're using python2 and we really should be using
python3 as I don't want to support both...

[Reply] This error/bug was found during buildbot startup, meaning it is out of
my control.
Maybe you have a fix for this; otherwise I suggest adding an
__init__.py in the places where we need to source custom modules.

$ buildbot start ~/yocto-controller
$ tail -n 100 ~/yocto-controller/twistd.log

  5800  2018-07-10 18:42:13+0800 [-] Main loop terminated.
  5801  2018-07-10 18:42:13+0800 [-] Server Shut Down.
  5802  2018-07-10 18:42:27+0800 [-] Loading buildbot.tac...
  5803  2018-07-10 18:42:27+0800 [-] Loaded.
  5804  2018-07-10 18:42:27+0800 [-] twistd 18.4.0 (/usr/bin/python 2.7.13) 
starting up.
  5805  2018-07-10 18:42:27+0800 [-] reactor class: 
twisted.internet.epollreactor.EPollReactor.
  5806  2018-07-10 18:42:27+0800 [-] Starting BuildMaster -- buildbot.version: 
1.2.0
  5807  2018-07-10 18:42:27+0800 [-] Loading configuration from 
'/home/ab/yocto-controller/master.cfg'
  5808  2018-07-10 18:42:27+0800 [-] error while parsing config file:
  5809  Traceback (most recent call last):
  5810File 
"/home/ab/.local/lib/python2.7/site-packages/twisted/python/threadpool.py", 
line 266, in 
  5811  inContext.theWork = lambda: context.call(ctx, func, *args, 
**kw)
  5812File 
"/home/ab/.local/lib/python2.7/site-packages/twisted/python/context.py", line 
122, in callWithContext
  5813  return self.currentContext().callWithContext(ctx, func, 
*args, **kw)
  5814File 
"/home/ab/.local/lib/python2.7/site-packages/twisted/python/context.py", line 
85, in callWithContext
  5815  return func(*args,**kw)
  5816File 
"/home/ab/.local/lib/python2.7/site-packages/buildbot/config.py", line 182, in 
loadConfig
  5817  self.basedir, self.configFileName)
  5818  ---  ---
  5819File 
"/home/ab/.local/lib/python2.7/site-packages/buildbot/config.py", line 140, in 
loadConfigDict
  5820  execfile(filename, localDict)
  5821File 
"/home/ab/.local/lib/python2.7/site-packages/twisted/python/compat.py", line 
246, in execfile
  5822  exec(code, globals, locals)
  5823File "/home/ab/yocto-controller/master.cfg", line 11, in 

  5824  from yoctoabb import builders, config, schedulers, workers, 
services, www
  5825  exceptions.ImportError: No module named yoctoabb
  5826  
  5827  2018-07-10 18:42:27+0800 [-] Configuration Errors:
  5828  2018-07-10 18:42:27+0800 [-]   error while parsing config file: No 
module named yoctoabb (traceback in logfile)
  5829  2018-07-10 18:42:27+0800 [-] Halting master.
  5830  2018-07-10 18:42:27+0800 [-] BuildMaster startup failed
  5831  2018-07-10 18:42:27+0800 [-] BuildMaster is stopped
  5832  2018-07-10 18:42:27+0800 [-] Main loop terminated.
  5833  2018-07-10 18:42:27+0800 [-] Server Shut Down.

Cheers,
Aaron

From: richard.pur...@linuxfoundation.org [richard.pur...@linuxfoundation.org]
Sent: Tuesday, July 10, 2018 4:40 PM
To: Chan, Aaron Chun Yew; yocto@yoctoproject.org; Burton, Ross; Eggleton, Paul
Subject: Re: [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & 
yocto_console_view

On Tue, 2018-07-10 at 11:24 +0800, Aaron Chan wrote:
> This patch is to fix the inconsistency in loading custom module
> yoctoabb & yocto_console_view during Buildbot Master startup.
>
> Signed-off-by: Aaron Chan 
> ---
>  __init__.py| 0
>  yocto_console_view/__init__.py | 0
>  2 files changed, 0 insertions(+), 0 deletions(-)
>  create mode 100644 __init__.py
>  create mode 100644 yocto_console_view/__init__.py

I think this means you're using python2 and we really should be using
python3 as I don't want to support both...

Cheers,

Richard


Re: [yocto] [PATCH] [yocto-autobuilder] master.cfg: Defaults autobuilder URL based on FQDN

2018-07-10 Thread Chan, Aaron Chun Yew
Hi Richard,

[Richard] I appreciate what you're trying to do here and make this 
autoconfigure.
 Unfortunately the urls can't always be figured out this way, the above
for example drops the "/main/" part, without which the autobuilder
won't work. The server can be behind forwarding or proxies/firewalls
which also make this problematic.

I also agree with Martin that using os.path.join() in a url is
incorrect, its not meant for urls.

Most people setting up an autobuilder will need to have some
customisations on top of yocto-autobuilder2, I don't think its possible
to avoid that. I'd therefore perhaps try and concentrate on having key
modules like the lava integration available and accept there will
always be some local configuration the end user needs to make to things
like the URL details? Even the main autobuilder adds in passwords and
some other local config on top of the standard repo...

[Reply] I do agree with you that some customization is required to be built on
top of the autobuilder.
However, this patch was submitted because the URL below was not
working on my end, and
only http://localhost:8010/ was working without the need to change
master.cfg.
Therefore, after several rounds of discussion, I decided to send a
patch which defaults the
URL to the FQDN set on the host machine.

c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"

c['buildbotURL'] = "http://localhost:8010/"

The intention of this patch is to reduce the need for local configuration. Yes,
I do agree that there will
be some level of customization needed in a local setup. I'll leave it to you
then to determine what's best.

Cheers,
Aaron


From: richard.pur...@linuxfoundation.org [richard.pur...@linuxfoundation.org]
Sent: Tuesday, July 10, 2018 4:55 PM
To: Chan, Aaron Chun Yew; yocto@yoctoproject.org; Burton, Ross; Eggleton, Paul
Subject: Re: [PATCH] [yocto-autobuilder] master.cfg: Defaults autobuilder URL 
based on FQDN

On Tue, 2018-07-10 at 11:18 +0800, Aaron Chan wrote:
> This patch is to enable auto-assignments buildbot URL based on Hosts FQDN.
> The socket module allows the retrieval on FQDN and constructs the entire
> URL by default, this default settings can be overwritten in c['buildbotURL']
> based on local administrator preferences.
>
> Signed-off-by: Aaron Chan 
> ---
>  master.cfg | 7 +--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/master.cfg b/master.cfg
> index fca80d2..49ddeb4 100644
> --- a/master.cfg
> +++ b/master.cfg
> @@ -4,6 +4,7 @@
>  import os
>  import imp
>  import pkg_resources
> +import socket
>
>  from buildbot.plugins import *
>  from buildbot.plugins import db
> @@ -55,6 +56,7 @@ imp.reload(services)
>  imp.reload(www)
>
>  c = BuildmasterConfig = {}
> +url = os.path.join('http://', socket.getfqdn() + ':' + str(config.web_port) 
> + '/')
>
>  # Disable usage reporting
>  c['buildbotNetUsageData'] = None
> @@ -76,6 +78,7 @@ c['www'] = www.www
>  c['workers'] = workers.workers
>
>  c['title'] = "Yocto Autobuilder"
> -c['titleURL'] = "https://autobuilder.yoctoproject.org/main/"
> +c['titleURL'] = url
>  # visible location for internal web server
> -c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
> +# - Default c['buildbotURL'] = "https://autobuilder.yoctoproject.org/main/"
> +c['buildbotURL'] = url

I appreciate what you're trying to do here and make this autoconfigure.
 Unfortunately the urls can't always be figured out this way, the above
for example drops the "/main/" part, without which the autobuilder
won't work. The server can be behind forwarding or proxies/firewalls
which also make this problematic.

I also agree with Martin that using os.path.join() in a url is
incorrect, its not meant for urls.

Most people setting up an autobuilder will need to have some
customisations on top of yocto-autobuilder2, I don't think its possible
to avoid that. I'd therefore perhaps try and concentrate on having key
modules like the lava integration available and accept there will
always be some local configuration the end user needs to make to things
like the URL details? Even the main autobuilder adds in passwords and
some other local config on top of the standard repo...

Cheers,

Richard



Re: [yocto] [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & yocto_console_view

2018-07-10 Thread Chan, Aaron Chun Yew
Hi Richard,

It appears that you're using virtualenv (switched to python3) to start up your
buildbot processes, and not the default python from the host distribution.
I am aware that you can switch the interpreter to python2/3 in virtualenv.
Shouldn't we handle both, in the event users use
the default host distribution where python2 is the default, and still make it
work? What
is the best practice here, and should all the buildbot workers also start in a
virtualenv defaulting to python3? I am a little lost right here; please
advise.

Cheers,
Aaron

Try using virtualenv:

$ python --version
Python 2.7.15rc1

$ virtualenv -p python3 test
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /media/build1/test/bin/python3
Also creating executable in /media/build1/test/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.

$ . ./test/bin/activate

(test) richard@dax:/media/build1/$ python --version
Python 3.6.5


If you start buildbot within the virtualenv, you should see it using python3.

Cheers,

Richard

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Wednesday, July 11, 2018 6:17 AM
To: Chan, Aaron Chun Yew ; 
yocto@yoctoproject.org; Burton, Ross ; Eggleton, Paul 

Subject: Re: [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & 
yocto_console_view

On Tue, 2018-07-10 at 10:47 +0000, Chan, Aaron Chun Yew wrote:
> [Richard] I think this means you're using python2 and we really should
> be using
> python3 as I don't want to support both...
> 
> [Reply] This error/bug was found during buildbot startup meaning this 
> is out of my control.

No, it is not out of your control.

> Maybe you have a fix for this, otherwise I do suggest to 
> add in __init__.py in places where we need to source custom modules.
> 
> $ buildbot start ~/yocto-controller
> $ cat -n 100 ~/yocto-controller/twistd.log

Try using virtualenv:

$ python --version
Python 2.7.15rc1

$ virtualenv -p python3 test
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /media/build1/test/bin/python3
Also creating executable in /media/build1/test/bin/python
Installing setuptools, pkg_resources, pip, wheel...done.

$ . ./test/bin/activate

(test) richard@dax:/media/build1/$ python --version
Python 3.6.5


If you start buildbot within the virtualenv, you should see it using python3.

Cheers,

Richard


Re: [yocto] [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & yocto_console_view

2018-07-11 Thread Chan, Aaron Chun Yew
Hi Richard,

Thanks for the well-crafted explanation. After talking to Paul I have a
clearer picture of the direction we should be heading: we should align on
and work towards python3. Maybe I was not aware; my bad.

Cheers,
Aaron

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Wednesday, July 11, 2018 3:12 PM
To: Chan, Aaron Chun Yew ; 
yocto@yoctoproject.org; Burton, Ross ; Eggleton, Paul 

Subject: Re: [PATCH] [yocto-autobuilder] init: Fix the import module yoctoabb & 
yocto_console_view

On Wed, 2018-07-11 at 02:23 +, Chan, Aaron Chun Yew wrote:
> It appears that you're using virtualenv (switched to python3) to start
> your buildbot processes rather than the default python from the host
> distribution. I am aware that you can switch the interpreter to
> python2/3 in virtualenv. Shouldn't we handle both cases, so that it
> still works when users keep the host distribution's default python2?
> What are the best practices here, and should all the buildbot workers
> start in a virtualenv defaulting to python3 as well? I am a little
> lost right here, please advise.

There are multiple ways to achieve this: you can make /usr/bin/python point at
python3 too, or simply not install python2 on the controller.
Using virtualenv was just a simple way to illustrate the approach.

My worry about trying to support both python versions is that as you've found 
out, there are problems that can occur in one that don't occur in the other, 
which then means twice as much testing. Since python2 is an official "dead end" 
and as a project we're trying hard to use python3 everywhere, it doesn't seem 
to make sense to try and support python2 on new code such as the autobuilder?

Cheers,

Richard
-- 


Re: [yocto] [RFC] Yocto Autobuilder and LAVA Integration

2018-11-08 Thread Chan, Aaron Chun Yew
Hi Anibal/RP,

> In order to do distributed board testing, the Yocto Autobuilder needs
> to publish, in some accessible place, the artifacts (image, kernel,
> etc.) used to flash/boot the board, plus the test suite expected to
> execute.

[Reply] That is correct. Linaro have this in place with
https://archive.validation.linaro.org/directories/ and I have looked into this
as well; we can leverage it, but I am open to any suggestion you might have.
The idea here is that we have a placeholder to store the published artifacts
remotely, deploy using a native curl command with token access, and then,
based on your LAVA job definitions, instruct LAVA to source the images via
https. Having said that, the deploy stage in LAVA must have some capability to
read a token file in the LAVA job definition and pick up the binaries from a
public repo (git LFS).
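As a sketch of that token-based fetch (the URL, token value, and "Authorization: Token" header scheme below are placeholders, not a real service or an agreed API):

```python
import urllib.request

def fetch_artifact(url, token, dest):
    """Download one published artifact, authenticating with an access token."""
    req = urllib.request.Request(
        url, headers={"Authorization": "Token " + token}
    )
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# Building the request attaches the token without touching the network:
req = urllib.request.Request(
    "https://example.com/artifacts/core-image-sato.wic",
    headers={"Authorization": "Token s3cr3t"},
)
print(req.get_header("Authorization"))  # -> Token s3cr3t
```

The same header would be easy to reproduce from a curl command or from LAVA's deploy stage once it can read the token file.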

In order for distributed board tests to happen, there are 2 items on my
wish list:

1. Public hosting of a binary repository, with access control
2. Easy handshaking between two different CI systems (e.g.
   Jenkins/Autobuilder) and LAVA
   a. Exchange of build properties (metadata), including hardware and
      system info
   b. Test result reporting
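Item 2a could be as simple as publishing a small JSON document next to the artifacts; every field name and URL below is invented purely for illustration:

```python
import json

# Hypothetical build-property handshake: the autobuilder publishes this
# alongside its artifacts, and the remote LAVA lab posts results back to
# the callback URL.
build_metadata = {
    "build_id": "nightly-arm-1234",
    "artifacts": {
        "image": "https://example.com/releases/core-image-sato.wic",
        "kernel": "https://example.com/releases/bzImage",
    },
    "hardware": {"device_type": "beaglebone-black"},
    "results_callback": "https://autobuilder.example.com/lava-results",
}

payload = json.dumps(build_metadata, indent=2, sort_keys=True)
print(json.loads(payload)["hardware"]["device_type"])  # -> beaglebone-black
```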
   
> I created a simple LAVA test definition that allows running testimage
> (oe-test runtime) in my own LAVA LAB; it is very simplistic, only has
> a regex to parse results, and uses lava-target-ip and lava-echo-ipv4
> to get the target and server IP addresses.

[Reply] Although the LAVA test shell has these capabilities
(lava-target-ip and/or lava-echo-ipv4), they only work within LAVA's scope.
The way we retrieve the IPv4 address is by reading the logs from LAVA through
XML-RPC and grepping for the pattern-matching string which contains the IP,
even before the hardware gets initialized entirely, then passing the IP back
to the Yocto Autobuilder.

http://git.yoctoproject.org/cgit/cgit.cgi/yocto-autobuilder-helper/tree/lava/trigger-lava-jobs
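The grep over the XML-RPC log boils down to something like the sketch below; the sample log line is invented, and the real matching lives in the trigger-lava-jobs script linked above:

```python
import re

# Dotted-quad matcher for IPv4 addresses embedded in log lines.
IPV4_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def extract_ipv4(log_text):
    """Return the first IPv4-looking token on an ipv4-related log line."""
    for line in log_text.splitlines():
        if "ipv4" in line.lower():
            match = IPV4_RE.search(line)
            if match:
                return match.group(1)
    return None

sample_log = "running lava-echo-ipv4\ndut ipv4: 192.168.1.42\nresult: pass\n"
print(extract_ipv4(sample_log))  # -> 192.168.1.42
```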

> Some of the tasks I identified (if this is accepted):
>
> - Yocto-autobuilder-helper: Implement/adapt to cover this new
> behavior, move the EXTRA_PLAIN_CMDS to a class.
> - Poky/OE: Review/fix test-export or provide another mechanism to
> export the test suite.

[Reply] I would like to understand further what the implementation is here and
how it addresses the problems that we have today. I believe in the past Tim
tried to enable testexport and transfer the exported tests onto the DUT, but
it was not very successful and we found breakage.

> - Yocto-autobuilder-helper: Create a better approach to re-use LAVA
> job templates across boards.

[Reply] I couldn't be more supportive of having a common LAVA job template
across boards, but I would like to stress this: we don't know exactly how the
community will define their own LAVA job definitions. Therefore what I had in
mind, as of today, is to create a placeholder where LAVA job templates can be
defined and other boards/communities can reuse the same template if it fits
their use cases. In general, the templates we have today are created to fit
Yocto Project use cases.
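A placeholder-driven template along those lines might look like the sketch below; the field names and values are illustrative, not the actual yocto-autobuilder-helper templates:

```python
from string import Template

# One shared LAVA job skeleton; each board substitutes its own fields
# at submit time.
job_template = Template(
    "device_type: $device_type\n"
    "job_name: yocto-testimage-$machine\n"
    "timeouts:\n"
    "  job:\n"
    "    minutes: $minutes\n"
)

job = job_template.substitute(
    device_type="beaglebone-black",
    machine="beaglebone-yocto",
    minutes=60,
)
print(job.splitlines()[0])  # -> device_type: beaglebone-black
```

A board whose needs outgrow the shared skeleton could still ship its own template file alongside it.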

Lastly, there is some work I've done on provisioning QEMU on LAVA, sourcing
images from Yocto Project public releases; I am looking at where we can
upstream this:
https://github.com/lab-github/yoctoproject-lava-test-shell

Thanks!

Cheers,
Aaron Chan
Open Source Technology Center Intel 

-Original Message-
From: richard.pur...@linuxfoundation.org 
[mailto:richard.pur...@linuxfoundation.org] 
Sent: Thursday, November 8, 2018 6:45 AM
To: Anibal Limon ; yocto@yoctoproject.org
Cc: Nicolas Dechesne ; Chan, Aaron Chun Yew 

Subject: Re: [RFC] Yocto Autobuilder and LAVA Integration

Hi Anibal,

On Wed, 2018-11-07 at 16:25 -0600, Anibal Limon wrote:
> We know the need to execute OE testimage on real HW, not only QEMU.
>
> I'm aware that there is currently an implementation in the Yocto
> Autobuilder Helper; this initial implementation looks pretty good,
> separating the parts for template generation [1] and the script to
> send jobs to LAVA [2].
>
> There are some limitations:
>
> - It requires that the boards are accessible through SSH (same
> network?) by the Autobuilder, so no distributed LAB testing.
> - LAVA doesn't know about the test results because the execution is
> injected via SSH.
> 
> In order to do distributed board testing, the Yocto Autobuilder needs
> to publish, in some accessible place, the artifacts (image, kernel,
> etc.) used to flash/boot the board, plus the test suite expected to
> execute.
> 
> Currently there is a func