[yocto] Question regarding kernel modules using outoftree kernel builds (EXTERNALSRC)

2016-08-01 Thread Manjukumar Harthikote Matha

All,

I am unable to build kernel modules when using EXTERNALSRC for the kernel. When a 
recipe inherits module.bbclass, it depends on the kernel shared work directory 
(populated by do_shared_workdir). I do see that SRCTREECOVEREDTASKS in 
kernel-yocto.bbclass explicitly makes sure we skip do_shared_workdir.


Is this intended? What is the expected flow when building out-of-tree 
kernel modules with a kernel that uses EXTERNALSRC?
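
For reference, the kind of configuration I am using is along these lines in 
local.conf (a sketch only; the linux-xlnx recipe name and the paths are just 
examples from my setup):

  INHERIT += "externalsrc"
  EXTERNALSRC_pn-linux-xlnx = "/path/to/external/kernel/source"
  EXTERNALSRC_BUILD_pn-linux-xlnx = "/path/to/external/kernel/build"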


Thanks
Manju

--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] eSDK errors and sstate signature issues

2016-08-15 Thread Manjukumar Harthikote Matha

All,


I am trying to build the Yocto eSDK with the OE-core and meta-xilinx layers.

I am having issues while extracting the eSDK; it warns quite a bit about 
signature mismatches. Can this happen because I am using an external 
toolchain?


It eventually bails out, stating:
ERROR: Unexpected tasks or setscene left over to be executed:

meta-xilinx/recipes-kernel/linux/linux-xlnx_4.6.bb, do_fetch
meta-xilinx/recipes-kernel/linux/linux-xlnx_4.6.bb, do_unpack
meta-xilinx/recipes-kernel/linux/linux-xlnx_4.6.bb, do_kernel_configme


How do I go about investigating why the setscene was corrupted or not 
executed correctly while building the kernel?


Any help is appreciated

Thanks
Manju

--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Yocto Extensible SDK question

2016-08-16 Thread Manjukumar Harthikote Matha

Hi,

Is there a way to exclude specific recipes from the eSDK?

I am facing issues while extracting the SDK; it complains that kernel 
fragments are left over in setscene.


Thanks
Manju
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] eSDK errors and sstate signature issues

2016-08-23 Thread Manjukumar Harthikote Matha

Hi Paul,

On 08/22/2016 09:47 AM, Paul Eggleton wrote:

Hi Manju,

Sorry for the delayed reply.


No worries.

On Mon, 15 Aug 2016 17:43:23 Manjukumar Harthikote Matha wrote:

I am trying to build Yocto eSDK with OE_core and meta-xilinx layers.

Having issues while extracting the eSDK, it warns quite a bit on
signature mismatch. Can this happen due to the fact that I am using
external tool chain??

And eventually bails stating
ERROR: Unexpected tasks or setscene left over to be executed:

meta-xilinx/recipes-kernel/linux/linux-xlnx_4.6.bb, do_fetch
meta-xilinx/recipes-kernel/linux/linux-xlnx_4.6.bb, do_unpack
meta-xilinx/recipes-kernel/linux/linux-xlnx_4.6.bb, do_kernel_configme


How do I go about investigating why set scene was corrupted or not
executed correctly while building kernel?


So it looks like linux-xlnx is attempting to build for some reason and yet we
don't expect it to be because it's locked. It may or may not be related to the
use of an external toolchain - I'm not sure because it's not a scenario I have
tested.

Thanks, YP masters, for providing bitbake-diffsigs. It enabled us to 
identify the distro settings that were incorrect, especially around 
UNINATIVE. After we corrected our distro (meta-petalinux) to use this 
properly, the error disappeared.
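
For anyone hitting similar mismatches, the kind of commands we used were 
roughly the following (the recipe/task and siginfo paths are only 
illustrative):

  # compare the two most recent signatures for a given task
  bitbake-diffsigs -t linux-xlnx do_fetch

  # or compare two specific sigdata/siginfo files from sstate-cache/ or stamps/
  bitbake-diffsigs path/to/old.siginfo path/to/new.siginfo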


We also had to remove some INITRAMFS and IMAGE_FSTYPES settings from the 
machine configuration. We are working on the krogoth branch, but I did 
notice a patch on master that corrects this in populate_sdk_ext.


We are able to build and test the SDK using the external toolchain now, and 
it's super cool :)


Is there a way to make a recipe's tasks architecture-specific instead of 
MACHINE_ARCH? For example, the kernel is packaged as machine-specific; 
however, if do_fetch (and maybe do_patch) could be architecture-specific, 
then multiple machines belonging to the same architecture could reuse it 
from sstate instead of downloading it again.
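
As a partial workaround on our side, we at least share downloads and sstate 
across machine builds with something like this in local.conf (paths are 
placeholders); it does not change the per-task package arch, though:

  DL_DIR = "/shared/downloads"
  SSTATE_DIR = "/shared/sstate-cache"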



Are the layers / toolchain you are using downloadable, i.e. could I reproduce
the issue here?

The layers and toolchain are available, but not for the dev branch we are 
working on. It is going through a legal sweep, hence I am unable to share 
it :(


Thanks
Manju
--
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [Yocto] Prelink compilation for ZCU102

2018-04-17 Thread Manjukumar Harthikote Matha
Hi Nicolas,

It seems there was a "fetcher failure"; can you make sure the git proxy is set 
correctly for your network?
If you are able to clone manually using the clone command found in log.do_fetch 
of the prelink recipe, then bitbake should also be able to clone without 
failures.
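
Roughly, the kind of check I mean (take the exact URL/branch from your own 
log.do_fetch; the ones below are what the error output suggests for prelink):

  # verify the proxy settings git will actually use
  git config --get http.proxy
  echo "$GIT_PROXY_COMMAND $ALL_PROXY"

  # try the clone manually
  git clone -b cross_prelink git://git.yoctoproject.org/prelink-cross.git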

Thanks,
Manju

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Nicolas Salmin
Sent: Tuesday, April 17, 2018 1:55 AM
To: yocto@yoctoproject.org
Subject: [yocto] [Yocto] Prelink compilation for ZCU102

Hello guys,

Does someone here have some information on how to build the prelink recipe for 
the ZCU102 board? I'm not able to do it...

ERROR: prelink-1.0+gitAUTOINC+aa2985eefa-r0 do_unpack: Fetcher failure: Fetch 
command export PSEUDO_DISABLED=1; export 
DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-KTbsI0WgZ2"; export 
SSH_AUTH_SOCK="/run/user/1000/keyring/ssh"; export 
PATH="/private/path/UltraScale/zcu102/yocto/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/private/path/UltraScale/zcu102/yocto/poky/scripts:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot/usr/bin/crossscripts:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/usr/sbin:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/usr/bin:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/sbin:/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/recipe-sysroot-native/bin:/private/path/UltraScale/zcu102/yocto/poky/bitbake/bin:/private/path/UltraScale/zcu102/yocto/build/tmp/hosttools";
 export HOME="/home/nicolas"; git -c core.fsyncobjectfiles=0 checkout -B 
cross_prelink aa2985eefa94625037ad31e9dc5207fd5bf31ca7 failed with exit code 
128, output:
fatal: reference is not a tree: aa2985eefa94625037ad31e9dc5207fd5bf31ca7

ERROR: prelink-1.0+gitAUTOINC+aa2985eefa-r0 do_unpack: Function failed: 
base_do_unpack
ERROR: Logfile of failure stored in: 
/private/path/UltraScale/zcu102/yocto/build/tmp/work/aarch64-poky-linux/prelink/1.0+gitAUTOINC+aa2985eefa-r0/temp/log.do_unpack.85986
ERROR: Task 
(/private/path/UltraScale/zcu102/yocto/poky/meta/recipes-devtools/prelink/prelink_git.bb:do_unpack)
 failed with exit code '1'


I don't know if it's because the target is an aarch64 architecture or not...

Any advice?
Cheers,
Nicolas
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Gratitude

2018-06-06 Thread Manjukumar Harthikote Matha
Thanks for all the work, Jefro.

Thanks,
Manju

> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-
> boun...@yoctoproject.org] On Behalf Of Osier-mixon, Jeffrey
> Sent: Tuesday, June 05, 2018 11:42 PM
> To: yocto@yoctoproject.org
> Subject: [yocto] Gratitude
> 
> I have been the Yocto Project community manager for over 7 years now, and have
> had the pleasure of knowing or conversing individually with several hundred of
> you. It is with mixed feelings that I must announce that I am stepping down 
> from
> my position as the YP community manager and the Advisory Board chair after 7
> years, as I am taking on a new role in my job.
> 
> I am very proud of the progress that the project has made, growing from a 
> small
> set of build tools into an industry standard for building and working with
> embedded Linux-based operating systems, supporting upstream projects including
> the Linux kernel, hosting projects like opkg, and inspiring many very 
> successful
> downstream projects, including AGL and OpenBMC among many others, and also
> supporting countless configurations of hardware among seven different
> architectures. We have also seen the community of users grow from fewer than
> 1000 in 2010 to a large city-sized community, estimated in the tens of 
> thousands
> of developers.
> 
> Please continue to participate, collaborate, and come together as a community!
> The Yocto Project is a success because every one of you participates.
> 
> Jeffrey "Jefro" Osier-Mixon, Intel Corporation Open Source Community Ecosystem
> Strategist
> 
> 
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] liblzma: memory allocation failed

2018-09-16 Thread Manjukumar Harthikote Matha
Hi Peter,

> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Peter Bergin
> Sent: Sunday, September 16, 2018 1:41 PM
> To: yocto@yoctoproject.org
> Subject: [yocto] liblzma: memory allocation failed
> 
> Hi,
> 
> during the task do_package_write_rpm I get the error "liblzma: Memory
> allocation failed". It happens during packaging of binary RPM packages.
> The root cause seems to be the host environment that is used in our
> project. We run our builds on a big server with 32 cores and 256GB of
> physical RAM but each user has a limit of virtual memory usage to 32GB
> (ulimit -v). The packaging in rpm-native has been parallelized in the
> commit
> http://git.yoctoproject.org/cgit/cgit.cgi/poky/commit/meta/recipes-
> devtools/rpm?id=84e0bb8d936f1b9094c9d5a92825e9d22e1bc7e3.
> What seems to happen is that rpm-native put up 32 parallel tasks with
> '#pragma omp', each task is using liblzma that also put up 32 tasks for
> the compression work. The memory calculations in liblzma are based on the
> amount of physical RAM, but as the user is limited by 'ulimit -v' we get
> into an OOM situation in liblzma.
> 
> Here is the code snippet from rpm-native/build/pack.c where it happens:
> 
>     #pragma omp parallel
>     #pragma omp single
>     // re-declaring task variable is necessary, or older gcc versions will produce code that segfaults
>     for (struct binaryPackageTaskData *task = tasks; task != NULL; task = task->next) {
>         if (task != tasks)
>         #pragma omp task
>         {
>             task->result = packageBinary(spec, task->pkg, cookie, cheating, &(task->filename), buildTime, buildHost);
>             rpmlog(RPMLOG_NOTICE, _("Finished binary package job, result %d, filename %s\n"), task->result, task->filename);
>         }
>     }
> 
> 
> Steps to reproduce: set 'ulimit -v' in your shell to, for example,
> 1/8 of the amount of physical RAM and then build, for example,
> glibc-locale. I have tested this with rocko. If the '#pragma omp'
> statements in the code snippet above are removed, the problem is solved. But
> that is not good, as the parallel processing speeds up the process.
> 
> Is the host environment used here, with restricted virtual memory,
> supported by Yocto? If it is, does anyone have a suggestion for a
> solution to this issue?
> 

We have seen this issue as well and concluded that some settings on the server 
were causing it.
See 
http://lists.openembedded.org/pipermail/openembedded-core/2018-January/146705.html
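
In short, the host settings to look at were along these lines (these are the 
values to check; exact numbers will differ per server):

  # virtual memory limit imposed on the build user (in KiB)
  ulimit -v

  # overcommit policy on the build server
  sysctl vm.overcommit_memory vm.overcommit_ratio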

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] bitbake openamp-image-minimal fails to create image

2019-01-07 Thread Manjukumar Harthikote Matha
+meta-xilinx mailing list

From: Pandey, Kamal [mailto:kamal.pan...@ifm.com]
Sent: Monday, January 07, 2019 8:59 AM
To: yocto@yoctoproject.org; Manjukumar Harthikote Matha 
Subject: RE: bitbake openamp-image-minimal fails to create image

Hi,
I solved the problem by using the meta-openamp layer (the branch should match the 
Xilinx version) in my Yocto project and enabling the "libmetal" and "open-amp" 
packages in my image recipe. It compiled successfully. I also added the 
rpmsg-echo-test, rpmsg-mat-mul, and rpmsg-proxy-app packages to the image, and it 
compiled successfully as well. I am building a Linux application that uses RPMsg in 
user space.
Now the executables generated from echo-test and mat-mul are on my host Linux 
master (A53 core). I have also created an R5 application using XSDK.
What is the next step to get communication between the A53 and the R5? How do I use 
the generated ELF file for the R5 processor?
Also, I have enabled remoteproc, rpmsg, and virtio in the kernel configuration.
But when using the command "$ modprobe zynqmp_r5_remoteproc", I get the 
following error:

"modprobe: module zynqmp_r5_remoteproc not found in modules.dep"

How can I boot the R5 processor, and where do I store the generated ELF file?
From: Manjukumar Harthikote Matha 
mailto:manju...@xilinx.com>>
Sent: 07 January 2019 13:53
To: Pandey, Kamal mailto:kamal.pan...@ifm.com>>; 
yocto@yoctoproject.org<mailto:yocto@yoctoproject.org>
Subject: RE: bitbake openamp-image-minimal fails to create image

Hi Kamal,

It seems the required kernel modules are missing, causing the breakage.
Enable them using kernel menuconfig (bitbake virtual/kernel -c menuconfig) and 
then build the image.

Thanks,
Manju

From: yocto-boun...@yoctoproject.org<mailto:yocto-boun...@yoctoproject.org> 
[mailto:yocto-boun...@yoctoproject.org] On Behalf Of Pandey, Kamal
Sent: Sunday, January 06, 2019 11:14 PM
To: yocto@yoctoproject.org<mailto:yocto@yoctoproject.org>
Subject: [yocto] bitbake openamp-image-minimal fails to create image

Hello,
I used the meta-openamp layer for R5-A53 communication,
but when I simply ran 'bitbake openamp-image-minimal', it gave me the following 
error:

ERROR: openamp-image-minimal-1.0-r0 do_rootfs: Could not invoke dnf. Command 
'/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/recipe-sysroot-native/usr/bin/dnf
 -y -c 
/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/rootfs/etc/dnf/dnf.conf
 
--setopt=reposdir=/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/rootfs/etc/yum.repos.d
 
--repofrompath=oe-repo,/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/oe-rootfs-repo
 
--installroot=/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/rootfs
 
--setopt=logdir=/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/temp
 --nogpgcheck install kernel-module-virtio-ring kernel-module-virtio-rpmsg-bus 
kernel-module-uio-pdrv-genirq kernel-module-virtio libopen-amp0 
packagegroup-base-extended kernel-image-fitimage-4.14.79-yocto-standard 
run-postinsts libmetal packagegroup-core-boot kernel-module-remoteproc' 
returned 1:
Added oe-repo repo from 
/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/oe-rootfs-repo
Last metadata expiration check: 0:00:00 ago on Fri 04 Jan 2019 01:58:31 PM UTC.
No package kernel-module-virtio-ring available.
No package kernel-module-virtio-rpmsg-bus available.
No package kernel-module-virtio available.
No package kernel-module-remoteproc available.
Error: Unable to find a match

ERROR: openamp-image-minimal-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: 
/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/temp/log.do_rootfs.7153
ERROR: Task 
(/home/iepl/work/yocto_build/poky/../meta-openamp/recipes-openamp/images/openamp-image-minimal.bb:do_rootfs)
 failed with exit code '1'


Can someone provide me with a solution to this problem?
Thanks
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] bitbake openamp-image-minimal fails to create image

2019-01-08 Thread Manjukumar Harthikote Matha
Hi Kamal,

It seems the required kernel modules are missing, causing the breakage.
Enable them using kernel menuconfig (bitbake virtual/kernel -c menuconfig) and 
then build the image.
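
If you want the change to be persistent rather than done through menuconfig, a 
config fragment in a kernel bbappend is one way to do it (a sketch only; the 
exact option names can differ per kernel version/SoC, so treat these as 
examples):

  # remoteproc.cfg
  CONFIG_REMOTEPROC=y
  CONFIG_RPMSG=y
  CONFIG_RPMSG_VIRTIO=y
  CONFIG_ZYNQMP_R5_REMOTEPROC=m

  # linux-xlnx_%.bbappend
  FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
  SRC_URI += "file://remoteproc.cfg"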

Thanks,
Manju

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Pandey, Kamal
Sent: Sunday, January 06, 2019 11:14 PM
To: yocto@yoctoproject.org
Subject: [yocto] bitbake openamp-image-minimal fails to create image

Hello,
I used the meta-openamp layer for R5-A53 communication,
but when I simply ran 'bitbake openamp-image-minimal', it gave me the following 
error:

ERROR: openamp-image-minimal-1.0-r0 do_rootfs: Could not invoke dnf. Command 
'/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/recipe-sysroot-native/usr/bin/dnf
 -y -c 
/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/rootfs/etc/dnf/dnf.conf
 
--setopt=reposdir=/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/rootfs/etc/yum.repos.d
 
--repofrompath=oe-repo,/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/oe-rootfs-repo
 
--installroot=/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/rootfs
 
--setopt=logdir=/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/temp
 --nogpgcheck install kernel-module-virtio-ring kernel-module-virtio-rpmsg-bus 
kernel-module-uio-pdrv-genirq kernel-module-virtio libopen-amp0 
packagegroup-base-extended kernel-image-fitimage-4.14.79-yocto-standard 
run-postinsts libmetal packagegroup-core-boot kernel-module-remoteproc' 
returned 1:
Added oe-repo repo from 
/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/oe-rootfs-repo
Last metadata expiration check: 0:00:00 ago on Fri 04 Jan 2019 01:58:31 PM UTC.
No package kernel-module-virtio-ring available.
No package kernel-module-virtio-rpmsg-bus available.
No package kernel-module-virtio available.
No package kernel-module-remoteproc available.
Error: Unable to find a match

ERROR: openamp-image-minimal-1.0-r0 do_rootfs: Function failed: do_rootfs
ERROR: Logfile of failure stored in: 
/media/iepl/iepl1/work/yocto_build/build-open-amp/tmp/work/pdm3_rev_b_zynqmp-pdm3-linux/openamp-image-minimal/1.0-r0/temp/log.do_rootfs.7153
ERROR: Task 
(/home/iepl/work/yocto_build/poky/../meta-openamp/recipes-openamp/images/openamp-image-minimal.bb:do_rootfs)
 failed with exit code '1'


Can someone provide me with a solution to this problem?
Thanks
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Notes: Yocto Project Technical Team Meeting @ Tue Jan 8, 2019 8am - 8:30am (PST)

2019-01-08 Thread Manjukumar Harthikote Matha
Hi 



> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Jolley, Stephen K
> Sent: Tuesday, January 08, 2019 8:29 AM
> To: yocto@yoctoproject.org
> Subject: [yocto] Notes: Yocto Project Technical Team Meeting @ Tue Jan 8, 2019
> 8am - 8:30am (PST)
> 
> Attendees: Stephen, Armin, Alex, Richard G., Joshua W., Richard. P., Randy, 
> Ross,
> Trevor, Jon, Lewis, Jesse, Tim, Matt, Tracey,
> 
> YP Status:  See - https://wiki.yoctoproject.org/wiki/Weekly_Status
> 
> 
> 
> * YP 2.7 M1 is out of QA and being prepared for release.
> * YP 2.5.2 was released on Jan. 4, 2019.
> * YP 2.6.1 should build soon.
> * YP 2.7 M2 is targeted for build Jan. 21, 2019.
> 
> 
> Richard and Stephen discussed QA status and plans for YP 2.6.1.  We will 
> still do
> some manual QA work in Q1’19 but plan to automate QA before the end of Q1’19.
> 
> Joshua discussed the hash work on sstate.  It has merged for M2. Discussed how
> build history use of hashes needs some updates to improve reproducibility.
> 
> Trevor asked about the Weekly status report meetings. These are where the 
> weekly
> status report is written and while all are welcome to attend, no one but 
> Stephen and
> Richard are required.
> 
> Trevor discussed that YP is dropping support for some old  ARM targets, since 
> they
> are being dropped from gcc.
> 
> Richard asked about incremental image packaging.  Randy commented that WRS is
> using this feature for RPM.
> 

This is something we are looking for; can you provide more details on the 
current implementation?

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto

2019-02-24 Thread Manjukumar Harthikote Matha
Hi Hongxu,

> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Stephen Lawrence
> Sent: Friday, February 22, 2019 8:52 AM
> To: Hongxu Jia ; richard.pur...@linuxfoundation.org;
> mhalst...@linuxfoundation.org; ross.bur...@intel.com; raj.k...@gmail.com;
> paul.eggle...@linux.intel.com; yocto@yoctoproject.org
> Cc: lpd-cdc-core-...@windriver.com; zhangle.y...@windriver.com
> Subject: Re: [yocto] Review request 0/13: Contribute meta-tensorflow to Yocto
> 
> Hi Hongxu,
> 
> > -Original Message-
> > From: yocto-boun...@yoctoproject.org 
> > On Behalf Of Hongxu Jia
> > Sent: 21 February 2019 11:37
> > To: richard.pur...@linuxfoundation.org; mhalst...@linuxfoundation.org;
> > ross.bur...@intel.com; raj.k...@gmail.com;
> > paul.eggle...@linux.intel.com; yocto@yoctoproject.org
> > Cc: lpd-cdc-core-...@windriver.com; zhangle.y...@windriver.com
> > Subject: [yocto] Review request 0/13: Contribute meta-tensorflow to
> > Yocto
> >
> > Hi RP and Yocto folks,
> >
> > Currently AI on IoT edge becomes more and more popular, but there is
> > no machine learning framework in Yocto/OE. With the support of Eric
> > , Robert  and
> > Randy , after two months effort, I've
> > integrated TensorFlow to Yocto.
> 
> Good work.
> 
> You might be interested in the yocto layers for tensorflow, tensorflow-lite 
> and
> caffe2 on github here [1]. I'm not part of the team that developed that work 
> but I
> forwarded your announcement to them. Perhaps there is the opportunity for some
> collaboration on the platform independent parts. The maintainer details are 
> in the
> readme.
> 

Thanks for the layer, Hongxu. I agree with Steve; it would be good if you could 
collaborate with meta-renesas-ai and introduce the layer as meta-ai under 
meta-openembedded.

Thanks,
Manju

> [1] https://github.com/renesas-rz/meta-renesas-ai
> 
> The layers were developed for the industrial focused Renesas RZ/G1 platforms.
> 
> Regards
> 
> Steve
> --
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Kernel Build Failures with Shared SSTATE

2017-09-20 Thread Manjukumar Harthikote Matha
Hi Richard,

> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Schmitt, Richard
> Sent: Friday, July 14, 2017 8:23 AM
> To: yocto@yoctoproject.org
> Subject: [yocto] Kernel Build Failures with Shared SSTATE
> 
> Hi,
> 
> 
> 
> I had been running into kernel build failures on the morty branch when using 
> a shared
> state.  First I'll describe the error, and then my solution.
> 
> 
> 
> The first build that initializes the sstate cache works fine.  Subsequent 
> clean builds
> will fail.  The failure
> 
> would occur in the do_compile_kernelmodules task.  The error would indicate a
> failure because tmp/work-shared//kernel-build-artifacts was missing.
> 
> 
> 
> My analysis concluded that the kernel build was restored from the cache, but 
> it did
> not restore the kernel-build-artifacts needed by the do_compile_kernelmodules 
> task.
> 
> 
> 
> My solution was to include the following in a bbappend file for the kernel:
> 
> 
> 
> SSTATETASKS += "do_shared_workdir"
> 
> do_shared_workdir[sstate-plaindirs] = "${STAGING_KERNEL_BUILDDIR}"
> 
> python do_shared_workdir_setscene () {
>     sstate_setscene(d)
> }
> 
> 
> 
> I assume the correct way to fix this would be to update
> meta/classes/kernel.bbclass. It looks like there was some attempt to do something
> with the shared_workdir, because there is a do_shared_workdir_setscene routine,
> but right now it just returns 1. Is that intentional? It seems wrong.
> 

I am facing the same issue as well, but have seen only a few instances of 
failure, and I have not been able to concretely figure out the exact steps to 
replicate it.
Is it better to remove the addtask for shared_workdir_setscene?
If you look at the do_deploy task in kernel.bbclass, it doesn't handle the 
setscene task either.

Thanks,
Manju

-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-raspberrypi][PATCH] Revert "qtbase: Enable EGLFS support"

2017-09-27 Thread Manjukumar Harthikote Matha
Hi Otavio,

> -Original Message-
> From: Otavio Salvador [mailto:otavio.salva...@ossystems.com.br]
> Sent: Wednesday, September 27, 2017 12:23 PM
> To: Manjukumar Harthikote Matha 
> Cc: Khem Raj ; yo...@lists.yoctoproject.org; Otavio
> Salvador 
> Subject: Re: [yocto] [meta-raspberrypi][PATCH] Revert "qtbase: Enable EGLFS
> support"
> 
> On Wed, Sep 27, 2017 at 3:53 PM, Manjukumar Harthikote Matha
>  wrote:
> ...
> >> https://github.com/Freescale/meta-freescale/blob/master/classes/fsl-dynamic-
> >> packagearch.bbclass
> >>
> >> Something like this?
> >>
> >
> > This is very useful, can this concept be upstreamed to OE-Core?
> 
> I think so; if people agree with this concept I can work in upstreaming it.
> 
> There is also the machine-overrides-extender.bbclass[1] which allows
> for overrides to be added/removed based on other overrides.
> 
> 1. https://github.com/Freescale/meta-freescale/blob/master/classes/machine-
> overrides-extender.bbclass
> 
> This was how I could generalize the BSP in a kind of SoC feature set.
> 
> You can see the original commit where I enable it:
> 
> https://github.com/Freescale/meta-
> freescale/commit/ad4611ab16bcd09eef11d630159253a12c5ecced#diff-
> 7bac7755a2891a94e863ed0a7af1876a
> 
Thanks for the patch.

We were thinking along similar lines for SoC variants and MACHINE variants 
(basically boards supporting different features like GPU, VCU, etc.). These 
configuration mechanisms would help resolve those cases. A SoC-level package 
arch can also help with the package-feed mechanism for boards.

You should definitely consider upstreaming these to OE-Core.
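
As a rough illustration of the SoC-level package arch we have in mind (all 
names below are made up for illustration, not actual meta-freescale variables):

  # conf/machine/include/mysoc.inc (hypothetical)
  SOC_FAMILY = "mysoc"
  require conf/machine/include/soc-family.inc
  PACKAGE_EXTRA_ARCHS_append = " ${SOC_FAMILY}"

  # in a recipe that is SoC-specific but not board-specific
  PACKAGE_ARCH = "${SOC_FAMILY}"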

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-raspberrypi][PATCH] Revert "qtbase: Enable EGLFS support"

2017-09-27 Thread Manjukumar Harthikote Matha
Hi Otavio,

> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Otavio Salvador
> Sent: Wednesday, September 27, 2017 10:29 AM
> To: Khem Raj 
> Cc: yo...@lists.yoctoproject.org; Otavio Salvador 
> Subject: Re: [yocto] [meta-raspberrypi][PATCH] Revert "qtbase: Enable EGLFS
> support"
> 
> On Wed, Sep 27, 2017 at 2:20 PM, Khem Raj  wrote:
> >
> > On Wed, Sep 27, 2017 at 10:17 AM Andrei Gherzan  wrote:
> >>
> >> On Wed, Sep 27, 2017 at 4:23 PM, Martin Jansa
> >> 
> >> wrote:
> >>>
> >>> * this reverts commit 04b37dbdb79638b17a670280058400ffaf1b6ccb.
> >>> * this makes qtbase and everything which depends on some qt* recipe to
> >>>   be effectivelly MACHINE_ARCH
> >>>
> >>> Signed-off-by: Martin Jansa 
> >>> ---
> >>>  dynamic-layers/qt5-layer/recipes-qt/qt5/qtbase_%.bbappend | 3 ---
> >>>  1 file changed, 3 deletions(-)
> >>>  delete mode 100644
> >>> dynamic-layers/qt5-layer/recipes-qt/qt5/qtbase_%.bbappend
> >>>
> >>> diff --git
> >>> a/dynamic-layers/qt5-layer/recipes-qt/qt5/qtbase_%.bbappend
> >>> b/dynamic-layers/qt5-layer/recipes-qt/qt5/qtbase_%.bbappend
> >>> deleted file mode 100644
> >>> index ae3f1d3..000
> >>> --- a/dynamic-layers/qt5-layer/recipes-qt/qt5/qtbase_%.bbappend
> >>> +++ /dev/null
> >>> @@ -1,3 +0,0 @@
> >>> -# Copyright (C) 2017 O.S. Systems Software LTDA.
> >>> -
> >>> -PACKAGECONFIG_GL_rpi   = "gles2 eglfs"
> >>
> >>
> >> What would be the solution though?
> >
> > I think check for OpenGL feature to enable it I think another thing is
> > to also check for X11 in distro features before enabling it
> >
> > Gl support is quite soc specific so I don't think there is an elegant
> > way unless qt components can be built with soc specific elements as
> > plugins or something which then can have independent recipe
> 
> https://github.com/Freescale/meta-freescale/blob/master/classes/fsl-dynamic-
> packagearch.bbclass
> 
> Something like this?
> 

This is very useful; can this concept be upstreamed to OE-Core?

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


[yocto] Using kernel fitimage with initramfs

2017-10-12 Thread Manjukumar Harthikote Matha
Hi All,

I had a question about kernel-fitimage.bbclass. I am enabling the fitImage using 
KERNEL_CLASSES += "kernel-fitimage" and KERNEL_IMAGETYPE = "fitImage".
It works, and I see the fitImage in my deploy directory without any issues.

However, when I enable an initramfs along with the fitImage, using INITRAMFS_IMAGE = 
"core-image-minimal" and INITRAMFS_IMAGE_BUNDLE = "1", the kernel build fails.
The failure mostly comes from kernel.bbclass, because it tries to deploy the fitImage: 
https://github.com/openembedded/openembedded-core/blob/master/meta/classes/kernel.bbclass#L639
Am I using this feature correctly? Is anyone else facing the same issue?
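
For clarity, the configuration in question boils down to:

  KERNEL_CLASSES += "kernel-fitimage"
  KERNEL_IMAGETYPE = "fitImage"
  INITRAMFS_IMAGE = "core-image-minimal"
  INITRAMFS_IMAGE_BUNDLE = "1"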

Below is an initial patch I made to get past the error, but I am not sure if it 
is the correct fix.

diff --git a/meta/classes/kernel.bbclass b/meta/classes/kernel.bbclass
index 756707a..d5342b4 100644
--- a/meta/classes/kernel.bbclass
+++ b/meta/classes/kernel.bbclass
@@ -208,14 +208,16 @@ do_bundle_initramfs () {
        # Backing up kernel image relies on its type(regular file or symbolic link)
        tmp_path=""
        for type in ${KERNEL_IMAGETYPES} ; do
-               if [ -h ${KERNEL_OUTPUT_DIR}/$type ] ; then
-                       linkpath=`readlink -n ${KERNEL_OUTPUT_DIR}/$type`
-                       realpath=`readlink -fn ${KERNEL_OUTPUT_DIR}/$type`
-                       mv -f $realpath $realpath.bak
-                       tmp_path=$tmp_path" "$type"#"$linkpath"#"$realpath
-               elif [ -f ${KERNEL_OUTPUT_DIR}/$type ]; then
-                       mv -f ${KERNEL_OUTPUT_DIR}/$type ${KERNEL_OUTPUT_DIR}/$type.bak
-                       tmp_path=$tmp_path" "$type"##"
+               if [ "$type" != "fitImage" ]; then
+                       if [ -h ${KERNEL_OUTPUT_DIR}/$type ] ; then
+                               linkpath=`readlink -n ${KERNEL_OUTPUT_DIR}/$type`
+                               realpath=`readlink -fn ${KERNEL_OUTPUT_DIR}/$type`
+                               mv -f $realpath $realpath.bak
+                               tmp_path=$tmp_path" "$type"#"$linkpath"#"$realpath
+                       elif [ -f ${KERNEL_OUTPUT_DIR}/$type ]; then
+                               mv -f ${KERNEL_OUTPUT_DIR}/$type ${KERNEL_OUTPUT_DIR}/$type.bak
+                               tmp_path=$tmp_path" "$type"##"
+                       fi
                fi
        done

        use_alternate_initrd=CONFIG_INITRAMFS_SOURCE=${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
@@ -627,8 +629,10 @@ MODULE_TARBALL_DEPLOY ?= "1"

 kernel_do_deploy() {
        for type in ${KERNEL_IMAGETYPES} ; do
-               base_name=${type}-${KERNEL_IMAGE_BASE_NAME}
-               install -m 0644 ${KERNEL_OUTPUT_DIR}/${type} ${DEPLOYDIR}/${base_name}.bin
+               if [ "$type" != "fitImage" ]; then
+                       base_name=${type}-${KERNEL_IMAGE_BASE_NAME}
+                       install -m 0644 ${KERNEL_OUTPUT_DIR}/${type} ${DEPLOYDIR}/${base_name}.bin
+               fi
        done
        if [ ${MODULE_TARBALL_DEPLOY} = "1" ] && (grep -q -i -e '^CONFIG_MODULES=y$' .config); then
                mkdir -p ${D}/lib
@@ -637,21 +641,25 @@ kernel_do_deploy() {
        fi

        for type in ${KERNEL_IMAGETYPES} ; do
-               base_name=${type}-${KERNEL_IMAGE_BASE_NAME}
-               symlink_name=${type}-${KERNEL_IMAGE_SYMLINK_NAME}
-               ln -sf ${base_name}.bin ${DEPLOYDIR}/${symlink_name}.bin
-               ln -sf ${base_name}.bin ${DEPLOYDIR}/${type}
+               if [ "$type" != "fitImage" ]; then
+                       base_name=${type}-${KERNEL_IMAGE_BASE_NAME}
+                       symlink_name=${type}-${KERNEL_IMAGE_SYMLINK_NAME}
+                       ln -sf ${base_name}.bin ${DEPLOYDIR}/${symlink_name}.bin
+                       ln -sf ${base_name}.bin ${DEPLOYDIR}/${type}
+               fi
        done

        cd ${B}
        # Update deploy directory
        for type in ${KERNEL_IMAGETYPES} ; do
-               if [ -e "${KERNEL_OUTPUT_DIR}/${type}.initramfs" ]; then
-                       echo "Copying deploy ${type} kernel-initramfs image and setting up links..."
-                       initramfs_base_name=${type}-${INITRAMFS_BASE_NAME}
-                       initramfs_symlink_name=${type}-initramfs-${MACHINE}
-                       install -m 0644 ${KERNEL_OUTPUT_DIR}/${type}.initramfs ${DEPLOYDIR}/${initramfs_base_name}.bin
-                       ln -sf ${initramfs_base_name}.bin ${DEPLOYDIR}/${initramfs_symlink_name}.bin
+               if [ "$type" != "fitImage" ]; then
+                       if [ -e "${KERNEL_OUTPUT_DIR}/${type}.initramfs" ]; then
+                               echo "Copying deploy ${type} kernel-initramfs image and setting up links..."
+                               initramfs_base_name=${type}-${INITRAM

[yocto] liblzma: Memory allocation failed on do_package_rpm

2018-01-14 Thread Manjukumar Harthikote Matha
All,

Has anybody seen this error?

Finished binary package job, result 0, filename (null)
error: create archive failed: cpio: write failed - Cannot allocate memory
error: liblzma: Memory allocation failederror: liblzma: Memory allocation 
failedFinished binary package job, result 2,

...

RPM build errors:
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| Deprecated external dependency generator is used!
| liblzma: Memory allocation failedliblzma: Memory allocation failed
liblzma: Memory allocation failedliblzma: Memory allocation failed
liblzma: Memory allocation failedliblzma: Memory allocation failed

This happens in the do_package_rpm task for various recipes 
(coreutils, libgpg, diffutils, etc.).

The host machine has a good amount of memory, 256 GB to be exact. When I see the 
error and run 'free -h', I still see ~120 GB of free memory available.
This is a RHEL 7.2 machine, and I am running the rocko baseline.

Any reason why this would happen? Where should I look to fix this issue?

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-xilinx] liblzma: Memory allocation failed on do_package_rpm

2018-01-22 Thread Manjukumar Harthikote Matha
Hi All,

> -Original Message-
> From: Alejandro Enedino Hernandez Samaniego
> Sent: Monday, January 15, 2018 11:00 PM
> To: Manjukumar Harthikote Matha ; openembedded-
> c...@lists.openembedded.org; yocto@yoctoproject.org; meta-
> xil...@yoctoproject.org
> Subject: RE: [meta-xilinx] liblzma: Memory allocation failed on do_package_rpm
> 
> Hey Manju,
> 
> I'd like to see if I can reproduce this, could you please send the the steps 
> you
> followed?
> 
> Thanks!
> 
> Alejandro
> 
> -Original Message-
> From: meta-xilinx-boun...@yoctoproject.org [mailto:meta-xilinx-
> boun...@yoctoproject.org] On Behalf Of Manjukumar Harthikote Matha
> Sent: Sunday, January 14, 2018 10:20 PM
> To: openembedded-c...@lists.openembedded.org; yocto@yoctoproject.org;
> meta-xil...@yoctoproject.org
> Subject: [meta-xilinx] liblzma: Memory allocation failed on do_package_rpm
> 
> All,
> 
> Has anybody seen this error?
> 
> Finished binary package job, result 0, filename (null)
> error: create archive failed: cpio: write failed - Cannot allocate memory
> error: liblzma: Memory allocation failederror: liblzma: Memory allocation
> failedFinished binary package job, result 2,
> 
> ...
> 
> RPM build errors:
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | Deprecated external dependency generator is used!
> | liblzma: Memory allocation failedliblzma: Memory allocation failed  
>   liblzma:
> Memory allocation failedliblzma: Memory allocation failedliblzma: 
> Memory
> allocation failedliblzma: Memory allocation failed
> 
> This happens in do_package_rpm task on various recipes, 
> coreutils/libgpg/diffutils
> etc.
> 
> The host machine has good amount on memory, 256G to be exact. When I see the
> error and issue free -h command, I still see ~120G free memory available.
> This is a RHEL 7.2 machine and I am running rocko baseline.
> 
> Any reason why this would happen? Where should I look to fix this issue?
> 


This seems to happen if your server has the following (non-standard) 
configuration:
vm.overcommit_memory = 2
vm.overcommit_ratio = 90

And probably this as well
kernel.unknown_nmi_panic = 1

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] scipy recipe

2018-03-26 Thread Manjukumar Harthikote Matha

> -Original Message-
> From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org]
> On Behalf Of Matthias Schöpfer
> Sent: Friday, March 23, 2018 3:08 AM
> To: Peter Balazovic ; Yocto-mailing-list
> 
> Subject: Re: [yocto] scipy recipe
> 
> Hi Peter,
> 
> I managed to get scipy to cross compile, since I was in a hurry, and have no 
> deeper
> understanding of python / distutils / setuptools, it turned out to be an ugly 
> hack (but
> obviously I was not the first one to do ugly things there ;) )
> 
> Maybe you have had some progress as well, and we can figure out a nicer 
> solution.
> 
> See the attached files, involving openblas, a bbappend for python-numpy and
> python-scipy.
> 

Thanks Matthias, will try this out

Thanks,
Manju
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-xilinx] xen-image-minimal testing

2017-05-02 Thread Manjukumar Harthikote Matha


> -Original Message-
> From: meta-xilinx-boun...@yoctoproject.org [mailto:meta-xilinx-
> boun...@yoctoproject.org] On Behalf Of Jason Wu
> Sent: Tuesday, May 02, 2017 2:16 AM
> To: Pello Heriz ; meta-
> xil...@yoctoproject.org; meta-xilinx-requ...@yoctoproject.org; meta-
> virtualizat...@yoctoproject.org; yocto@yoctoproject.org
> Subject: Re: [meta-xilinx] xen-image-minimal testing
>
>
>
> On 2/05/2017 4:33 PM, Pello Heriz wrote:
> > Hi all,
> >
> > I have built an image using "xen-image-minimal" command with Yocto and
> > I would want to know how can I test if xen functionalities are correct
> > or not with Zynq MPSoC QEMU. How can I do this?
> http://www.wiki.xilinx.com/Building+the+Xen+Hypervisor+with+PetaLinux+2016.4+
> and+newer
>
> have look at the "TFTP Booting Xen and Dom0 2016.4" section and hope that 
> helps.
>
> >
> > Anyway, I have seen that sometimes in the QEMU terminal appears the
> > next message when the mentioned image is launched by "runqemu zcu102".
> >
> > INIT: Id "X0" respawning too fast: disabled for 5 minutes
> >
> > What's the meaning of the message? Is it critical?
> It is not critical if you don't need it. The reason you are getting this
> error message is that your inittab is trying to spawn a console when the
> device node (e.g. hvc0) for Id "X0" does not exist.
> Id "X0" does not exists.
>

Is there a way to stop inittab from doing this?
I tried a few options in inittab, for example using "once" instead of "respawn".

Any other good ideas on how we can disable it? Or what is a better approach to 
resolve this issue at runtime?

Thanks
Manju

> Jason
> >
> > Any answer will be welcome,
> >
> > Best regards,
> > Pello
> >
> >
> >
> --
> ___
> meta-xilinx mailing list
> meta-xil...@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/meta-xilinx



-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [meta-xilinx] xen-image-minimal testing

2017-05-02 Thread Manjukumar Harthikote Matha


> -Original Message-
> From: Nathan Rossi [mailto:nat...@nathanrossi.com]
> Sent: Tuesday, May 02, 2017 10:23 AM
> To: Manjukumar Harthikote Matha 
> Cc: Jason Wu ; Pello Heriz
> ; meta-xil...@yoctoproject.org; meta-xilinx-
> requ...@yoctoproject.org; meta-virtualizat...@yoctoproject.org;
> yocto@yoctoproject.org
> Subject: Re: [meta-xilinx] xen-image-minimal testing
>
> On 3 May 2017 at 02:47, Manjukumar Harthikote Matha  ma...@xilinx.com> wrote:
> >
> >
> >> -Original Message-
> >> From: meta-xilinx-boun...@yoctoproject.org [mailto:meta-xilinx-
> >> boun...@yoctoproject.org] On Behalf Of Jason Wu
> >> Sent: Tuesday, May 02, 2017 2:16 AM
> >> To: Pello Heriz ; meta-
> >> xil...@yoctoproject.org; meta-xilinx-requ...@yoctoproject.org; meta-
> >> virtualizat...@yoctoproject.org; yocto@yoctoproject.org
> >> Subject: Re: [meta-xilinx] xen-image-minimal testing
> >>
> >>
> >>
> >> On 2/05/2017 4:33 PM, Pello Heriz wrote:
> >> > Hi all,
> >> >
> >> > I have built an image using "xen-image-minimal" command with Yocto
> >> > and I would want to know how can I test if xen functionalities are
> >> > correct or not with Zynq MPSoC QEMU. How can I do this?
> >> http://www.wiki.xilinx.com/Building+the+Xen+Hypervisor+with+PetaLinux
> >> +2016.4+
> >> and+newer
> >>
> >> have look at the "TFTP Booting Xen and Dom0 2016.4" section and hope that
> helps.
> >>
> >> >
> >> > Anyway, I have seen that sometimes in the QEMU terminal appears the
> >> > next message when the mentioned image is launched by "runqemu zcu102".
> >> >
> >> > INIT: Id "X0" respawning too fast: disabled for 5 minutes
> >> >
> >> > What's the meaning of the message? Is it critical?
> >> It is not critical if you don't need it. The reason you are getting
> >> this error message is because the your inittab trying spawn a console
> >> when the device node (e.g. hvc0) for Id "X0" does not exists.
> >>
> >
> > Is there a way to stop inittab from doing this?
> > Tried few options for initiab, for ex: using "once" instead of "spawn"
> >
> > Any other good ideas on how we can disable it? Or what is a better
> > approach to resolve this issue during runtime
>
> Change the sysvinit-inittab bbappend in meta-virtualization use start_getty 
> instead
> of just getty, start_getty does a 'test -c' before starting getty on the 
> device. This is
> how sysvinit-inittab handles SERIAL_CONSOLES. Alternatively the 
> meta-virtualization
> bbappend could just expand SERIAL_CONSOLES.
>
Something like this?
https://github.com/Xilinx/meta-virtualization/commit/610887495c01f0b17db6084e1426cc55a3f806ea

We still see inittab looking for the console even after this change.
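
For reference, what I understood Nathan's suggestion to amount to is roughly a 
sysvinit-inittab bbappend along these lines (a sketch; hvc0 and the baud rate 
are just the values for our Xen dom0 case):

  # sysvinit-inittab_%.bbappend
  # declare the Xen console so the generated getty line tests for the device
  SERIAL_CONSOLES_append = " 115200;hvc0"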

Thanks
Manju


> Regards,
> Nathan



-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] [yocto-announce] [ANNOUNCEMENT] Yocto Project 2.6.2 (thud 20.0.2) Released

2019-04-18 Thread Manjukumar Harthikote Matha
Hi Tracy,

> -Original Message-
> From: yocto-announce-boun...@yoctoproject.org [mailto:yocto-announce-
> boun...@yoctoproject.org] On Behalf Of Tracy Graydon
> Sent: Wednesday, April 17, 2019 1:25 PM
> To: yocto-annou...@yoctoproject.org; yocto@yoctoproject.org
> Subject: [yocto-announce] [ANNOUNCEMENT] Yocto Project 2.6.2 (thud 20.0.2)
> Released
> 
> We are pleased to announce the latest release of the Yocto Project 2.6.2 
> (thud-
> 20.0.2) is now available for download:
> 
> http://downloads.yoctoproject.org/releases/yocto/yocto-2.6.2/poky-thud-
> 20.0.2.tar.bz2
> http://mirrors.kernel.org/yocto/yocto/yocto-2.6.2/poky-thud-20.0.2.tar.bz2
> 
> A gpg signed version of these release notes is available at:
> 
> http://downloads.yoctoproject.org/releases/yocto/yocto-2.6.2/RELEASENOTES
> 
> Yocto 2.6.2 QA reporting:
> 
> Summary: https://lists.yoctoproject.org/pipermail/yocto/2019-
> April/044827.html
> Results: 
> https://autobuilder.yocto.io/pub/releases/yocto-2.6.2.rc3/testresults/
> 

I am not able to access this results page. It says "No Such Resource
File not found."


Thanks,
Manju


> Build log:
> hhttps://autobuilder.yoctoproject.org/typhoon/#/builders/83/builds/122
> 
> Release Criteria:
> https://wiki.yoctoproject.org/wiki/Yocto_Project_v2.6_Status#Yocto_Project_2
> .6.2_release
> 
> 
> Thank you for everyone's contributions to this release.
> 
> Sincerely,
> 
> Tracy Graydon
> Yocto Project Build and Release
> tracy.gray...@intel.com
> --
> ___
> yocto-announce mailing list
> yocto-annou...@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto-announce
-- 
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto