Re: [yocto] How to extract files from wic.gz image?

2019-06-01 Thread JH
Thanks Zoran and Tom, I can use dd to install the image to the SD card
on the IMX EVK. Now I need to install the image to the NAND flash of a
customized IMX device via USB HID, where dd cannot be used. IMX
provides the UUU utility, which I am new to, so I have to decompose the
image into zImage-initramfs and the other parts; losetup -P does not
work, and dd does not work either:

$ losetup -P dev-image-20190528085324.rootfs.wic
losetup: solar-dev-image-solarevk-20190518084330.rootfs.wic: failed to
use device: No such device
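
In case it is useful to anyone else reading this thread: the loop-mount
approach does work in principle, but it typically needs root privileges
and a loop-capable host kernel. A minimal sketch, using the image name
from above; the partition layout may differ on your image:

$ gunzip -k dev-image-20190528085324.rootfs.wic.gz
$ sudo losetup -fP --show dev-image-20190528085324.rootfs.wic
/dev/loop0
$ sudo mount /dev/loop0p2 /mnt    # the rootfs is usually the second partition
$ ls /mnt                         # copy out whatever files are needed
$ sudo umount /mnt && sudo losetup -d /dev/loop0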

Thanks for your help anyway.

Kind regards,

- jupiter

On 5/31/19, Tom Rini  wrote:
> On Fri, May 31, 2019 at 10:21:40PM +1000, JH wrote:
>> Hi,
>>
>> What command and tools to extract files from Yocto / bitbake image
>> such as dev-image-20190528085324.rootfs.wic.gz?
>>
>> I am using IMX UUU to install dev-image-20190528085324.rootfs.wic.gz
>> to IMX, I was advised to extract the archive and use "uuu 

[yocto] do_rootfs fails while attempting to install hostapd package

2019-06-01 Thread Morné Lamprecht

Hi

I created a new custom distribution and everything seems to work fine
until the do_rootfs task is executed. It fails specifically when trying
to install the hostapd package, with the error below (snippet from
log.do_rootfs):



Running scriptlet: hostapd-2.6-r0.aarch64

usage: update-rc.d [-n] [-f] [-r <root>] <basename> remove
       update-rc.d [-n] [-r <root>] [-s] <basename> defaults [NN | sNN kNN]
       update-rc.d [-n] [-r <root>] [-s] <basename> start|stop NN runlvl [runlvl]

error: %prein(hostapd-2.6-r0.aarch64) scriptlet failed, exit status 1



error: hostapd-2.6-r0.aarch64: install failed


The hostapd package itself builds fine; it is just the installation to
the rootfs that fails.


The hostapd package is specified in MACHINE_EXTRA_DEPENDS; if I remove
it, the build succeeds without any issues.


If I interpret the error correctly, it is the preinstall scriptlet that
fails... but I am not sure where to start debugging this.


Any suggestions?

- Morné


Re: [yocto] do_rootfs fails while attempting to install hostapd package

2019-06-01 Thread Belisko Marek
Hi,

On Sat, Jun 1, 2019 at 4:41 PM Morné Lamprecht  wrote:
>
> Hi
>
> I created a new custom distribution and
> everything seems to work fine until the do_rootfs
> task is executed. It fails specifically when
> trying to install the hostapd package, with the
> error below (snippet from log.do_rootfs):
I would check run.do_rootfs; maybe there will be some more info there.
Do you have some custom extension for hostapd? Thanks.
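
You could also dump the scriptlets shipped in the built package to see
the exact update-rc.d call that is failing. A sketch, assuming the RPM
packaging backend; the deploy path below is the usual default and may
differ in your build:

$ rpm -qp --scripts tmp/deploy/rpm/aarch64/hostapd-2.6-r0.aarch64.rpm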
>
> > Running scriptlet: hostapd-2.6-r0.aarch64
> >
> > usage: update-rc.d [-n] [-f] [-r <root>] <basename> remove
> >        update-rc.d [-n] [-r <root>] [-s] <basename> defaults [NN | sNN kNN]
> >        update-rc.d [-n] [-r <root>] [-s] <basename> start|stop NN runlvl [runlvl]
> >
> > error: %prein(hostapd-2.6-r0.aarch64) scriptlet failed, exit status 1
>
> > error: hostapd-2.6-r0.aarch64: install failed
>
> The hostapd package itself builds fine, it is
> just the installation to the rootfs that fails.
>
> The hostapd package is specified in
> MACHINE_EXTRA_DEPENDS, if I remove it, then the
> build succeeds without any issues.
>
> If I interpret the error correctly, it is the
> preinstall scriptlet that fails...but I am not
> sure where to start debugging this.
>
> Any suggestions ?
>
> - Morné

marek


Re: [yocto] do_rootfs fails while attempting to install hostapd package

2019-06-01 Thread Morné Lamprecht
> I would check run.do_rootfs; maybe there will be some more info there.

I checked it based on your suggestion, but unfortunately found no
relevant info.



> Do you have some custom extension for hostapd?


No, just the standard package.

- Morné


Re: [yocto] prelink-cross with -fno-plt

2019-06-01 Thread Mark Hatle
Thanks, this shows that the prelinking is still working in this case.  I'll get
your patch queued up.  If you don't see any progress on it this coming week,
please feel free to remind me.
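
For anyone repeating this measurement later, the sequence boils down to
roughly the following (a sketch; the grep filter is only there to trim
the output, and sudo may or may not be needed depending on how the test
binaries were installed):

$ sudo prelink -auv     # undo prelinking: the "without prelink" case
$ LD_DEBUG=statistics ssh-add 2>&1 | grep -E 'relocation|startup'
$ sudo prelink -av      # prelink again: the "with prelink" case
$ LD_DEBUG=statistics ssh-add 2>&1 | grep -E 'relocation|startup'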

--Mark

On 5/29/19 1:42 PM, Shane Peelar wrote:
> Hi Mark,
> 
> Thank you for your reply and no problem -- I chose to benchmark ssh-add with
> it.  It contains no `.plt`.
> 
> The results are as follows:
> 
> Without prelink (ran prelink -auv):
> 
>      26019:
>      26019:     runtime linker statistics:
>      26019:       total startup time in dynamic loader: 1321674 cycles
>      26019:                 time needed for relocation: 797948 cycles (60.3%)
>      26019:                      number of relocations: 624
>      26019:           number of relocations from cache: 3
>      26019:             number of relative relocations: 9691
>      26019:                time needed to load objects: 389972 cycles (29.5%)
> Could not open a connection to your authentication agent.
>      26019:
>      26019:     runtime linker statistics:
>      26019:                final number of relocations: 630
>      26019:     final number of relocations from cache: 3
> 
> With prelink (ran prelink -av):
> 
>       1930:
>       1930:     runtime linker statistics:
>       1930:       total startup time in dynamic loader: 462288 cycles
>       1930:                 time needed for relocation: 48730 cycles (10.5%)
>       1930:                      number of relocations: 7
>       1930:           number of relocations from cache: 134
>       1930:             number of relative relocations: 0
>       1930:                time needed to load objects: 286076 cycles (61.8%)
> Could not open a connection to your authentication agent.
>       1930:
>       1930:     runtime linker statistics:
>       1930:                final number of relocations: 9
>       1930:     final number of relocations from cache: 134
> 
> I also tested against execstack, which for sure had the assertion fire on.
> Without prelink:
> 
>      27736:
>      27736:     runtime linker statistics:
>      27736:       total startup time in dynamic loader: 1955954 cycles
>      27736:                 time needed for relocation: 755440 cycles (38.6%)
>      27736:                      number of relocations: 247
>      27736:           number of relocations from cache: 3
>      27736:             number of relative relocations: 1353
>      27736:                time needed to load objects: 710384 cycles (36.3%)
> /usr/bin/execstack: no files given
>      27736:
>      27736:     runtime linker statistics:
>      27736:                final number of relocations: 251
>      27736:     final number of relocations from cache: 3
> 
> With prelink:
> 
>       3268:
>       3268:     runtime linker statistics:
>       3268:       total startup time in dynamic loader: 1421206 cycles
>       3268:                 time needed for relocation: 199396 cycles (14.0%)
>       3268:                      number of relocations: 3
>       3268:           number of relocations from cache: 88
>       3268:             number of relative relocations: 0
>       3268:                time needed to load objects: 696886 cycles (49.0%)
> /usr/bin/execstack: no files given
>       3268:
>       3268:     runtime linker statistics:
>       3268:                final number of relocations: 5
>       3268:     final number of relocations from cache: 88
> 
> So, it looks like prelink is working on these :)
> 
> On Tue, May 28, 2019 at 2:57 PM Mark Hatle wrote:
> 
> Sorry for my delayed reply.  I was out on a business trip.
> 
> Did you try this with the ld.so statistics to see if the relocations were
> indeed reduced at runtime?
> 
> One of my worries with these changes (since I am not an ELF expert either)
> is that we make a change that doesn't actually do anything -- but people
> expect it to.
> 
> $ LD_DEBUG=help /lib/ld-linux.so.2
> Valid options for the LD_DEBUG environment variable are:
> 
>   libs        display library search paths
>   reloc       display relocation processing
>   files       display progress for input file
>   symbols     display symbol table processing
>   bindings    display information about symbol binding
>   versions    display version dependencies
>   scopes      display scope information
>   all         all previous options combined
>   statistics  display relocation statistics
>   unused      determined unused DSOs
>   help        display this help message and exit
> 
> To direct the debugging output into a file instead of standard output
> a filename can be specified using the LD_DEBUG_OUTPUT environment 
> variable.
> 
> I believe that it's the 'statistics' option.
> 
> LD_DEBUG=statistics <application>
> 
> Should result in something like:
> 
>     128820:     runtime linker statistics:
>     128820:       total startup time in dynamic