On Mon, 25 Feb 2019 at 00:45, chaitanya cherukuri
wrote:
> Thank you for the clarification.
> In do_install(), I used rpm2cpio.sh to extract the rpm and then copied the
> rpm contents in the right place.
> $rpm2cpio.sh *.rpm | cpio -idmv
I hope this is the right way to handle RPM packages.
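In recipe form, a do_install of that shape might be sketched as follows (a sketch only: the RPM file name, the extracted paths, and the installed file are illustrative; rpm2cpio.sh is the helper shipped in OE-Core's scripts/ directory):

```
do_install() {
    # Unpack the RPM payload into the current directory,
    # then copy the extracted files into the image staging dir ${D}.
    rpm2cpio.sh ${WORKDIR}/example.rpm | cpio -idmv
    install -d ${D}${bindir}
    install -m 0755 usr/bin/example ${D}${bindir}/example
}
```

Copying explicitly into ${D} (rather than extracting straight into it) keeps ownership and permissions under the recipe's control.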
On Mon, Feb 25, 2019 at 5:09 AM Burton, Ross wrote:
On Mon, 25 Feb 2019 at 12:14, chaitanya cherukuri
wrote:
>> If I understand correctly, you want me to copy what the script is doing in
>> do_install(). I was thinking of adding the script as a startup script to a
>> Yocto image, so that I don't need to copy anything manually and can let the
>> script do its job.
From: Stefan Agner
Process consecutive commands separated by null-terminations. Since
it is a FIFO, in theory, two commands can be queued from two
independent calls to psplash-write. This also makes the command
parser more robust. With this code, sequences like this get
parsed just fine:
echo -e
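As a sketch of what such a queued sequence looks like on the wire (the command names are the ones psplash-write normally sends; writing to a plain file here just to show the byte layout, since in practice the target is psplash's FIFO):

```shell
# Two psplash commands queued in a single write, NUL-separated.
# A NUL-aware parser reads both; a naive one would stop at the first.
printf 'PROGRESS 50\0MSG hello\0' > /tmp/psplash_fifo
```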
From: Stefan Agner
Use /run for the communication FIFO, which is typically preserved
between the initramfs and the regular root file system. Introduce a
new environment variable PSPLASH_FIFO_DIR which allows passing
/tmp for the old behavior, or another directory.
Signed-off-by: Stefan Agner
---
psplash-wri
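The directory-selection behavior the patch describes can be sketched in shell (the variable name PSPLASH_FIFO_DIR comes from the patch; the fallback value is psplash's historical default):

```shell
# Prefer PSPLASH_FIFO_DIR when set, otherwise fall back to /tmp.
PSPLASH_FIFO_DIR=/run
fifo_dir=${PSPLASH_FIFO_DIR:-/tmp}
echo "$fifo_dir"
```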
From: Stefan Agner
While the source files for the main splash image are present in
the source folder base-image, the progress bar's isn't. This
patch adds the bar.png recovered from the RLE data in the bar
header file. The tool make-image-header.sh allows translating
this png back to the header ex
All,
The triage team meets weekly and does its best to handle the bugs reported
in Bugzilla. The number of people attending that meeting has fallen,
as has the number of people available to help fix bugs. One of the things
we hear users report is that they don't know how to help. We (the triage
> -----Original Message-----
> From: Khem Raj
> Sent: 23 February 2019 17:05
> To: Richard Purdie
> Cc: Manjukumar Harthikote Matha ; Stephen Lawrence
> ; Hongxu Jia ;
> mhalst...@linuxfoundation.org; ross.bur...@intel.com;
> paul.eggle...@linux.intel.com; yocto@yoctoproject.org; lpd-cdc-core-
>
Hi,
I'm using the meta-java sumo branch, and on Ubuntu 18.04 I have this issue
(building for the beaglebone-yocto machine):
ERROR: jaxp1.3-native-1.4.01-r0 do_compile: Function failed: do_compile
(log file is located at
/home/jenkins/my_build/tmp/work/x86_64-linux/jaxp1.3-native/1.4.01-r0/temp/log.do_compi
I've been spending a bit too long this past week trying to build up a
reproducible build infrastructure in AWS. I've got very little
experience with cloud infrastructure and I'm wondering if I'm going in the
wrong direction. I'm attempting to host my sstate_cache as a mirror in a
private S3 bucket
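For reference, pointing a build at an HTTP-accessible sstate mirror is done with SSTATE_MIRRORS in local.conf; a sketch assuming a public-read bucket (the bucket name and prefix are hypothetical):

```
# PATH is substituted by BitBake with the sstate object's relative path.
SSTATE_MIRRORS = "file://.* https://my-sstate-bucket.s3.amazonaws.com/sstate-cache/PATH;downloadfilename=PATH"
```

Note this only covers fetching; populating the bucket (e.g. syncing the local sstate-cache directory up to S3) has to happen separately.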
Have you done any Wireshark analysis on the traffic? My guess is that the
round trip with network latency is bumping your build time by a factor of
at least 100x. The sstate-cache is hammered on continuously, so you have
probably introduced a significant bottleneck.
..Ch:W..