The following changes since commit 53343338a6e7b83777b82803398572b40afc8c0f:
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
(2016-04-22 16:17:12 +0100)
are available in the git repository at:
git://github.com/dgibson/qemu.git tags/ppc-for-2.6-20160423
for you t
From: Thomas Huth
QEMU currently crashes when using bad parameters for the
spapr-pci-host-bridge device:
$ qemu-system-ppc64 -device
spapr-pci-host-bridge,buid=0x123,liobn=0x321,mem_win_addr=0x1,io_win_addr=0x10
Segmentation fault
The problem is that spapr_tce_find_by_liobn() might return NULL
I've just changed the last patch to a new one that uses loop iteration instead
of recursion. https://lists.gnu.org/archive/html/qemu-block/2016-04/msg00584.html
At first, the head of the chain has its status checked by bdrv_get_block_status()
in "convert_iteration_sectors".
If this status is not BDRV_BLOCK_D
When converting images, check the block status of its backing file chain
to avoid needlessly reading zeros.
Signed-off-by: Ren Kimura
---
qemu-img.c | 31 +--
1 file changed, 29 insertions(+), 2 deletions(-)
diff --git a/qemu-img.c b/qemu-img.c
index 06264d9..b771227
Since this cannot automatically recover from a crashed QEMU client with an
RBD image, perhaps this RBD locking should not default to enabled.
Additionally, this will conflict with the "exclusive-lock" feature
available since the Ceph Hammer release, since both utilize the same locking
construct.
As
On 2016/4/22 20:25, Andrew Jones wrote:
> On Thu, Apr 21, 2016 at 02:23:50PM +0800, Shannon Zhao wrote:
>> > From: Shannon Zhao
>> >
>> > This /distance-map node is used to describe the accessing distance
>> > between NUMA nodes.
>> >
>> > Signed-off-by: Shannon Zhao
>> > ---
>> > hw/arm/vir
On 2016/4/22 20:48, Andrew Jones wrote:
> On Thu, Apr 21, 2016 at 02:23:52PM +0800, Shannon Zhao wrote:
>> From: Shannon Zhao
>>
>> When specifying NUMA for an ARM machine, generate the /memory node according
>> to the NUMA topology.
>>
>> Signed-off-by: Shannon Zhao
>> ---
>> hw/arm/boot.c | 31
On 2016/4/22 21:26, Andrew Jones wrote:
>> +core->flags = cpu_to_le32(1);
>> > +}
>> > +g_free(cpu_node);
>> > +
>> > +mem_base = guest_info->memmap[VIRT_MEM].base;
>> > +for (i = 0; i < nb_numa_nodes; ++i) {
>> > +mem_len = numa_info[i].node_mem;
>> > +num
On 2016/4/22 21:41, Andrew Jones wrote:
> On Fri, Mar 25, 2016 at 05:46:19PM +0800, Shannon Zhao wrote:
>> > From: Shannon Zhao
>> >
>> > Check if kvm supports guest PMUv3. If so, set the corresponding feature
>> > bit for vcpu.
>> >
>> > Signed-off-by: Shannon Zhao
>> > ---
>> > linux-heade
On 2016/4/22 22:32, Andrew Jones wrote:
> On Fri, Mar 25, 2016 at 05:46:20PM +0800, Shannon Zhao wrote:
>> From: Shannon Zhao
>>
>> Add a virtual PMU device for the virt machine, using PPI 7 as the PMU
>> overflow interrupt number.
>>
>> Signed-off-by: Shannon Zhao
>> ---
>> hw/arm/virt.c
Upstream NBD protocol recently added the ability to efficiently
write zeroes without having to send the zeroes over the wire,
along with a flag to control whether the client wants a hole.
The generic block code takes care of falling back to the obvious
write of lots of zeroes if we return -ENOTSUP
NBD_OPT_EXPORT_NAME is lousy: it doesn't have any sane error
reporting. Upstream NBD recently added NBD_OPT_GO as the
improved version of the option that does what we want: it
reports sane errors on failures (including when a server
requires TLS but does not have NBD_OPT_GO!), and on success
it pr
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Pass any received sizes on to the block layer.
Use the minimum block size as the sector size
NBD commit 6d34500b clarified how clients and servers are supposed
to behave before closing a connection. It added NBD_REP_ERR_SHUTDOWN
(for the server to announce it is about to go away during option
haggling, so the client should quit sending NBD_OPT_* other than
NBD_OPT_ABORT) and ESHUTDOWN (for
The NBD protocol would like to advertise the optimal I/O
size to the client; but it would be a layering violation to
peek into blk_bs(blk)->bl, when we only have a BB.
I just copied the existing blk_get_max_transfer_length() in
reading a value from the top BDS; I have no idea if
bdrv_refresh_limit
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Thanks to a recent fix, our minimum transfer size is always
1 (the block layer takes care of r
The NBD Protocol allows the server and client to mutually agree
on a shorter handshake (omit the 124 bytes of reserved 0), via
the server advertising NBD_FLAG_NO_ZEROES and the client
acknowledging with NBD_FLAG_C_NO_ZEROES (only possible in
newstyle, whether or not it is fixed newstyle). It doesn
Rather than open-coding each option request, it's easier to
have common helper functions do the work. That in turn requires
having convenient packed types for handling option requests
and replies.
Signed-off-by: Eric Blake
Reviewed-by: Alex Bligh
---
v3: rebase, tweak a debug message
---
incl
The NBD Protocol is introducing some additional information
about exports, such as minimum request size and alignment, as
well as an advertised maximum request size. It will be easier
to feed this information back to the block layer if we gather
all the information into a struct, rather than addin
Since we know that the maximum name we are willing to accept
is small enough to stack-allocate, rework the iteration over
NBD_OPT_LIST responses to reuse a stack buffer rather than
allocating every time. Furthermore, we don't even have to
allocate if we know the server's length doesn't match what
Declare a constant and use that when determining if an export
name fits within the constraints we are willing to support.
Note that upstream NBD recently documented that clients MUST
support export names of 256 bytes (not including trailing NUL),
and SHOULD support names up to 4096 bytes. 4096 is
The server has a nice helper function nbd_negotiate_drop_sync()
which lets it easily ignore fluff from the client (such as the
payload to an unknown option request). We can't quite make it
common, since it depends on nbd_negotiate_read() which handles
coroutine magic, but we can copy the idea into
NBD_OPT_EXPORT_NAME is lousy: it requires us to close the connection
rather than report an error. Upstream NBD recently added NBD_OPT_GO
as the improved version of the option that does what we want, along
with NBD_OPT_INFO that returns the same information but does not
transition to transmission p
Rather than always flushing ourselves, let the block layer
forward the FUA on to the underlying device - where all
layers understand FUA, we are now more efficient; and where
the underlying layer doesn't understand it, now the block
layer takes care of the full flush fallback on our behalf.
Signed
On Fri, Apr 22, 2016 at 12:59:52 -0700, Richard Henderson wrote:
> FWIW, so that I could get an idea of how the stats change as we improve the
> hashing, I inserted the attachment 1 patch between patches 5 and 6, and with
> attachment 2 attempting to fix the accounting for patches 9 and 10.
For qh
Make it easier to test block drivers with BDRV_REQ_FUA in
.supported_write_flags, by adding a flag to qemu-io to
conditionally pass the flag through to specific writes. You'll
want to use 'qemu-io -t none' to actually make -f useful (as
otherwise, the default writethrough mode automatically sets
t
Upstream NBD protocol recently added the ability to efficiently
write zeroes without having to send the zeroes over the wire,
along with a flag to control whether the client wants a hole.
Signed-off-by: Eric Blake
---
v3: abandon NBD_CMD_CLOSE extension, rebase to use blk_pwrite_zeroes
---
incl
Rather than open-coding NBD_REP_SERVER, reuse the code we
already have by adding a length parameter. Additionally,
the refactoring will make adding NBD_OPT_GO in a later patch
easier.
Signed-off-by: Eric Blake
---
v3: rebase to changes earlier in series
---
nbd/server.c | 48 ++
Current upstream NBD documents that requests have a 16-bit flags field,
followed by a 16-bit type integer; although older versions mentioned
only a 32-bit field with masking to find flags. Since the protocol
is in network order (big-endian over the wire), the ABI is unchanged;
but dealing with the flags
Make it easier to control whether the BDRV_REQ_MAY_UNMAP flag
can be passed through a write_zeroes command, by adding a flag
to qemu-io. To be useful, the device has to be opened with
'qemu-io -d unmap' (or the just-added 'open -u' subcommand).
Signed-off-by: Eric Blake
---
qemu-io-cmds.c | 24
Add some debugging to flag servers that are not compliant to
the NBD protocol. This would have flagged the server bug
fixed in commit c0301fcc.
Signed-off-by: Eric Blake
Reviewed-by: Alex Bligh
---
v3: later in series, but no change
---
nbd/client.c | 4 +++-
1 file changed, 3 insertions(+),
Commit 499afa2 added --image-opts, but forgot to document it in
--help. Likewise for commit 9e8f183 and -d/--discard.
Finally, commit 10d9d75 removed -g/--growable, but forgot to
cull it from the valid short options.
Signed-off-by: Eric Blake
---
qemu-io.c | 4 +++-
1 file changed, 3 insertion
blk_write() and blk_read() are now very simple wrappers around
blk_pwrite() and blk_pread(). There's no reason to require
the user to pass in aligned numbers. Keep 'read -p' and
'write -p' so that I don't have to hunt down and update all
users of qemu-io, but make the default 'read' and 'write' n
Sector-based blk_write() should die; convert the one-off
variant blk_write_zeroes().
Signed-off-by: Eric Blake
---
include/sysemu/block-backend.h | 4 ++--
block/block-backend.c | 8
block/parallels.c | 3 ++-
qemu-img.c | 3 ++-
4 files changed
When opening a file from the command line, qemu-io defaults
to BDRV_O_UNMAP but allows -d to give full control to disable
unmaps. But when opening via the 'open' command, qemu-io did
not set BDRV_O_UNMAP, and had no way to allow it.
Make it at least possible to symmetrically test things:
'qemu-io
Now that there are no remaining clients, we can drop these
functions, to ensure that all future users get the byte-based
interfaces. Sadly, there are still remaining sector-based
interfaces, such as blk_aio_writev; those will have to wait
for another day.
Signed-off-by: Eric Blake
---
include/s
We have several block drivers that understand BDRV_REQ_FUA,
and emulate it in the block layer for the rest by a full flush.
But without a way to actually request BDRV_REQ_FUA during a
pass-through blk_pwrite(), FUA-aware block drivers like NBD are
forced to repeat the emulation logic of a full flus
Sector-based blk_read() should die; convert the one-off
variant blk_read_unthrottled().
Signed-off-by: Eric Blake
---
include/sysemu/block-backend.h | 4 ++--
block/block-backend.c | 8
hw/block/hd-geometry.c | 2 +-
3 files changed, 7 insertions(+), 7 deletions(-)
dif
Sector-based blk_write() should die; switch to byte-based
blk_pwrite() instead. Likewise for blk_read().
Signed-off-by: Eric Blake
---
qemu-img.c | 28 +++-
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/qemu-img.c b/qemu-img.c
index 1697762..2e4646e 1006
Sector-based blk_write() should die; switch to byte-based
blk_pwrite() instead. Likewise for blk_read().
Greatly simplifies the code, now that we let the block layer
take care of alignment and read-modify-write on our behalf :)
Signed-off-by: Eric Blake
---
hw/sd/sd.c | 46 +++-
Sector-based blk_write() should die; switch to byte-based
blk_pwrite() instead. Likewise for blk_read().
Signed-off-by: Eric Blake
---
hw/block/pflash_cfi01.c | 12 ++--
hw/block/pflash_cfi02.c | 12 ++--
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/hw/block/
Sector-based blk_read() should die; switch to byte-based
blk_pread() instead.
Signed-off-by: Eric Blake
---
hw/ide/atapi.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/hw/ide/atapi.c b/hw/ide/atapi.c
index 2bb606c..81000d8 100644
--- a/hw/ide/atapi.c
+++ b/hw/ide/a
Sector-based blk_read() should die; switch to byte-based
blk_pread() instead.
Signed-off-by: Eric Blake
---
qemu-nbd.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/qemu-nbd.c b/qemu-nbd.c
index a85e98f..01eb7e4 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -160,1
Sector-based blk_write() should die; switch to byte-based
blk_pwrite() instead. Likewise for blk_read().
This file is doing some complex computations to map various
flash page sizes (256, 512, and 2048) atop generic uses of
512-byte sector operations. Perhaps someone will want to tidy
up the fil
Sector-based blk_write() should die; switch to byte-based
blk_pwrite() instead. Likewise for blk_read().
Signed-off-by: Eric Blake
---
hw/block/onenand.c | 36 ++--
1 file changed, 22 insertions(+), 14 deletions(-)
diff --git a/hw/block/onenand.c b/hw/block/onen
Sector-based blk_write() should die; switch to byte-based
blk_pwrite() instead. Likewise for blk_read().
Signed-off-by: Eric Blake
---
hw/block/fdc.c | 25 +
1 file changed, 17 insertions(+), 8 deletions(-)
diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index 3722275..f73
The NBD protocol allows servers to advertise a human-readable
description alongside an export name during NBD_OPT_LIST. Add
an option to pass through the user's string to the NBD client.
Doing this also makes it easier to test commit 200650d4, which
is the client counterpart of receiving the desc
The kernel ioctl() interface into NBD is limited to 'unsigned long';
we MUST pass in input with that type (and not int or size_t, as
there may be platform ABIs where the wrong types promote incorrectly
through var-args). Furthermore, on 32-bit platforms, the kernel
is limited to a maximum export s
This series is for qemu 2.7, and is a bit more stable this
time (upstream NBD extensions have been reaching some consensus
based on feedback I've made while implementing this series).
Included are some interoperability bug fixes, code cleanups, then
added support both client-side and server-side f
Sector-based blk_read() should die; switch to byte-based
blk_pread() instead.
Signed-off-by: Eric Blake
---
hw/block/m25p80.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/hw/block/m25p80.c b/hw/block/m25p80.c
index 906b712..01c51a2 100644
--- a/hw/block/m25p80.c
+++ b/hw
NBD ioctl()s are used to manage an NBD client session where
initial handshake is done in userspace, but then the transmission
phase is handed off to the kernel through a /dev/nbdX device.
As such, all ioctls sent to the kernel on the /dev/nbdX fd belong
in client.c; nbd_disconnect() was out-of-plac
Clean up some debug message oddities missed earlier; this includes
both typos, and recognizing that %d is not necessarily compatible
with uint32_t.
Signed-off-by: Eric Blake
Reviewed-by: Alex Bligh
---
v3: rebase
---
nbd/client.c | 41 ++---
nbd/server.c | 4
We have a few bugs in how we handle invalid client commands:
- A client can send an NBD_CMD_DISC where from + len overflows,
convincing us to reply with an error and stay connected, even
though the protocol requires us to silently disconnect. Fix by
hoisting the special case sooner.
- A client ca
We should never ignore failure from nbd_negotiate_send_rep(); if
we are unable to write to the client, then it is not worth trying
to continue the negotiation. Fortunately, the problem is not
too severe - chances are that the errors being ignored here (mainly
inability to write the reply to the cl
Rather than asserting that nbdflags is within range, just give
it the correct type to begin with :) nbdflags corresponds to
the per-export portion of NBD Protocol "transmission flags", which
is 16 bits in response to NBD_OPT_EXPORT_NAME and NBD_OPT_GO.
Furthermore, upstream NBD has never passed t
The NBD protocol says that clients should not send a command flag
that has not been negotiated (whether by the client requesting an
option during a handshake, or because we advertise support for the
flag in response to NBD_OPT_EXPORT_NAME), and that servers should
reject invalid flags with EINVAL.
On 04/22/2016 10:41 AM, Richard Henderson wrote:
> On 04/19/2016 04:07 PM, Emilio G. Cota wrote:
>> +ht_avg_len = qht_avg_bucket_chain_length(&tcg_ctx.tb_ctx.htable,
>> &ht_heads);
>> +cpu_fprintf(f, "TB hash avg chain %0.5f buckets\n", ht_avg_len);
>> +cpu_fprintf(f, "TB hash size
On Fri, Apr 22, 2016 at 10:41:25 -0700, Richard Henderson wrote:
> On 04/19/2016 04:07 PM, Emilio G. Cota wrote:
> > +ht_avg_len = qht_avg_bucket_chain_length(&tcg_ctx.tb_ctx.htable,
> > &ht_heads);
> > +cpu_fprintf(f, "TB hash avg chain %0.5f buckets\n", ht_avg_len);
> > +cpu_fprint
Running master as of this morning 4/22 and I'm not getting any more
crashes, and I'm flat beating on it. RC3 still crashes on me, so
whatever the fix is, came after rc3.
Hello,
I have a question about ARM PC-relative load instructions in softmmu
execution, and how the PC is constant-folded at JIT compilation time
into a TB.
I have observed in translate.c the following code:
/* Set a variable to the value of a CPU register. */
static void load_reg_var(DisasC
On 2016-04-22 20:00, Sergey Fedorov wrote:
> On 22/04/16 19:51, Aurelien Jarno wrote:
> > On 2016-04-22 18:47, Aurelien Jarno wrote:
> >> On 2016-04-22 19:08, Sergey Fedorov wrote:
> >>> From: Sergey Fedorov
> >>>
> >>> Ensure direct jump patching in MIPS is atomic by using
> >>> atomic_read()/ato
On 04/21/2016 05:06 PM, Emilio G. Cota wrote:
> #ifdef USE_STATIC_CODE_GEN_BUFFER
> -static uint8_t static_code_gen_buffer[DEFAULT_CODE_GEN_BUFFER_SIZE]
> +static uint8_t static_code_gen_buffer1[DEFAULT_CODE_GEN_BUFFER_SIZE]
> __attribute__((aligned(CODE_GEN_ALIGN)));
> +static uint8_t static
Le 22 avr. 2016 à 13:44, Andrew Baumann a écrit :
> You got further on this than I did. I considered a couple of options, of
> varying complexity/compatibility:
>
> 0. The status quo: we support -kernel (for Linux images) and -bios (e.g.
> Windows), but otherwise all the options that can be set
bdrv_move_feature_fields() and swap_feature_fields() are empty now, they
can be removed.
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
---
block.c | 30 --
1 file changed, 30 deletions(-)
diff --git a/block.c b/block.c
index 22703ba..32527dc 100644
--- a/block.c
Checking whether there are throttled requests requires going to the
associated BlockBackend, which we want to avoid. All users of
bdrv_requests_pending() already call bdrv_parent_drained_begin() first,
which restarts all throttled requests, so no throttled requests can be
left here and this is remo
This reverts commit 76b223200ef4fb09dd87f0e213159795eb68e7a5.
Now that I/O throttling is fully done on the BlockBackend level, there
is no reason any more to block I/O throttling for nodes with multiple
parents as the parents don't influence each other any more.
Conflicts:
block.c
Signed
Signed-off-by: Kevin Wolf
---
block.c | 2 +-
block/block-backend.c | 43 +++--
block/io.c | 41 ---
block/qapi.c| 2 +-
block/throttle-groups.c
As a first step towards moving I/O throttling to the BlockBackend level,
this patch changes all pointers in struct ThrottleGroup from referencing
a BlockDriverState to referencing a BlockBackend.
This change is valid because we made sure that throttling can only be
enabled on BDSes which have a BB
This moves the throttling related part of the BDS life cycle management
to BlockBackend. The throttling group reference is now kept even when no
medium is inserted.
With this commit, throttling isn't disabled and then re-enabled any more
during graph reconfiguration. This fixes the temporary break
This removes the last part of I/O throttling from block/io.c and moves
it to the BlockBackend.
Instead of having knowledge about throttling inside io.c, we can call a
BdrvChild callback .drained_begin/end, which happens to drain the
throttled requests for BlockBackend parents.
Signed-off-by: Kevi
It was already true in principle that a throttled BDS always has a BB
attached, except that the order of operations while attaching or
detaching a BDS to/from a BB wasn't careful enough.
This commit breaks graph manipulations while I/O throttling is enabled.
It would have been possible to keep thi
BlockBackends use it to get a back pointer from BdrvChild to
BlockBackend in any BdrvChildRole callbacks.
Signed-off-by: Kevin Wolf
---
block/block-backend.c | 2 ++
include/block/block_int.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/block/block-backend.c b/block/block-backend.c
Signed-off-by: Kevin Wolf
---
block/block-backend.c | 10 ++
block/io.c | 10 --
block/throttle-groups.c | 5 ++---
include/block/throttle-groups.h | 2 +-
4 files changed, 13 insertions(+), 14 deletions(-)
diff --git a/block/block-backend
This patch changes where the throttling state is stored (used to be the
BlockDriverState, now it is the BlockBackend), but it doesn't actually
make it a BB level feature yet. For example, throttling is still
disabled when the BDS is detached from the BB.
Signed-off-by: Kevin Wolf
---
block.c
Some features, like I/O throttling, are implemented outside
block-backend.c, but still want to keep information in BlockBackend,
e.g. list entries that allow keeping a list of BlockBackends.
In order to avoid exposing the whole struct layout in the public header
file, this patch introduces an embe
Signed-off-by: Kevin Wolf
---
block/block-backend.c | 2 +-
block/io.c | 2 +-
block/qapi.c| 2 +-
block/throttle-groups.c | 12 ++--
include/block/throttle-groups.h | 2 +-
tests/test-throttle.c | 4 ++--
6 files c
This is another feature that was "logically" part of the BlockBackend, but
implemented as a BlockDriverState feature. It was always kept on top using
swap_feature_fields().
This series moves it to be actually implemented in the BlockBackend, removing
another obstacle for removing bs->blk and allow
On 04/19/2016 04:07 PM, Emilio G. Cota wrote:
> +ht_avg_len = qht_avg_bucket_chain_length(&tcg_ctx.tb_ctx.htable,
> &ht_heads);
> +cpu_fprintf(f, "TB hash avg chain %0.5f buckets\n", ht_avg_len);
> +cpu_fprintf(f, "TB hash size%zu head buckets\n", ht_heads);
I think the acc
On 22/04/16 19:51, Aurelien Jarno wrote:
> On 2016-04-22 18:47, Aurelien Jarno wrote:
>> On 2016-04-22 19:08, Sergey Fedorov wrote:
>>> From: Sergey Fedorov
>>>
>>> Ensure direct jump patching in MIPS is atomic by using
>>> atomic_read()/atomic_set() for code patching.
>>>
>>> Signed-off-by: Serge
On 22/04/16 19:47, Aurelien Jarno wrote:
> On 2016-04-22 19:08, Sergey Fedorov wrote:
>> From: Sergey Fedorov
>>
>> Ensure direct jump patching in MIPS is atomic by using
>> atomic_read()/atomic_set() for code patching.
>>
>> Signed-off-by: Sergey Fedorov
>> Signed-off-by: Sergey Fedorov
>> ---
On 04/21/2016 09:28 PM, David Gibson wrote:
> On Thu, Apr 21, 2016 at 10:22:10AM -0700, Jianjun Duan wrote:
>>
>>
>> On 04/19/2016 10:14 PM, David Gibson wrote:
>>> On Fri, Apr 15, 2016 at 01:33:04PM -0700, Jianjun Duan wrote:
ccs_list in spapr state maintains the device tree related
in
On Fri, Apr 22, 2016 at 4:44 AM, Andrew Baumann
wrote:
> Hi all,
>
>> From: Peter Crosthwaite [mailto:crosthwaitepe...@gmail.com]
>> Sent: Friday, 22 April 2016 09:18
>>
>> On Thu, Apr 21, 2016 at 9:06 AM, Stephen Warren
>> wrote:
>> > On 04/21/2016 08:07 AM, Sylvain Garrigues wrote:
>> >>
>> >>
On 2016-04-22 18:47, Aurelien Jarno wrote:
> On 2016-04-22 19:08, Sergey Fedorov wrote:
> > From: Sergey Fedorov
> >
> > Ensure direct jump patching in MIPS is atomic by using
> > atomic_read()/atomic_set() for code patching.
> >
> > Signed-off-by: Sergey Fedorov
> > Signed-off-by: Sergey Fedor
On Fri, Apr 22, 2016 at 12:46 AM, Gerd Hoffmann wrote:
> Hi,
>
>> > Ideally as was mentioned earlier this would be done by simply executing the
>> > existing bootloader under emulation, rather than building all that code
>> > into
>> > qemu. However, in the Pi case, the bootloader runs on the V
On 2016-04-22 19:08, Sergey Fedorov wrote:
> From: Sergey Fedorov
>
> Ensure direct jump patching in MIPS is atomic by using
> atomic_read()/atomic_set() for code patching.
>
> Signed-off-by: Sergey Fedorov
> Signed-off-by: Sergey Fedorov
> ---
>
> Changes in v2:
> * s/atomic_write/atomic_se
On 04/21/2016 09:25 PM, David Gibson wrote:
> On Thu, Apr 21, 2016 at 10:03:56AM -0700, Jianjun Duan wrote:
>>
>>
>> On 04/19/2016 09:32 PM, David Gibson wrote:
>>> On Fri, Apr 15, 2016 at 01:33:02PM -0700, Jianjun Duan wrote:
To manage hotplug/unplug of dynamic resources such as PCI cards,
From: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---
Changes in v2:
* Minor rewording
include/exec/exec-all.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 6a054ee720a8..6
From: Sergey Fedorov
Ensure direct jump patching in ARM is atomic by using
atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
Changes in v2:
* Add tcg_debug_assert() to check offset
* Use deposit32() for instruction patching
incl
From: Sergey Fedorov
Ensure direct jump patching in MIPS is atomic by using
atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
Changes in v2:
* s/atomic_write/atomic_set/
tcg/mips/tcg-target.inc.c | 3 ++-
1 file changed, 2 inserti
From: Sergey Fedorov
Ensure direct jump patching in s390 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
Changes in v2:
* Use QEMU_PTR_IS_ALIGNED()
From: Sergey Fedorov
Ensure direct jump patching in SPARC is atomic by using
atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewed-by: Alex Bennée
---
Changes in v2:
* Use deposit32() to put displacement into call instruction
tc
From: Sergey Fedorov
Ensure direct jump patching in PPC is atomic by:
* limiting translation buffer size in 32-bit mode to be addressable by
Branch I-form instruction;
* using atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Reviewe
From: Sergey Fedorov
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
include/qemu/osdep.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index 408783f532e6..e3bc50b61359 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osd
From: Sergey Fedorov
Ensure direct jump patching in AArch64 is atomic by using
atomic_read()/atomic_set() for code patching.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
Changes in v2:
* Use tcg_debug_assert() instead of assert()
tcg/aarch64/tcg-target.inc.c | 14 +++
From: Sergey Fedorov
Ensure direct jump patching in i386 is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() for code patching.
tcg_out_nopn() implementation:
Suggested-by: Richard Henderson .
Signed-off-by: Sergey Fedorov
Signed-off-by: S
From: Sergey Fedorov
When patching translated code for direct block chaining/unchaining,
modification of concurrently executing code can happen in multi-threaded
execution. Currently only user-mode is affected. To make direct block patching
safe, some care must be taken to make sure that the cod
From: Sergey Fedorov
Ensure direct jump patching in TCI is atomic by:
* naturally aligning a location of direct jump address;
* using atomic_read()/atomic_set() to load/store the address.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
Changes in v2:
* Use QEMU_ALIGN_PTR_UP
From: Sergey Fedorov
These macros provide a convenient way to n-byte align pointers up and
down and check if a pointer is n-byte aligned.
Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
include/qemu/osdep.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/inclu
Sure, that will be great! How about give me a few days so I can cleanup the
code and provide a better workaround as patch for you all to review?
Regards,
Tianyou
-Original Message-
From: Artyom Tarasenko [mailto:atar4q...@gmail.com]
Sent: Friday, April 22, 2016 11:50 PM
To: Li, Tianyou
On 22 April 2016 at 16:05, Kevin Wolf wrote:
> The following changes since commit ee1e0f8e5d3682c561edcdceccff72b9d9b16d8b:
>
> util: align memory allocations to 2M on AArch64 (2016-04-22 12:26:01 +0100)
>
> are available in the git repository at:
>
> git://repo.or.cz/qemu/kevin.git tags/for-u