On 07/09/2015 02:06 PM, Paolo Bonzini wrote:
On 09/07/2015 10:17, Richard Henderson wrote:
@@ -405,7 +405,7 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
},
[FEAT_XSAVE] = {
.feat_names = cpuid_xsave_feature_name,
-.cpuid_eax = 0xd,
+.cpui
On Fri, 07/10 08:54, Alexandre DERUMIER wrote:
> >>Thinking about this again, I doubt
> >>that lengthening the duration with a hardcoded value benefits everyone; and
> >>before Alexandre's reply on old server/slow disks
>
> With 1ms sleep, I can reproduce the hang 100% with a fast cpu (xeon e5 v3
On 07/09/2015 02:15 PM, Paolo Bonzini wrote:
On 09/07/2015 10:17, Richard Henderson wrote:
+
+/* ??? This variable is somewhat silly. Methinks KVM should be
+ using XCR0 to store into the XSTATE_BV field. Either that or
+ there's more missing information, e.g. the AVX bits. */
On 07/09/2015 02:18 PM, Paolo Bonzini wrote:
On 09/07/2015 10:17, Richard Henderson wrote:
+void cpu_sync_bndcs_hf(CPUX86State *env)
s/hf/hflags/ :)
Heh. Done.
Why aren't you just using a goto, like
if (ret < 0) {
goto out;
}
ret = 0;
out:
cpu_sync_bndcs_h
The purpose of this new bitmap is to flag the memory pages that are in
the middle of LL/SC operations (after an LL, before an SC).
For all these pages, the corresponding TLB entries will be generated
in such a way as to force the slow path.
When the system starts, the whole memory is dirty (all the bitm
Add a new flag for the TLB entries to force all the accesses made to a
page to follow the slow-path.
In the case we remove a TLB entry marked as EXCL, we unset the
corresponding exclusive bit in the bitmap.
Mark the accessed page as dirty to invalidate any pending operation of
LL/SC only if a vCP
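The interplay between the page bitmap and the slow-path-forcing TLB flag can be sketched as follows. This is a minimal single-threaded model with hypothetical names (excl_dirty, excl_start, and so on), not the series' actual data structures:

```c
#include <stdbool.h>

/* Toy model of the exclusive bitmap; all names are hypothetical.
 * One flag per page; "dirty" means no LL/SC sequence is pending on it,
 * matching the description above: at startup the whole memory is dirty. */
#define EXCL_PAGE_BITS 12
#define EXCL_NUM_PAGES 64

static bool excl_dirty[EXCL_NUM_PAGES];

static void excl_init(void)
{
    for (int i = 0; i < EXCL_NUM_PAGES; i++) {
        excl_dirty[i] = true;                 /* whole memory starts dirty */
    }
}

/* LoadLink: the page enters an LL/SC sequence, so TLB entries for it
 * must be generated with the slow-path-forcing (EXCL) flag. */
static void excl_start(unsigned long addr)
{
    excl_dirty[addr >> EXCL_PAGE_BITS] = false;
}

/* Any store to the page marks it dirty again, invalidating pending
 * LL/SC operations on it. */
static void excl_mark_dirty(unsigned long addr)
{
    excl_dirty[addr >> EXCL_PAGE_BITS] = true;
}

/* Queried at TLB-fill time: must this page's entries force the slow path? */
static bool excl_page_needs_slow_path(unsigned long addr)
{
    return !excl_dirty[addr >> EXCL_PAGE_BITS];
}
```

The real series keeps this state per RAM block and consults it from cputlb.c; the sketch only shows the dirty/exclusive bookkeeping.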
The new helpers rely on the legacy ones to perform the actual read/write.
The StoreConditional helper (helper_le_stcond_name) returns 1 if the
store has to fail due to a concurrent access to the same page by
another vCPU. A 'concurrent access' can be a store made by *any* vCPU
(although, some imp
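A greatly simplified model of the LoadLink/StoreConditional contract described above; every name here (ToyCPU, helper_stcond, toy_mem) is a hypothetical stand-in, and the real helpers operate through the softmmu TLB rather than a toy memory array:

```c
#include <stdint.h>

/* Each vCPU remembers the address covered by its last LoadLink; a store
 * by *any* vCPU to that address breaks the reservation, and the
 * StoreConditional then returns 1 to signal failure, as described above. */
#define EXCL_RESET ((uint64_t)-1)

typedef struct {
    uint64_t excl_protected_addr;   /* address of a pending LL, if any */
} ToyCPU;

static uint64_t toy_mem[64];        /* stand-in for guest memory */

static uint64_t helper_ldlink(ToyCPU *cpu, uint64_t addr)
{
    cpu->excl_protected_addr = addr;          /* set the reservation */
    return toy_mem[addr];
}

/* A plain store by any vCPU invalidates reservations on that address. */
static void helper_st(ToyCPU *cpus, int ncpus, uint64_t addr, uint64_t val)
{
    for (int i = 0; i < ncpus; i++) {
        if (cpus[i].excl_protected_addr == addr) {
            cpus[i].excl_protected_addr = EXCL_RESET;
        }
    }
    toy_mem[addr] = val;
}

/* Returns 1 if the store has to fail, 0 on success. */
static int helper_stcond(ToyCPU *cpu, uint64_t addr, uint64_t val)
{
    if (cpu->excl_protected_addr != addr) {
        return 1;                             /* reservation lost */
    }
    toy_mem[addr] = val;
    cpu->excl_protected_addr = EXCL_RESET;    /* reservation consumed */
    return 0;
}
```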
In order to perform "lazy" TLB invalidation requests, introduce a
queue of callbacks at every vCPU's disposal that will be fired just
before entering the next TB.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cpus.c| 34 ++
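Such a per-vCPU queue of deferred callbacks, drained just before the vCPU enters its next TB, could look roughly like this; the type and function names are illustrative, not the series' actual API:

```c
#include <stdlib.h>

typedef void (*cb_fn)(void *opaque);

typedef struct Cb {
    cb_fn func;
    void *opaque;
    struct Cb *next;
} Cb;

typedef struct {
    Cb *first;                      /* pending callbacks for this vCPU */
} ToyVCPU;

static void queue_cb(ToyVCPU *cpu, cb_fn func, void *opaque)
{
    Cb *cb = malloc(sizeof(*cb));
    cb->func = func;
    cb->opaque = opaque;
    cb->next = cpu->first;          /* LIFO is enough for a sketch */
    cpu->first = cb;
}

/* Called on the vCPU thread right before entering the next TB. */
static void run_queued_cbs(ToyVCPU *cpu)
{
    while (cpu->first) {
        Cb *cb = cpu->first;
        cpu->first = cb->next;
        cb->func(cb->opaque);
        free(cb);
    }
}

/* Example callback: count how many lazy TLB flushes were requested. */
static int flush_count;
static void toy_tlb_flush(void *opaque)
{
    (void)opaque;
    flush_count++;
}
```

In the real patch the queue would also need locking, since other vCPUs enqueue into it; the sketch only shows the deferral mechanism itself.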
Add a new way to query a TLB flush request for a given vCPU using the
new callback support.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
cputlb.c | 6 ++
include/qom/cpu.h | 1 +
2 files changed, 7 insertions(+)
diff --git a/cputlb.c b/
This is the third iteration of the patch series; the changes to move the
whole work to multi-threading start at PATCH 007.
Changes versus previous versions are at the bottom of this cover letter.
This patch series provides an infrastructure for atomic
instruction implementation in Q
Introduce two new variables to synchronize the vCPUs during atomic
operations.
- exit_flush_request allows one vCPU to make an exclusive flush request for all
the running vCPUs
- tcg_excl_access_lock is a mutex that protects all the sensitive
operations concerning atomic instruction emulation.
Implement the strex and ldrex instructions relying on TCG's qemu_ldlink and
qemu_stcond. For the time being only 32bit configurations are supported.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
tcg/i386/tcg-target.c | 136
Update the TCG LL/SC instructions to work in multi-threading.
The basic idea remains untouched, but the whole mechanism is improved to
make use of the callback support to query TLB flush requests and the
rendezvous callback to synchronize all the currently running vCPUs.
In essence, if a vCPU wan
Create a new pair of instructions that implement a LoadLink/StoreConditional
mechanism.
It has not been possible to completely include the two new opcodes
in the plain variants, since the StoreConditional will always require
one more argument to store the success of the operation.
Suggested-by: J
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
include/exec/ram_addr.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 2766541..e51bd65 100644
--- a/include/exec/ram_addr.
Implement the strex and ldrex instructions relying on TCG's qemu_ldlink and
qemu_stcond. For the time being only the 32bit instructions are supported.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
target-arm/translate.c | 87 +
When a vCPU is about to set a memory page as exclusive, it needs to wait
until all the running vCPUs finish executing the current TB, and it needs to
be aware of the exact moment when that happens. For this, add a simple rendezvous
mechanism that will be used in softmmu_llsc_template.h to implement the
ldlin
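The counting logic of such a rendezvous can be sketched as below. The names are hypothetical, and a real implementation needs atomic operations plus a condition variable so the requesting vCPU can sleep; this toy version only shows the bookkeeping:

```c
/* The vCPU that wants to set a page as exclusive starts the rendezvous,
 * and every running vCPU acknowledges once it is out of its current TB. */
typedef struct {
    int pending;                    /* vCPUs still inside a TB */
} Rendezvous;

static void rendezvous_start(Rendezvous *rv, int running_vcpus)
{
    rv->pending = running_vcpus;
}

/* Called by each vCPU as soon as it has left the translated code. */
static void rendezvous_ack(Rendezvous *rv)
{
    rv->pending--;
}

/* The requester may proceed once every vCPU has acknowledged. */
static int rendezvous_done(const Rendezvous *rv)
{
    return rv->pending == 0;
}
```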
Exploiting the tcg_excl_access_lock, port the helper_{le,be}_st_name to
work in real multithreading.
- The macro lookup_cpus_ll_addr now directly uses
env->excl_protected_addr to invalidate other vCPUs' LL/SC operations
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: A
To be clear, for a normal user (e.g. they boot linux, they run some apps,
etc)..., if they use only one core, is it true that they will see no difference
in performance?
For a ‘normal user’ who does use multi-core, are you saying that a typical boot
is slower?
Cheers
Mark.
> On 10 Jul 2015,
John Snow wrote:
> Fedora 21, 4.1.1, qemu 2.4.0-rc0
>> ../../configure --target-list="x86_64-softmmu"
>
> 068 2s ... - output mismatch (see 068.out.bad)
> --- /home/bos/jhuston/src/qemu/tests/qemu-iotests/068.out 2015-07-08
> 17:56:18.588164979 -0400
> +++ 068.out.bad 2015-07-09 17:39:58
On (Tue) 16 Jun 2015 [11:26:17], Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert"
>
> Postcopy sends RAMBlock names and offsets over the wire (since it can't
> rely on the order of ramaddr being the same), and it starts out with
> HVA fault addresses from the kernel.
>
> qemu
On 10/07/2015 10:23, Alvise Rigo wrote:
This is the third iteration of the patch series; starting from PATCH 007
there are the changes to move the whole work to multi-threading.
Changes versus previous versions are at the bottom of this cover letter.
This patch series provides an infrastructure
Hi Mark,
On Fri, Jul 10, 2015 at 10:31 AM, Mark Burton wrote:
>
> To be clear, for a normal user (e.g. they boot linux, they run some apps,
> etc)..., if they use only one core, is it true that they will see no
> difference in performance?
I didn't test the one core scenario, but I expect les
On 9 July 2015 at 21:34, Michael Roth wrote:
> Hello,
>
> On behalf of the QEMU Team, I'd like to announce the availability of the
> first release candidate for the QEMU 2.4 release. This release is meant
> for testing purposes and should not be used in a production environment.
>
> http://wiki.q
On Tue, 07/07 12:19, Michael S. Tsirkin wrote:
> On Tue, Jul 07, 2015 at 05:09:09PM +0800, Fam Zheng wrote:
> > On Tue, 07/07 11:13, Michael S. Tsirkin wrote:
> > > On Tue, Jul 07, 2015 at 09:21:07AM +0800, Fam Zheng wrote:
> > > > Since commit 6e99c63 "net/socket: Drop net_socket_can_send" and fri
On Fri, Jul 10, 2015 at 10:39 AM, Frederic Konrad
wrote:
> On 10/07/2015 10:23, Alvise Rigo wrote:
>>
>> This is the third iteration of the patch series; starting from PATCH 007
>> there are the changes to move the whole work to multi-threading.
>> Changes versus previous versions are at the botto
On Fri, 2015-07-10 at 15:19 +1000, Alexey Kardashevskiy wrote:
>
> > If I'm reading the kernel source[1] correctly, there are actually
> > subtle
> > differences other than the number of cores:
> >
> >#define CPU_FTRS_POWER8 (/* Bunch of features here */)
> >#define CPU_FTRS_POWER8E (CPU_
On Wed, 07/08 17:40, Jason Wang wrote:
>
>
> On 07/07/2015 05:03 PM, Fam Zheng wrote:
> > On Tue, 07/07 15:44, Jason Wang wrote:
> >>
> >> On 07/07/2015 09:21 AM, Fam Zheng wrote:
> >>> Since commit 6e99c63 "net/socket: Drop net_socket_can_send" and friends,
> >>> net queues need to be explicitly
On 10/07/2015 01:51, arei.gong...@huawei.com wrote:
> From: Gonglei
>
> Failing to save or free storage allocated
> by "g_strdup(cmd)" leaks it. Let's use a
> variable to store it.
>
> Signed-off-by: Gonglei
> ---
> vl.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff -
On Thu, 9 Jul 2015 16:22:41 +0200
Eduardo Otubo wrote:
> On Tue, Jun 30, 2015 at 05=56=02PM +0200, Igor Mammedov wrote:
> > On Tue, 30 Jun 2015 15:56:13 +0200
> > Eduardo Otubo wrote:
> >
> > > On Tue, Jun 30, 2015 at 11=18=21AM +0200, Igor Mammedov wrote:
> > > > On Tue, 30 Jun 2015 10:07:52 +
A timer was added in virtio-rng to rate-limit the
entropy. It used to trigger at regular intervals to
bump up the quota value. The values of the quota and timer
slice are decided based on the entropy source rate in the host.
This resulted in the timer triggering even when the quota
is not exhausted at all and re
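The idea of the fix, arming the refill timer only when a request actually exhausts the quota, can be sketched like this; ToyRng and the function names are hypothetical stand-ins for the device state and a QEMUTimer callback:

```c
#include <stdbool.h>

typedef struct {
    long quota_remaining;
    long quota_max;
    bool timer_armed;               /* stands in for timer_mod() */
} ToyRng;

/* Serve an entropy request; arm the timer only on exhaustion. */
static long rng_request(ToyRng *r, long want)
{
    long given = want < r->quota_remaining ? want : r->quota_remaining;
    r->quota_remaining -= given;
    if (r->quota_remaining == 0 && !r->timer_armed) {
        r->timer_armed = true;      /* one shot per exhaustion */
    }
    return given;
}

/* Timer callback: restore the quota and disarm until the next exhaustion. */
static void rng_timer_cb(ToyRng *r)
{
    r->quota_remaining = r->quota_max;
    r->timer_armed = false;
}
```

With this scheme an idle or lightly-loaded guest never fires the timer at all, which is the waste the commit message describes.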
On 10/07/2015 09:24, Richard Henderson wrote:
> On 07/09/2015 02:15 PM, Paolo Bonzini wrote:
>> On 09/07/2015 10:17, Richard Henderson wrote:
>>> +
>>> +/* ??? This variable is somewhat silly. Methinks KVM should be
>>> + using XCR0 to store into the XSTATE_BV field. Either that or
>>
On 10/07/2015 10:23, Alvise Rigo wrote:
> In order to perform "lazy" TLB invalidation requests, introduce a
> queue of callbacks at every vCPU disposal that will be fired just
> before entering the next TB.
>
> Suggested-by: Jani Kokkonen
> Suggested-by: Claudio Fontana
> Signed-off-by: Alvise
I tried to use it, but it would then create a deadlock at a very early
stage of the stress test.
The problem is likely related to the fact that flush_queued_work
happens with the global mutex locked.
As Frederic suggested, we can use the newly introduced
flush_queued_safe_work for this.
Regards,
On 10/07/2015 11:47, alvise rigo wrote:
I tried to use it, but it would then create a deadlock at a very early
stage of the stress test.
The problem is likely related to the fact that flush_queued_work
happens with the global mutex locked.
As Frederic suggested, we can use the newly introduced
QEMU target ISAs contain instructions that can break the execution
flow with exceptions. When an exception breaks the execution of a translation
block, it may corrupt the PC and icount values.
This set of patches fixes exception handling for MIPS, PowerPC, and i386
targets.
Incorrect execution for i38
Now that the cpu_ld/st_* function directly call helper_ret_ld/st, we can
drop the old helper_ld/st functions.
Signed-off-by: Aurelien Jarno
Signed-off-by: Pavel Dovgalyuk
---
include/exec/cpu_ldst.h | 19 ---
softmmu_template.h | 16
2 files changed, 0
This patch introduces several helpers to pass the return address,
which points into the TB. A correct return address allows correct
restoration of the guest PC and icount. These functions should be used when
helpers embedded in a TB invoke memory operations.
Reviewed-by: Aurelien Jarno
Reviewed-by: Richard
This patch improves exception handling in MIPS.
Instructions generate several types of exceptions.
When an exception is generated, it breaks the execution of the current translation
block. The implementation of exception handling does not correctly
restore the icount for the instruction which caused the
This patch introduces a loop exit function, which also
restores the guest CPU state according to the value of the host
program counter.
Reviewed-by: Richard Henderson
Reviewed-by: Aurelien Jarno
Signed-off-by: Pavel Dovgalyuk
---
cpu-exec.c |9 +
include/exec/exec-all.h |1
This patch introduces new versions of raise_exception functions
that receive TB return address as an argument.
Reviewed-by: Aurelien Jarno
Reviewed-by: Richard Henderson
Signed-off-by: Pavel Dovgalyuk
---
target-i386/cpu.h |4
target-i386/excp_helper.c | 30
On 16 June 2015 at 02:51, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias"
>
> Signed-off-by: Edgar E. Iglesias
> ---
> target-arm/cpu-qom.h | 1 +
> target-arm/cpu.c | 2 ++
> target-arm/cpu.h | 3 ++-
> target-arm/helper.c | 68
> ++
This patch fixes exception handling for div instructions
and removes obsolete PC update from translate.c.
Reviewed-by: Richard Henderson
Reviewed-by: Aurelien Jarno
Signed-off-by: Pavel Dovgalyuk
---
target-i386/int_helper.c | 32
target-i386/translate.c |
This patch fixes exception handling for FPU instructions
and removes obsolete PC update from translate.c.
Reviewed-by: Aurelien Jarno
Reviewed-by: Richard Henderson
Signed-off-by: Pavel Dovgalyuk
---
target-i386/fpu_helper.c | 164 +++---
target-i386/t
This patch fixes exception handling for memory helpers
and removes obsolete PC update from translate.c.
Reviewed-by: Richard Henderson
Reviewed-by: Aurelien Jarno
Signed-off-by: Pavel Dovgalyuk
---
target-i386/mem_helper.c | 39 ++-
target-i386/translate.
This patch fixes exception handling for other helper functions.
Signed-off-by: Pavel Dovgalyuk
---
target-i386/cc_helper.c |2 +-
target-i386/misc_helper.c |8
target-i386/ops_sse.h |2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/target-i386/cc_he
This patch fixes exception handling in PowerPC.
Instructions generate several types of exceptions.
When an exception is generated, it breaks the execution of the current translation
block. The implementation of exception handling does not correctly
restore the icount for the instruction which caused the
On Fri, Jul 10, 2015 at 11:53 AM, Frederic Konrad
wrote:
> On 10/07/2015 11:47, alvise rigo wrote:
>>
>> I tried to use it, but it would then create a deadlock at a very early
>> stage of the stress test.
>> The problem is likely related to the fact that flush_queued_work
>> happens with the globa
On Thu, 9 Jul 2015 16:46:43 +0300
"Michael S. Tsirkin" wrote:
> On Thu, Jul 09, 2015 at 03:43:01PM +0200, Paolo Bonzini wrote:
> >
> >
> > On 09/07/2015 15:06, Michael S. Tsirkin wrote:
> > > > QEMU asserts in vhost due to hitting vhost backend limit
> > > > on number of supported memory region
On Sat, May 30, 2015 at 11:11 PM, Peter Crosthwaite
wrote:
> Create a global list of tcg_exec_init functions that is populated at
> startup. Multiple translation engines can register an init function
> and all will be called on the master call to tcg_exec_init.
>
> Introduce a new module, translat
This patch fixes exception handling for seg_helper functions.
Signed-off-by: Pavel Dovgalyuk
---
target-i386/helper.h |4
target-i386/seg_helper.c | 616 --
target-i386/translate.c | 42 +--
3 files changed, 335 insertions(+), 327 deletion
On 10/07/2015 11:47, alvise rigo wrote:
> I tried to use it, but it would then create a deadlock at a very early
> stage of the stress test.
> The problem is likely related to the fact that flush_queued_work
> happens with the global mutex locked.
Let's fix that and move the global mutex inside
>>Does it completely hang?
yes, can't ping network, and vnc console is frozen.
>>What does "perf top" show?
I'll do test today . (I'm going to vacation this night,I'll try to send results
before that)
- Mail original -
De: "Fam Zheng"
À: "aderumier"
Cc: "Stefan Hajnoczi" , "Kevin Wo
On 2015/7/10 17:28, Leon Alrae wrote:
> On 10/07/2015 01:51, arei.gong...@huawei.com wrote:
>> From: Gonglei
>>
>> Failing to save or free storage allocated
>> by "g_strdup(cmd)" leaks it. Let's use a
>> variable to store it.
>>
>> Signed-off-by: Gonglei
>> ---
>> vl.c | 5 -
>> 1 file cha
Here are a few patches to prepare an existing listener for handling memory
preregistration for SPAPR guests running on POWER8.
This used to be a part of DDW patchset but now is separated as requested.
I left versions in changelog of 5/5 for convenience.
Regarding 1/5, there is a question - in reali
These commits started switching from TARGET_PAGE_MASK (hardcoded as 4K) to
the real host page size:
4e51361d7 "cpu-all: complete "real" host page size API" and
f7ceed190 "vfio: cpu: Use "real" page size API"
This finishes the transition by:
- %s/TARGET_PAGE_MASK/qemu_real_host_page_mask/
- %s/TARGET_PAGE_AL
The vfio_memory_listener is registered for the PCI address space. On Type1
IOMMU that falls back to @address_space_memory and the listener is
called on RAM blocks. On sPAPR the IOMMU is guest-visible and the listener
is called on DMA windows. Therefore the Type1 IOMMU only handled RAM regions
and the sPAPR IOMMU o
So far we have managed not to store the IOMMU type anywhere, but
since we are going to implement different behavior for different IOMMU
types in the same memory listener, we need to know the IOMMU type after
initialization.
This adds an IOMMU type into VFIOContainer and initializes it.
Since zero
In some cases PCI BARs are registered as RAM via
memory_region_init_ram_ptr() and the vfio_memory_listener will be called
on them too. However DMA will not be performed to/from these regions so
just skip them.
Signed-off-by: Alexey Kardashevskiy
---
hw/vfio/common.c | 3 ++-
1 file changed, 2 in
This makes use of the new "memory registering" feature. The idea is
to provide the userspace ability to notify the host kernel about pages
which are going to be used for DMA. Having this information, the host
kernel can pin them all once per user process, do locked pages
accounting (once) and not s
Some registers like the CNTVCT register should only be written to the
kernel as part of machine initialization or on vmload operations, but
never during runtime, as this can potentially make time go backwards or
create inconsistent time observations between VCPUs.
Introduce a list of registers tha
Correct computation of vector offsets for EXCP_EXT_INTERRUPT.
For instance, if Cause.IV is 0 the vector offset should be 0x180.
Simplify the logic for finding the vector number for Vectored Interrupts.
Signed-off-by: Yongbok Kim
---
target-mips/helper.c| 47 ++-
As the full specification of the P5600 is available, mips32r5-generic should
be renamed to P5600 and corrected to match its intended target.
Correct the PRid and configuration details.
Features which are not currently supported are marked as FIXME.
Fix the Config.MM bit location.
Signed-off-by: Yongbok Kim
---
target-
On 10 July 2015 at 12:00, Christoffer Dall wrote:
> Some registers like the CNTVCT register should only be written to the
> kernel as part of machine initialization or on vmload operations, but
> never during runtime, as this can potentially make time go backwards or
> create inconsistent time obs
On 10/07/2015 7:58 pm, "Peter Maydell" wrote:
>
> On 16 June 2015 at 02:51, Edgar E. Iglesias
wrote:
> > From: "Edgar E. Iglesias"
> >
> > Signed-off-by: Edgar E. Iglesias
> > ---
> > target-arm/cpu-qom.h | 1 +
> > target-arm/cpu.c | 2 ++
> > target-arm/cpu.h | 3 ++-
> > target-a
On 10 July 2015 at 12:23, Edgar E. Iglesias wrote:
>
> On 10/07/2015 7:58 pm, "Peter Maydell" wrote:
>> Something I just noticed while I was trying to add support
>> for the secure physical timer on top of this series: the
>> gt_*_cnt_reset functions are misnamed, because they're not
>> resetting
On 10/07/2015 9:26 pm, "Peter Maydell" wrote:
>
> On 10 July 2015 at 12:23, Edgar E. Iglesias
wrote:
> >
> > On 10/07/2015 7:58 pm, "Peter Maydell" wrote:
> >> Something I just noticed while I was trying to add support
> >> for the secure physical timer on top of this series: the
> >> gt_*_cnt_r
On 10/07/2015 12:24, Paolo Bonzini wrote:
On 10/07/2015 11:47, alvise rigo wrote:
I tried to use it, but it would then create a deadlock at a very early
stage of the stress test.
The problem is likely related to the fact that flush_queued_work
happens with the global mutex locked.
Let's fix th
Hi again,
I have redone a lot of tests,
with raw on nfs
---
Patch 3/3 fixes my problem (with patch 1/3 and patch 2/3 not applied).
Without patch 3/3, I'm seeing a lot of lseek calls; it can take some minutes,
with the guest hanging.
with patch 3/3, it almost take no time to generate the bitmap, no guest h
In nettle 3, cbc_encrypt() accepts 'nettle_cipher_func' instead of
'nettle_crypt_func', and the two differ in the 'const' qualifier of the
first argument. The build fails with:
In file included from crypto/cipher.c:71:0:
./crypto/cipher-nettle.c: In function ‘qcrypto_cipher_encrypt’:
./crypto/
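The incompatibility lives entirely in the typedefs. A small self-contained illustration, in which toy_encrypt is a hypothetical stand-in for a real cipher function:

```c
#include <stddef.h>
#include <stdint.h>

/* The two callback typedefs, reproduced in nettle 2 and nettle 3 style;
 * they differ only in the const-ness of 'ctx', which is enough to make
 * the function pointer types incompatible in C. */
typedef void nettle_crypt_func(void *ctx, size_t length,
                               uint8_t *dst, const uint8_t *src);
typedef void nettle_cipher_func(const void *ctx, size_t length,
                                uint8_t *dst, const uint8_t *src);

static void toy_encrypt(const void *ctx, size_t length,
                        uint8_t *dst, const uint8_t *src)
{
    (void)ctx;
    for (size_t i = 0; i < length; i++) {
        dst[i] = src[i] ^ 0xAA;     /* not a real cipher, just a stand-in */
    }
}

/* toy_encrypt matches nettle_cipher_func exactly; assigning it to a
 * nettle_crypt_func * would require a cast and draw a diagnostic, which
 * is the mismatch the build failure above complains about. */
```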
On 10/07/2015 02:51, arei.gong...@huawei.com wrote:
> From: Gonglei
>
> Spotted by Coverity.
>
> Gonglei (4):
> cpu: fix memory leak
> ppc/spapr_drc: fix memory leak
> arm/xlnx-zynqmp: fix memory leak
> vl.c: fix memory leak
>
> hw/arm/xlnx-zynqmp.c | 4 ++--
> hw/ppc/spapr_drc.c |
On 10 July 2015 at 13:33, Radim Krčmář wrote:
> In nettle 3, cbc_encrypt() accepts 'nettle_cipher_func' instead of
> 'nettle_crypt_func' and these two differ in 'const' qualifier of the
> first argument. The build fails with:
>
> In file included from crypto/cipher.c:71:0:
> ./crypto/cipher-n
Otherwise, it is not found
Signed-off-by: Juan Quintela
---
vl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/vl.c b/vl.c
index 3f269dc..5856396 100644
--- a/vl.c
+++ b/vl.c
@@ -4615,6 +4615,7 @@ int main(int argc, char **argv, char **envp)
}
qemu_system_reset(V
We use global state in both savevm & migration. The easiest way is to
put the setup in a single place.
Signed-off-by: Juan Quintela
---
migration/migration.c | 30 --
1 file changed, 12 insertions(+), 18 deletions(-)
diff --git a/migration/migration.c b/migration/mi
Hi
Store global state for both savevm & migration in a single place.
Register globalstate save handler before loadvm happens.
Please, review.
Juan Quintela (2):
migration: Register global state section before loadvm
migration: store globalstate in pre_safe
migration/migration.c | 30 ++
On 26/06/2015 11:22, Thibaut Collet wrote:
> Some vhost client/backend are able to support live migration.
> To provide this service the following features must be added:
> 1. Add the VIRTIO_NET_F_GUEST_ANNOUNCE capability to vhost-net when netdev
>backend is vhost-user.
> 2. Provide a nop re
Am 10.07.2015 um 14:58 schrieb Juan Quintela:
> Hi
>
> Store global state for both savevm & migration in a single place.
> Register globalstate save handler before loadvm happens.
>
> Please, review.
>
> Juan Quintela (2):
> migration: Register global state section before loadvm
> migration:
On 09/07/15 04:30, Michael Roth wrote:
Quoting Denis V. Lunev (2015-07-08 17:47:51)
On 09/07/15 01:02, Michael Roth wrote:
Quoting Denis V. Lunev (2015-07-07 03:06:08)
On 07/07/15 04:31, Michael Roth wrote:
Quoting Denis V. Lunev (2015-06-30 05:25:19)
From: Olga Krishtal
Child process' std
2015-07-10 13:56+0100, Peter Maydell:
> On 10 July 2015 at 13:33, Radim Krčmář wrote:
>> @@ -83,8 +87,8 @@ QCryptoCipher *qcrypto_cipher_new(QCryptoCipherAlgorithm
>> alg,
>> -ctx->alg_encrypt = (nettle_crypt_func *)des_encrypt;
>> -ctx->alg_decrypt = (nettle_crypt_func *)des_decr
On 10 July 2015 at 14:31, Radim Krčmář wrote:
> 2015-07-10 13:56+0100, Peter Maydell:
>> On 10 July 2015 at 13:33, Radim Krčmář wrote:
>>> @@ -83,8 +87,8 @@ QCryptoCipher *qcrypto_cipher_new(QCryptoCipherAlgorithm
>>> alg,
>>> -ctx->alg_encrypt = (nettle_crypt_func *)des_encrypt;
>>> -
On 10 July 2015 at 14:38, Peter Maydell wrote:
> On 10 July 2015 at 14:31, Radim Krčmář wrote:
>> We pass 'ctx' as a 'void *' in the code, but these functions accept
>> specialized structures, which makes them incompatible:
>>
>> void nettle_cipher_func(const void *ctx, size_t length, [...])
>>
2015-07-10 14:38+0100, Peter Maydell:
> On 10 July 2015 at 14:31, Radim Krčmář wrote:
>> 2015-07-10 13:56+0100, Peter Maydell:
>>> On 10 July 2015 at 13:33, Radim Krčmář wrote:
@@ -83,8 +87,8 @@ QCryptoCipher *qcrypto_cipher_new(QCryptoCipherAlgorithm
alg,
-ctx->alg_encryp
On 09/07/2015 20:57, Laszlo Ersek wrote:
>> Without EPT, you don't
>> hit the processor limitation with your setup, but the user should
>> nevertheless
>> still be notified.
>
> I disagree.
FWIW, I also disagree (and it looks like Bandan disagrees with himself
now :)).
>> In fact, I think sha
* Juan Quintela (quint...@redhat.com) wrote:
> We use global state in both savevm & migration. The easiest way is to
> put the setup in a single place.
>
> Signed-off-by: Juan Quintela
I don't think this works; I think pre-save is called after the migration
code has changed the runstate, so yo
Public bug reported:
Currently qemu-system-alpha -bios parameter takes an ELF image.
However HP maintains firmware updates for those systems.
Some example rom files can be found here
ftp://ftp.hp.com/pub/alphaserver/firmware/current_platforms/v7.3_release/DS20_DS20e/
It might allow things like u
On 07/10/15 16:13, Paolo Bonzini wrote:
>
>
> On 09/07/2015 20:57, Laszlo Ersek wrote:
>>> Without EPT, you don't
>>> hit the processor limitation with your setup, but the user should
>>> nevertheless
>>> still be notified.
>>
>> I disagree.
>
> FWIW, I also disagree (and it looks like Bandan d
On 10/07/2015 16:57, Laszlo Ersek wrote:
> > > ... In any case, please understand that I'm not campaigning for this
> > > warning :) IIRC the warning was your (very welcome!) idea after I
> > > reported the problem; I'm just trying to ensure that the warning match
> > > the exact issue I encounte
On 07/10/15 16:59, Paolo Bonzini wrote:
>
>
> On 10/07/2015 16:57, Laszlo Ersek wrote:
... In any case, please understand that I'm not campaigning for this
warning :) IIRC the warning was your (very welcome!) idea after I
reported the problem; I'm just trying to ensure that the war
From: KONRAD Frederic
This is the async_safe_work introduction bit of the Multithread TCG work.
Rebased on current upstream (6169b60285fe1ff730d840a49527e721bfb30899).
It can be cloned here:
http://git.greensocs.com/fkonrad/mttcg.git branch async_work
The first patch introduces a mutex to prote
From: KONRAD Frederic
This protects queued_work_* used by async_run_on_cpu, run_on_cpu and
flush_queued_work with a new lock (work_mutex) to prevent multiple (concurrent)
access.
Signed-off-by: KONRAD Frederic
---
cpus.c| 9 +
include/qom/cpu.h | 3 +++
qom/cpu.c |
From: KONRAD Frederic
This flag indicates if the VCPU is currently executing TCG code.
Signed-off-by: KONRAD Frederic
Changes V1 -> V2:
* do both tcg_executing = 0 or 1 in cpu_exec().
---
cpu-exec.c| 2 ++
include/qom/cpu.h | 3 +++
qom/cpu.c | 1 +
3 files changed, 6 insert
On 10/07/2015 17:19, fred.kon...@greensocs.com wrote:
> +qemu_mutex_lock(&cpu->work_mutex);
> while ((wi = cpu->queued_work_first)) {
> cpu->queued_work_first = wi->next;
> wi->func(wi->data);
Please unlock the mutex while calling the callback.
Paolo
> @@ -905,6 +912
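A sketch of the suggested pattern: hold work_mutex only while manipulating the list, and drop it around each callback so the callback may itself queue more work without deadlocking. The types mimic the patch's queued_work list, with pthread mutexes standing in for qemu_mutex_*:

```c
#include <pthread.h>
#include <stddef.h>

struct work_item {
    void (*func)(void *data);
    void *data;
    struct work_item *next;
};

struct toy_cpu {
    pthread_mutex_t work_mutex;
    struct work_item *queued_work_first;
};

static void flush_queued_work(struct toy_cpu *cpu)
{
    struct work_item *wi;

    pthread_mutex_lock(&cpu->work_mutex);
    while ((wi = cpu->queued_work_first)) {
        cpu->queued_work_first = wi->next;
        pthread_mutex_unlock(&cpu->work_mutex);  /* drop for the callback */
        wi->func(wi->data);                      /* may queue more work */
        pthread_mutex_lock(&cpu->work_mutex);
    }
    pthread_mutex_unlock(&cpu->work_mutex);
}

/* Example callback, for demonstration only. */
static int done_count;
static void count_cb(void *data)
{
    (void)data;
    done_count++;
}
```

This works because, as noted later in the thread, there is only one consumer per CPU, so the list head cannot be drained concurrently while the lock is dropped.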
On 10/07/2015 17:19, fred.kon...@greensocs.com wrote:
> +static void flush_queued_safe_work(CPUState *cpu)
> +{
> +struct qemu_work_item *wi;
> +CPUState *other_cpu;
> +
> +if (cpu->queued_safe_work_first == NULL) {
> +return;
> +}
> +
> +CPU_FOREACH(other_cpu) {
> +
On 10/07/2015 17:22, Paolo Bonzini wrote:
On 10/07/2015 17:19, fred.kon...@greensocs.com wrote:
+qemu_mutex_lock(&cpu->work_mutex);
while ((wi = cpu->queued_work_first)) {
cpu->queued_work_first = wi->next;
wi->func(wi->data);
Please unlock the mutex while calling
On 08.07.2015 21:36, Kevin Wolf wrote:
Let the callers of bdrv_open_inherit() call bdrv_attach_child(). It
needs to be called in all cases where bdrv_open_inherit() succeeds (i.e.
returns 0) and a child_role is given.
bdrv_attach_child() is moved upwards to avoid a forward declaration.
Signed-o
From: KONRAD Frederic
We already had async_run_on_cpu, but we need all VCPUs to be outside their
execution loop in order to execute some tb_flush/invalidate tasks:
async_run_on_cpu_safe schedules a work item on a VCPU, but the work starts only
when no VCPUs are executing code.
When a safe work item is pending, cpu_has_work re
On 10/07/2015 17:32, Frederic Konrad wrote:
>>>
>
> I think something like that can work because we don't have two
> flush_queued_work at the same time on the same CPU?
Yes, this works; there is only one consumer.
Holding locks within a callback can be very painful, especially if there
is a ch
> > > > Yes, you're right. The reason is surely because dimm1 wasn't deleted
> > > > -- and I think I didn't make my point very clear -- my question was
> > > > more about: Is there any reason for dimm1 not being deleted? The
> > > > reason why I tested with the guest OS fully running and on GRUB i
On 10/07/2015 17:34, Paolo Bonzini wrote:
On 10/07/2015 17:32, Frederic Konrad wrote:
I think something like that can work because we don't have two
flush_queued_work at the same time on the same CPU?
Yes, this works; there is only one consumer.
Holding locks within a callback can be very pai
Laszlo Ersek writes:
> On 07/10/15 16:59, Paolo Bonzini wrote:
>>
>>
>> On 10/07/2015 16:57, Laszlo Ersek wrote:
> ... In any case, please understand that I'm not campaigning for this
> warning :) IIRC the warning was your (very welcome!) idea after I
> reported the problem; I'm jus