This is likely expected/correct behavior. You should try building with
-mieee.
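A hypothetical build invocation (the cross-compiler name is an assumption; any Alpha-targeting GCC accepts the flag):

```shell
# -mieee makes GCC emit fully IEEE-conformant FP code on Alpha
# (denormal inputs, NaN propagation); without it, such inputs can
# trap with SIGFPE instead of producing the IEEE result.
alpha-linux-gnu-gcc -mieee -O2 -o test test.c
```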
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1701835
Title:
floating-point operation bugs in qemu-alpha
Status in QEMU:
New
Public bug reported:
For some reason the SIGILL handler receives a different address under
qemu than it used to on real hardware. I don't know specifics about the
hardware used back then (it was some sort of 21264a somewhere between
600-800 MHz), and I cannot say anything about the kernel either.
Most likely some bits are initialized differently in the FPCR.
--
Title:
floating-point operation bugs in qemu-alpha
Status in QEMU:
New
Bug description:
Run this:
#include <stdio.h>
#include <string.h>

/* Read the Alpha floating-point control register. */
double get_fpcr(void)
{
    double x;
    asm ("mf_fpcr %0" : "=f" (x));
    return x;
}

int main(void)
{
    double fpcr = get_fpcr();
    unsigned long l;
    memcpy(&l, &fpcr, 8);   /* reinterpret the register image as raw bits */
    printf("%016lx\n", l);
    return 0;
}
Under qemu-system-alpha I get 680e8000.
https://download.majix.org/dec/alpha_arch_ref.pdf
The bits are defined in 4.7.8 Floating-Point Control Register (FPCR).
Bits 59:58 are the DYN field (dynamic rounding mode); both zero means
chopped rounding. This does not seem like a good default.
--
Works, thanks!
--
https://bugs.launchpad.net/bugs/1810545
Title:
[alpha] Strange exception address reported
Status in QEMU:
Fix Committed
Bug description:
For some reason the SIGILL handler receives a different address under
qemu than it used to on real hardware.
On Fri, Feb 28, 2020 at 12:10 PM Kevin Wolf wrote:
>
> This sounds almost like two other bugs we got fixed recently (in the
> QEMU file-posix driver and in the XFS kernel driver) where two writes
> extending the file size were in flight in parallel, but if the shorter
> one completed last, instead
On Mon, Feb 24, 2020 at 1:35 PM Stefan Ring wrote:
>
> [...]. As already stated in
> the original post, the problem only occurs with multiple parallel
> write requests happening.
Actually I did not state that. Anyway, the corruption does not happen
when I restrict the ZFS I/O scheduler.
On Thu, Feb 20, 2020 at 10:19 AM Stefan Ring wrote:
>
> Hi,
>
> I have a very curious problem on an oVirt-like virtualization host
> whose storage lives on gluster (as qcow2).
>
> The problem is that of the writes done by ZFS, whose sizes according
> to blktrace are
On Mon, Feb 24, 2020 at 2:27 PM Kevin Wolf wrote:
> > > There are quite a few machines running on this host, and we have not
> > > experienced other problems so far. So right now, only ZFS is able to
> > > trigger this for some reason. The guest has 8 virtual cores. I also
> > > tried writing directly
On Mon, Feb 24, 2020 at 1:35 PM Stefan Ring wrote:
>
> What I plan to do next is look at the block ranges being written in
> the hope of finding overlaps there.
Status update:
I still have not found out what is actually causing this. I have not
found concurrent writes to overlapping f
On Tue, Feb 25, 2020 at 3:12 PM Stefan Ring wrote:
>
> I find many instances with the following pattern:
>
> current file length (= max position + size written): p
> write request n writes from (p + hole_size), thus leaving a hole
> request n+1 writes exactly hole_size, starting at p
On Thu, Feb 27, 2020 at 10:12 PM Stefan Ring wrote:
> Victory! I have a reproducer in the form of a plain C libgfapi client.
>
> However, I have not been able to trigger corruption by just executing
> the simple pattern in an artificial way. Currently, I need to feed my
> reproducer
There seems to be more confusion of the sort. This fixes it for me:
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -10226,7 +10226,7 @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
return -TARGET_EFAULT;
}
orig