Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x

2008-02-06 Thread Sylvain Petreolle
- Original Message 
> From: Asheesh Laroia <[EMAIL PROTECTED]>
> To: Asheesh Laroia on [qemu-devel] 
> Sent: Tuesday, 5 February 2008, 23:24:42
> Subject: Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x
> 
> On Tue, 5 Feb 2008, Jernej Simončič wrote:
> 
> > On Tuesday, February 5, 2008, 22:34:04, Asheesh Laroia wrote:
> >
> >> I agree with this - guesswork and invisible options can be confusing. 
> >> That's why I suggest what I think is the simplest solution: Just let 
> >> this be overridable on the command line.
> >
> > Isn't the user-net IP irrelevant to the outside? AFAIK, it just causes 
> > Qemu to act as a normal TCP/IP client to the OS it's running on, and the 
> > guest OS simply can't accept incoming connections (nobody actually knows 
> > that the program issuing the connections is actually hosting an OS 
> > inside).
> 
> The problem I stated in the original message in this thread is
> that I want to connect from the *guest* to the *host*.  Since the host and
> the guest are on the same subnet (the subnet is only fake inside the
> guest), the guest cannot, e.g., ssh to the host.
> 
> 


Couldn't you use a network bridge for this purpose?
 
Kind regards,
Sylvain Petreolle (aka Usurp) 
 
Support artists, not multinationals - http://Iwouldntsteal.net
Supportez les artistes, pas les multinationales - http://Iwouldntsteal.net
 
Free music you can listen to everywhere : http://www.jamendo.com




[Qemu-devel] TCG breakage if TARGET_LONG_BITS > HOST_LONG_BITS

2008-02-06 Thread Alexander Graf

Hi,

I've been trying to get the new TCG approach running on an i386 host.
It works when I use gcc3 (miraculously, as I will explain later), but
fails with gcc4.


On boot the very first instruction that gets issued is the ljmp to the  
bios:


IN:
0xfff0:  ljmp   $0xf000,$0xe05b

This translates to

OP:
 movi_i32 T0_0,$0xf000
 movi_i32 T0_1,$0x0
 movi_i32 T1_0,$0xe05b
 movi_i32 T1_1,$0x0
[...]

and results in

OUT: [size=83]
0x08e38f40:  mov$0xf000,%eax
0x08e38f45:  xor%edx,%edx
0x08e38f47:  mov$0xe05b,%ecx
0x08e38f4c:  xor%ebx,%ebx
[...]

This is perfectly fine if you assume that these registers get
clobbered and save/restore them, or define them as global register
variables.  Unfortunately, with TARGET_LONG_BITS==64 this does not happen,
as T0 and T1 are supposed to be in memory, not in registers.


As can be seen in the gcc4-generated assembly, gcc assumes that ebx is
still intact after the function call:


0x80e1449 :  mov%ebp,%ebx
0x80e144b :  mov%esi,0x510(%ebp)
0x80e1451 :  call   *%eax
0x80e1453 :  mov%eax,%edx
0x80e1455 :  sar$0x1f,%edx
0x80e1458 :  mov%eax,(%ebx)

and qemu segfaults here.

So basically there are two things puzzling me here.

1. Why does gcc3 generate code that does not use ebx?
2. Why does movi_i64 generate code that only accesses registers?  I
have not been able to find any branch in the TCG code generator for
movi_ixx that generates moves to memory addresses.


The whole issue could easily be fixed by keeping the registers but
putting the call into inline assembly, telling gcc that this call
clobbers all the registers.  I do not know if this is the expected
behaviour though, so I think I'd rather ask before doing any patches.


I hope this helps,

Alex




Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x

2008-02-06 Thread Ian Jackson
Warner Losh writes ("Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x"):
> I think that the suggestion is that qemu picks, one time, a new
> default.  This new default would be selected at random, and would be
> the same on all new versions of qemu.

Yes.

> I don't think that the suggestion is to pick a random address every
> time qemu starts.

Indeed, that would be insane.


Ben Taylor writes ("Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x"):
> It seems to me that there is a corner case where the local host has
> a 10.0.2.x or 10.0.x.x address, which would cause problems for a qemu
> guest that has a 10.0.2.15 address (for -net user only).

Yes, that's exactly the problem.

Using a (once) randomly-chosen default greatly reduces the odds of
that happening.  Many many people foolishly choose 10.0.{0,1,2,3}.x.
Many fewer choose (say) 172.30.206.x.  So the fixed qemu default
should be 172.30.206.x, or some other range also chosen at random.

(This is why it's worth changing: of course if you choose randomly you
sometimes get 10.0.2.x.  But if you knew you were trying to choose
randomly and your RNG gave you 10.0.2.x you'd probably roll the dice
again - because 10.0.2.x is already overused.)

> I think the default should be left at 10.0.2.x, and if the localhost has
> a 10.0.x.x address, then one of the other ranges (172.16.x.x or
> 192.168.x.x) could be used.

This is a bad idea.  That makes the behaviour very difficult to
predict and debug.  For example, the addresses used by qemu might
depend on whether the boot scripts which start a guest happen to run
before or after an external dhcp server manages to give the host an
address.

This kind of `helpful' behaviour is a recipe for pain.  The addresses
used should be fixed in each particular installation, but configurable,
with a well-chosen default.


andrzej zaborowski writes ("Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x"):
> This rfc talks about organisations and networks that are real, not
> about the network inside qemu which doesn't have connectivity with
> another qemu network.

Address clashes are still a problem even if the two networks don't
exchange packets, if there is any system which needs to be on both
networks.  And of course in the qemu case the host is on both
networks.

So the addresses used by the guest networks must be distinct from any
addresses of other systems outside the host that the host might need
to talk to.


Ian.




Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x

2008-02-06 Thread andrzej zaborowski
On 06/02/2008, Ian Jackson <[EMAIL PROTECTED]> wrote:
> andrzej zaborowski writes ("Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x"):
> > This rfc talks about organisations and networks that are real, not
> > about the network inside qemu which doesn't have connectivity with
> > another qemu network.
>
> Address clashes are still a problem even if the two networks don't
> exchange packets, if there is any system which needs to be on both
> networks.  And of course in the qemu case the host is on both
> networks.

Right, but this happens so rarely (and there are no obvious symptoms
when it happens) that it's okay for the user to set up non-user-net
networking or issue the one-line grep command posted in the original
message. A more useful addition would perhaps be a simple warning from
qemu when the host is in a network containing 10.0.2.0/24.

Indeed, when you google "10.0.2.2 ip", half of the hits relate to qemu/kvm/vbox.

Regards




[Qemu-devel] [PATCH] Fix parallel port software emulation

2008-02-06 Thread Hervé Poussineau

Hi,

The parallel port control register should always have the 0xc0 bits set 
(as is already done in Qemu's parallel port hardware emulation).


The status register should also start with the EPP timeout bit set, as on 
real hardware.


Attached patch fixes both issues.

Hervé
Index: parallel.c
===
RCS file: /sources/qemu/qemu/hw/parallel.c,v
retrieving revision 1.12
diff -u -r1.12 parallel.c
--- parallel.c  18 Nov 2007 01:44:37 -  1.12
+++ parallel.c  6 Feb 2008 11:08:01 -
@@ -101,6 +101,7 @@
 parallel_update_irq(s);
 break;
 case PARA_REG_CTR:
+val |= 0xc0;
 if ((val & PARA_CTR_INIT) == 0 ) {
 s->status = PARA_STS_BUSY;
 s->status |= PARA_STS_ACK;
@@ -414,8 +415,10 @@
 s->status |= PARA_STS_ACK;
 s->status |= PARA_STS_ONLINE;
 s->status |= PARA_STS_ERROR;
+s->status |= PARA_STS_TMOUT;
 s->control = PARA_CTR_SELECT;
 s->control |= PARA_CTR_INIT;
+s->control |= 0xc0;
 s->irq = irq;
 s->irq_pending = 0;
 s->chr = chr;



[Qemu-devel] [PATCH] Enhance PC kbd debugging

2008-02-06 Thread Hervé Poussineau

Hi,

The attached patch adds a debug print when the keyboard data register is 
read, and removes a dead define at the top of the file.


It also shrinks the registered memory address range when the i8042 is 
memory-mapped. Indeed, the i8042 only has 2 ports (data and control), and 
the it_shift parameter can be used to widen this range again.
The memory-mapped i8042 is only used in the MIPS Pica 61 emulation, which 
doesn't suffer from this change.


Hervé
Index: pckbd.c
===
RCS file: /sources/qemu/qemu/hw/pckbd.c,v
retrieving revision 1.26
diff -u -r1.26 hw/pckbd.c
--- hw/pckbd.c  18 Nov 2007 01:44:37 -  1.26
+++ hw/pckbd.c  31 Jan 2008 16:49:47 -
@@ -30,9 +30,6 @@
 /* debug PC keyboard */
 //#define DEBUG_KBD
 
-/* debug PC keyboard : only mouse */
-//#define DEBUG_MOUSE
-
 /* Keyboard Controller Commands */
 #define KBD_CCMD_READ_MODE 0x20/* Read mode bits */
 #define KBD_CCMD_WRITE_MODE0x60/* Write mode bits */
@@ -283,11 +280,17 @@
 static uint32_t kbd_read_data(void *opaque, uint32_t addr)
 {
 KBDState *s = opaque;
+uint32_t val;
 
 if (s->pending == KBD_PENDING_AUX)
-return ps2_read_data(s->mouse);
+val = ps2_read_data(s->mouse);
+else
+val = ps2_read_data(s->kbd);
 
-return ps2_read_data(s->kbd);
+#if defined(DEBUG_KBD)
+printf("kbd: read data=0x%02x\n", val);
+#endif
+return val;
 }
 
 static void kbd_write_data(void *opaque, uint32_t addr, uint32_t val)
@@ -439,7 +442,7 @@
 kbd_reset(s);
 register_savevm("pckbd", 0, 3, kbd_save, kbd_load, s);
 s_io_memory = cpu_register_io_memory(0, kbd_mm_read, kbd_mm_write, s);
-cpu_register_physical_memory(base, 8 << it_shift, s_io_memory);
+cpu_register_physical_memory(base, 2 << it_shift, s_io_memory);
 
 s->kbd = ps2_kbd_init(kbd_update_kbd_irq, s);
 s->mouse = ps2_mouse_init(kbd_update_aux_irq, s);



[Qemu-devel] [PATCH] Add serial loopback mode (+ fixes reset)

2008-02-06 Thread Hervé Poussineau

Hi,

Serial emulation lacks a loopback mode (i.e., a transmitted byte is 
directly received back on the serial port).


After a reset, the serial port registers should go back to their default 
values. This is done by adding a call to qemu_register_reset().


Attached patch fixes both issues.

Hervé
Index: hw/serial.c
===
RCS file: /sources/qemu/qemu/hw/serial.c,v
retrieving revision 1.22
diff -u -r1.22 serial.c
--- hw/serial.c 25 Nov 2007 00:55:06 -  1.22
+++ hw/serial.c 31 Jan 2008 16:51:19 -
@@ -93,6 +93,8 @@
 int it_shift;
 };
 
+static void serial_receive_byte(SerialState *s, int ch);
+
 static void serial_update_irq(SerialState *s)
 {
 if ((s->lsr & UART_LSR_DR) && (s->ier & UART_IER_RDI)) {
@@ -161,11 +163,18 @@
 s->lsr &= ~UART_LSR_THRE;
 serial_update_irq(s);
 ch = val;
-qemu_chr_write(s->chr, &ch, 1);
+if (!(s->mcr & UART_MCR_LOOP)) {
+/* when not in loopback mode, send the char */
+qemu_chr_write(s->chr, &ch, 1);
+}
 s->thr_ipending = 1;
 s->lsr |= UART_LSR_THRE;
 s->lsr |= UART_LSR_TEMT;
 serial_update_irq(s);
+if (s->mcr & UART_MCR_LOOP) {
+/* in loopback mode, say that we just received a char */
+serial_receive_byte(s, ch);
+}
 }
 break;
 case 1:
@@ -223,7 +232,10 @@
 ret = s->rbr;
 s->lsr &= ~(UART_LSR_DR | UART_LSR_BI);
 serial_update_irq(s);
-qemu_chr_accept_input(s->chr);
+if (!(s->mcr & UART_MCR_LOOP)) {
+/* in loopback mode, don't receive any data */
+qemu_chr_accept_input(s->chr);
+}
 }
 break;
 case 1:
@@ -346,6 +358,25 @@
 return 0;
 }
 
+static void serial_reset(void *opaque)
+{
+SerialState *s = opaque;
+
+s->divider = 0;
+s->rbr = 0;
+s->ier = 0;
+s->iir = UART_IIR_NO_INT;
+s->lcr = 0;
+s->mcr = 0;
+s->lsr = UART_LSR_TEMT | UART_LSR_THRE;
+s->msr = UART_MSR_DCD | UART_MSR_DSR | UART_MSR_CTS;
+s->scr = 0;
+
+s->thr_ipending = 0;
+s->last_break_enable = 0;
+qemu_irq_lower(s->irq);
+}
+
 /* If fd is zero, it means that the serial device uses the console */
 SerialState *serial_init(int base, qemu_irq irq, CharDriverState *chr)
 {
@@ -355,9 +386,9 @@
 if (!s)
 return NULL;
 s->irq = irq;
-s->lsr = UART_LSR_TEMT | UART_LSR_THRE;
-s->iir = UART_IIR_NO_INT;
-s->msr = UART_MSR_DCD | UART_MSR_DSR | UART_MSR_CTS;
+
+qemu_register_reset(serial_reset, s);
+serial_reset(s);
 
 register_savevm("serial", base, 2, serial_save, serial_load, s);
 
@@ -452,12 +483,12 @@
 if (!s)
 return NULL;
 s->irq = irq;
-s->lsr = UART_LSR_TEMT | UART_LSR_THRE;
-s->iir = UART_IIR_NO_INT;
-s->msr = UART_MSR_DCD | UART_MSR_DSR | UART_MSR_CTS;
 s->base = base;
 s->it_shift = it_shift;
 
+qemu_register_reset(serial_reset, s);
+serial_reset(s);
+
 register_savevm("serial", base, 2, serial_save, serial_load, s);
 
 if (ioregister) {



Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x

2008-02-06 Thread Gerd Hoffmann
  Hi,

> Using a (once) randomly-chosen default greatly reduces the odds of
> that happening.  Many many people foolishly choose 10.0.{0,1,2,3}.x.
> Many fewer choose (say) 172.30.206.x.  So the fixed qemu default
> should be 172.30.206.x, or some other range also chosen at random.

A few years back I worked for a web company and wrote the border router
firewall rules; I had some rules in there to catch packets with
RFC 1918 private addresses on the public network.  Watching the statistics
showed that the 172.16/12 range was _much_ less used than 10/8 and
192.168/16.

I think 10/8 tends to be used by companies a lot.  192.168.$smallnumber.0/24
seems to be a quite common default for DSL routers and the like.

Thus picking a random /24 network from 172.16/12 as the new default value
has a pretty good chance of vastly reducing the number of clashes with
existing setups.

HTH,
  Gerd

-- 
http://kraxel.fedorapeople.org/xenner/




Re: [Qemu-devel] [PATCH] OpenGL for OS X

2008-02-06 Thread Philip Boulain

On 6 Feb 2008, at 06:00, Gwenole Beauchesne wrote:

2008/2/5, Fabrice Bellard <[EMAIL PROTECTED]>:

This is an SDL related issue (i.e. SDL may or may not use OpenGL to
display graphics). Fixing SDL for Mac OS X would also be interesting.


I think SDL trunk (1.3) supports OpenGL rendering more specifically
for various platforms.

Besides, on my MacBook, fullscreen SDL with a HW surface can indeed
perform much better (550 Mpixels/sec) than fullscreen GL (190
Mpixels/sec). With a SW surface, results are equivalent to GL though.

In windowed (800x600) mode, SDL performs at 28 Mpixels/sec and GL at
150 Mpixels/sec. So, SDL 1.2 for OSX (CG?) in windowed mode is indeed
sub-optimal. I have not tried with SDL trunk yet.

You can get my tests as svn co
http://svn.beauchesne.info/svn/gwenole/projects/blitter-tests/trunk
blitter-tests


Are you after some more figures?  Modern MacBook Pro (ATI Radeon X1600),  
just ./configure'd and make'd with defaults:


WINDOWED

X11 (Apple's server, circa OS X 10.4) best:
* Testing XShmPutImage()
  607 frames in 10.0 secs, 60.53 fps, 47.600 Mpixels/sec

SDL (1.2.12 stable release) best:
* Testing SDL_Blit() with RGB masks 00ff,ff00,00ff
  600 frames in 10.0 secs, 59.86 fps, 47.078 Mpixels/sec

OpenGL (without pixel buffer objects; with was slower) best:
* Testing glTexSubImage2D with target GL_TEXTURE_RECTANGLE_ARB,  
format GL_BGRA, type GL_UNSIGNED_INT_8_8_8_8_REV

  2628 frames in 10.0 secs, 262.67 fps, 206.571 Mpixels/sec

FULLSCREEN

X11 could only produce a screen-size window; it achieved 28.1 Mp/s  
for comparison.


* Testing SDL_Blit() with RGB masks 00ff,ff00,00ff
  1379 frames in 10.0 secs, 137.78 fps, 178.558 Mpixels/sec

* Testing glTexSubImage2D with target GL_TEXTURE_RECTANGLE_ARB,  
format GL_BGRA, type GL_UNSIGNED_INT_8_8_8_8_REV

  1845 frames in 10.0 secs, 184.37 fps, 238.945 Mpixels/sec

So, yes, I can confirm that stable SDL is currently slooow in windowed  
mode compared to OGL, and still slower in fullscreen.  Interestingly,  
use of pixel buffer objects made things /slower/ here.  I noted very  
heavy CPU activity from glperf for the GL_TEXTURE_2D, format GL_BGRA,  
type GL_UNSIGNED_BYTE case, too (also for the fastest, RECTANGLE_ARB  
case, but that may just have been because it was shunting so much more  
data).


Full details below.

LionsPhil

--- WINDOWED ---

#
# Running program x11perf
#
Global test configuration:
  Per-test duration: 10 seconds
  Display size: 1024x768, 24 bpp, windowed mode
* Testing XPutImage()
  361 frames in 10.0 secs, 35.96 fps, 28.279 Mpixels/sec
* Testing XShmPutImage()
  607 frames in 10.0 secs, 60.53 fps, 47.600 Mpixels/sec

#
# Running program sdlperf
#
Test global configuration:
  Per-test duration: 10 seconds
  Display size: 1024x768, 32 bpp, windowed mode, SW surface
* Testing SDL_Blit() with RGB masks 00ff,ff00,00ff
  600 frames in 10.0 secs, 59.86 fps, 47.078 Mpixels/sec
* Testing SDL_Blit() with RGB masks 00ff,ff00,00ff
  303 frames in 10.1 secs, 30.11 fps, 23.677 Mpixels/sec

#
# Running program glperf
#
OpenGL version   : 2.0 ATI-1.4.56
OpenGL vendor: ATI Technologies Inc.
OpenGL renderer  : ATI Radeon X1600 OpenGL Engine
OpenGL extensions:
  GL_APPLE_client_storage
  GL_APPLE_texture_range
  GL_APPLE_packed_pixels
  GL_EXT_abgr
  GL_EXT_bgra
  GL_ARB_shading_language_100
  GL_ARB_fragment_program
  GL_ARB_fragment_shader
  GL_ARB_pixel_buffer_object
  GL_EXT_framebuffer_object
  GL_EXT_texture_rectangle
  GL_ARB_texture_rectangle
  GL_ARB_texture_non_power_of_two
  GL_ARB_imaging
  GL_SGI_color_matrix
Global test configuration:
  Per-test duration: 10 seconds
  Display size: 1024x768, 32 bpp, windowed mode
  Use non-power-of-two textures   : yes
  Use pixel buffer objects: no
  Use Apple client storage extension  : no
  Use Apple texture range extension   : no
* Testing glTexSubImage2D with target GL_TEXTURE_2D, format GL_RGB,  
type GL_UNSIGNED_BYTE

  257 frames in 10.0 secs, 25.58 fps, 20.115 Mpixels/sec
* Testing glTexSubImage2D with target GL_TEXTURE_2D, format GL_RGBA,  
type GL_UNSIGNED_BYTE

  2397 frames in 10.0 secs, 239.56 fps, 188.395 Mpixels/sec
* Testing glTexSubImage2D with target GL_TEXTURE_2D, format GL_BGRA,  
type GL_UNSIGNED_BYTE

  1034 frames in 10.0 secs, 103.21 fps, 81.171 Mpixels/sec
* Testing glTexSubImage2D with target GL_TEXTURE_2D, format GL_BGRA,  
type GL_UNSIGNED_INT_8_8_8_8_REV

  2618 frames in 10.0 secs, 261.62 fps, 205.744 Mpixels/sec
* Testing glTexSubImage2D with target GL_TEXTURE_RECTANGLE_ARB,  
format GL_RGB, type GL_UNSIGNED_BYTE

  259 frames in 10.1 secs, 25.75 fps, 20.251 Mpixels/sec
* Testing glTexSubImage2D with target GL_TEXTURE_RECTANGLE_ARB,  
format GL_RGBA, type GL_UNSIGNED_BYTE

  2377 frames in 10.0 secs, 237.53 fps, 186.804 Mpixels/sec
* Testing glTexSubImage2D with target GL_TEXTURE_RECTANGLE_ARB,  
format GL_BGRA, type GL_UNSIGNED_BYTE

  1034 frames in 10.0 secs, 103.22 fps, 81.179 Mpixels/sec

[Qemu-devel] [PATCH] memory usage and ioports

2008-02-06 Thread Samuel Thibault
Samuel Thibault, on Mon 19 Nov 2007 15:20:16 +0000, wrote:
> Qemu currently uses 6 65k tables of pointers for handling ioports, which
> makes 3MB on 64-bit machines.  There's a comment that says "XXX: use a two
> level table to limit memory usage".  But wouldn't it be simpler and more
> effective to just allocate them through mmap() and, when a NULL pointer
> is read, call the default handlers?

Here is a patch that does this and indeed saves 3MB on 64bit machines.

Samuel
Index: vl.c
===
RCS file: /sources/qemu/qemu/vl.c,v
retrieving revision 1.403
diff -u -p -r1.403 vl.c
--- vl.c3 Feb 2008 03:45:47 -   1.403
+++ vl.c6 Feb 2008 14:22:18 -
@@ -267,17 +267,29 @@ static void default_ioport_writeb(void *
 static uint32_t default_ioport_readw(void *opaque, uint32_t address)
 {
 uint32_t data;
-data = ioport_read_table[0][address](ioport_opaque[address], address);
+IOPortReadFunc *func = ioport_read_table[0][address];
+if (!func)
+   func = default_ioport_readb;
+data = func(ioport_opaque[address], address);
 address = (address + 1) & (MAX_IOPORTS - 1);
-data |= ioport_read_table[0][address](ioport_opaque[address], address) << 8;
+func = ioport_read_table[0][address];
+if (!func)
+   func = default_ioport_readb;
+data |= func(ioport_opaque[address], address) << 8;
 return data;
 }
 
 static void default_ioport_writew(void *opaque, uint32_t address, uint32_t data)
 {
-ioport_write_table[0][address](ioport_opaque[address], address, data & 0xff);
+IOPortWriteFunc *func = ioport_write_table[0][address];
+if (!func)
+   func = default_ioport_writeb;
+func(ioport_opaque[address], address, data & 0xff);
 address = (address + 1) & (MAX_IOPORTS - 1);
-ioport_write_table[0][address](ioport_opaque[address], address, (data >> 8) & 0xff);
+func = ioport_write_table[0][address];
+if (!func)
+   func = default_ioport_writeb;
+func(ioport_opaque[address], address, (data >> 8) & 0xff);
 }
 
 static uint32_t default_ioport_readl(void *opaque, uint32_t address)
@@ -297,16 +309,6 @@ static void default_ioport_writel(void *
 
 static void init_ioports(void)
 {
-int i;
-
-for(i = 0; i < MAX_IOPORTS; i++) {
-ioport_read_table[0][i] = default_ioport_readb;
-ioport_write_table[0][i] = default_ioport_writeb;
-ioport_read_table[1][i] = default_ioport_readw;
-ioport_write_table[1][i] = default_ioport_writew;
-ioport_read_table[2][i] = default_ioport_readl;
-ioport_write_table[2][i] = default_ioport_writel;
-}
 }
 
 /* size is the word size in byte */
@@ -378,11 +380,14 @@ void isa_unassign_ioport(int start, int 
 
 void cpu_outb(CPUState *env, int addr, int val)
 {
+IOPortWriteFunc *func = ioport_write_table[0][addr];
+if (!func)
+   func = default_ioport_writeb;
 #ifdef DEBUG_IOPORT
 if (loglevel & CPU_LOG_IOPORT)
 fprintf(logfile, "outb: %04x %02x\n", addr, val);
 #endif
-ioport_write_table[0][addr](ioport_opaque[addr], addr, val);
+func(ioport_opaque[addr], addr, val);
 #ifdef USE_KQEMU
 if (env)
 env->last_io_time = cpu_get_time_fast();
@@ -391,11 +396,14 @@ void cpu_outb(CPUState *env, int addr, i
 
 void cpu_outw(CPUState *env, int addr, int val)
 {
+IOPortWriteFunc *func = ioport_write_table[1][addr];
+if (!func)
+   func = default_ioport_writew;
 #ifdef DEBUG_IOPORT
 if (loglevel & CPU_LOG_IOPORT)
 fprintf(logfile, "outw: %04x %04x\n", addr, val);
 #endif
-ioport_write_table[1][addr](ioport_opaque[addr], addr, val);
+func(ioport_opaque[addr], addr, val);
 #ifdef USE_KQEMU
 if (env)
 env->last_io_time = cpu_get_time_fast();
@@ -404,11 +412,14 @@ void cpu_outw(CPUState *env, int addr, i
 
 void cpu_outl(CPUState *env, int addr, int val)
 {
+IOPortWriteFunc *func = ioport_write_table[2][addr];
+if (!func)
+   func = default_ioport_writel;
 #ifdef DEBUG_IOPORT
 if (loglevel & CPU_LOG_IOPORT)
 fprintf(logfile, "outl: %04x %08x\n", addr, val);
 #endif
-ioport_write_table[2][addr](ioport_opaque[addr], addr, val);
+func(ioport_opaque[addr], addr, val);
 #ifdef USE_KQEMU
 if (env)
 env->last_io_time = cpu_get_time_fast();
@@ -418,7 +429,10 @@ void cpu_outl(CPUState *env, int addr, i
 int cpu_inb(CPUState *env, int addr)
 {
 int val;
-val = ioport_read_table[0][addr](ioport_opaque[addr], addr);
+IOPortReadFunc *func = ioport_read_table[0][addr];
+if (!func)
+   func = default_ioport_readb;
+val = func(ioport_opaque[addr], addr);
 #ifdef DEBUG_IOPORT
 if (loglevel & CPU_LOG_IOPORT)
 fprintf(logfile, "inb : %04x %02x\n", addr, val);
@@ -433,7 +447,10 @@ int cpu_inb(CPUState *env, int addr)
 int cpu_inw(CPUState *env, int addr)
 {
 int val;
-val = ioport_read_table[1][addr](ioport_o

[Qemu-devel] Re: [PATCH] avoid name clashes due to LIST_* macros

2008-02-06 Thread Ian Jackson
iwj writes ("[PATCH] avoid name clashes due to LIST_* macros"):
> qemu's audio subdirectory contains a copy of BSD's sys-queue.h, which
> defines a bunch of LIST_ macros.  This makes it difficult to build a
> program made partly out of qemu and partly out of the Linux kernel[1],
> since Linux has a different set of LIST_ macros.  It might also cause
> trouble when mixing with BSD-derived code.
> 
> Under the circumstances it's probably best to rename the versions in
> qemu.  The attached patch does this.

Was there something wrong with my patch?  I don't seem to have seen
any replies to it.

Ian.




Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x

2008-02-06 Thread Ian Jackson
andrzej zaborowski writes ("Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x"):
> Right, but this happens so rarely (and there are no obvious symptoms
> when it happens)

The symptoms are generally that the host loses its network connection
to those parts of the outside world, or that it can't reach the guests
at all.

>  that it's okay for the user to set up non-user-net
> networking or issue the one-line grep command posted in the original
> message. A more useful addition would perhaps be a simple warning from
> qemu when the host is in a network containing 10.0.2.0/24.

I think a warning if a clash is detected is fine.

> Indeed when you google "10.0.2.2 ip" half of the hits relate to
> qemu/kvm/vbox.

... and the other half to people whose setups this range will break!


Gerd Hoffmann writes ("Re: [Qemu-devel] Making qemu use 10.0.3.x not 10.0.2.x"):
> A few years back I worked for a web company and wrote the border router
> firewall rules; I had some rules in there to catch packets with
> RFC 1918 private addresses on the public network.  Watching the statistics
> showed that the 172.16/12 range was _much_ less used than 10/8 and
> 192.168/16.

Exactly.

> I think 10/8 tends to be used by companies a lot.  192.168.$smallnumber.0/24
> seems to be a quite common default for DSL routers and the like.

Indeed.

> Thus picking a random /24 network from 172.16/12 as the new default value
> has a pretty good chance of vastly reducing the number of clashes with
> existing setups.

Exactly.


Ian.




[Qemu-devel] Re: [PATCH] Remove clone-and-hack code from qemu-img

2008-02-06 Thread Ian Jackson
iwj writes ("[PATCH] Remove clone-and-hack code from qemu-img"):
> qemu-img.c has copies of qemu_malloc et al, which are already provided
> in osdep.c.  The attached patch removes these from qemu-img.c and
> adds osdep.o to BLOCK_OBJS.

Is there some reason why this patch has not yet been included
in the current CVS ?

Thanks,
Ian.




[Qemu-devel] Re: [PATCH] Allow AF_UNIX sockets to be disabled on non-Windows

2008-02-06 Thread Ian Jackson
iwj writes ("[PATCH] Allow AF_UNIX sockets to be disabled on non-Windows"):
> The patch below makes it possible to disable AF_UNIX (unix-domain)
> sockets in host environments which do not define _WIN32, by adding
> -DNO_UNIX_SOCKETS to the compiler flags.  This is useful in the
> effectively-embedded qemu host which we are going to be using for device
> emulation in Xen.

If you don't like my patches please do say.

Ian.




[Qemu-devel] Re: [PATCH] check return value from read() and write() properly

2008-02-06 Thread Ian Jackson
iwj writes ("[PATCH] check return value from read() and write() properly"):
> The system calls read and write may return less than the whole amount
> requested for a number of reasons.  So the idioms
>if (read(fd, &object, sizeof(object)) != sizeof(object)) goto fail;
> and even worse
>if (read(fd, &object, sizeof(object)) < 0) goto fail;
> are wrong.  Additionally, read and write may sometimes fail with EINTR on
> some systems, so where interruption is not desired or expected a loop is
> needed.
> 
> In the attached patch I introduce two new pairs of functions:

I think this fix should be applied because it corrects bugs which
might conceivably cause data loss.

Thanks,
Ian.




Re: [Qemu-devel] [PATCH] avoid name clashes due to LIST_* macros

2008-02-06 Thread Anthony Liguori

Ian Jackson wrote:

qemu's audio subdirectory contains a copy of BSD's sys-queue.h, which
defines a bunch of LIST_ macros.  This makes it difficult to build a
program made partly out of qemu and partly out of the Linux kernel[1],
since Linux has a different set of LIST_ macros.  It might also cause
trouble when mixing with BSD-derived code.
  


That doesn't seem like a very good justification.  If you're mixing QEMU 
code with other code, it's easier for you to maintain these merge-conflict 
fixes, as normal QEMU developers would have no idea that it wasn't okay 
to just use LIST_xxx.


Regards,

Anthony Liguori


Under the circumstances it's probably best to rename the versions in
qemu.  The attached patch does this.

[1] You might well ask why anyone would want to do this.  In Xen we
are moving our emulation of IO devices from processes which run on the
host into a dedicated VM (one per actual VM) which we call a `stub
domain'.  This dedicated VM runs a very cut-down `operating system'
which uses some code from Linux.

Regards,
Ian.

  






Re: [Qemu-devel] [PATCH] check return value from read() and write() properly

2008-02-06 Thread Anthony Liguori

Ian Jackson wrote:

The system calls read and write may return less than the whole amount
requested for a number of reasons.  So the idioms
   if (read(fd, &object, sizeof(object)) != sizeof(object)) goto fail;
and even worse
   if (read(fd, &object, sizeof(object)) < 0) goto fail;
are wrong.  Additionally, read and write may sometimes fail with EINTR on
some systems, so where interruption is not desired or expected a loop is
needed.

In the attached patch I introduce two new pairs of functions:
 qemu_{read,write}     which are like read and write but never
                       return partial answers unnecessarily
                       and which never fail with EINTR
 qemu_{read,write}_ok  which returns -1 for any failure or
                       incomplete read, or +1 for success,
                       reducing repetition at calling points
  


There is already a unix_write function that serves this purpose.  I 
think a better approach would be to define unix_read/unix_write, 
remove the EAGAIN handling, and instead only spin on EINTR.


I don't really like the _ok thing as it's not a very common idiom.

Regards,

Anthony Liguori


I added these to osdep.c, and there are many calls in the block
drivers, so osdep.o needs to be in BLOCK_OBJS as I do in my previous
patch (getting rid of the duplicate definitions of qemu_malloc &c).

The patch then uses these new functions wherever appropriate.  I
think I have got each calling point correct but for obvious reasons I
haven't done a thorough test.

The resulting code is I think both smaller and more correct.  In most
cases the correct behaviour was obvious.

There was one nonobvious case: I removed unix_write from vl.c and
replaced calls to it with calls to qemu_write.  unix_write looped on
EAGAIN (though qemu_write doesn't) but I think this is wrong since
that simply results in spinning if the fd is nonblocking and the write
cannot complete immediately.  Callers with nonblocking fds have to
cope with partial results and retry later.  Since unix_write doesn't
do that I assume that its callers don't really have nonblocking fds;
if they do then the old code is buggy and my new code is buggy too but
in a different way.

Also, the Makefile rule for dyngen$(EXESUF) referred to dyngen.c
rather than dyngen.o, which appears to have been a mistake which I
have fixed since I had to add osdep.o anyway.

Regards,
Ian.


  






Re: [Qemu-devel] Re: [PATCH] Allow AF_UNIX sockets to be disabled on non-Windows

2008-02-06 Thread Anthony Liguori

Ian Jackson wrote:

iwj writes ("[PATCH] Allow AF_UNIX sockets to be disabled on non-Windows"):
  

The patch below makes it possible to disable AF_UNIX (unix-domain)
sockets in host environments which do not define _WIN32, by adding
-DNO_UNIX_SOCKETS to the compiler flags.  This is useful in the
effectively-embedded qemu host which we are going to be using for device
emulation in Xen.



If you don't like my patches please do say.
  


It should just check a define for _MINIOS.  That makes it a lot more 
obvious why it's not being included.


Regards,

Anthony Liguori


Ian.


  






Re: [Qemu-devel] Re: [PATCH] Allow AF_UNIX sockets to be disabled on non-Windows

2008-02-06 Thread Samuel Thibault
Anthony Liguori, on Wed 06 Feb 2008 13:19:23 -0600, wrote:
> Ian Jackson wrote:
> >iwj writes ("[PATCH] Allow AF_UNIX sockets to be disabled on non-Windows"):
> >  
> >>The patch below makes it possible to disable AF_UNIX (unix-domain)
> >>sockets in host environments which do not define _WIN32, by adding
> >>-DNO_UNIX_SOCKETS to the compiler flags.  This is useful in the
> >>effectively-embedded qemu host which we are going to be using for device
> >>emulation in Xen.
> 
> It should just check a define for _MINIOS.

That's exactly what we wanted to avoid.

> That makes it a lot more obvious why it's not being included.

But it doesn't necessarily make obvious _what_ is not being included
(here, local sockets). To my mind, something like

#if !(defined(_WIN32) || defined(_MINIOS))
#define DO_UNIX_SOCKET
#endif

And then, in the code, #ifdef DO_UNIX_SOCKET, is much nicer than
repeating the if (!def||def) everywhere (and having to change them all
if another system needs that too).

Samuel




Re: [Qemu-devel] Under WinXP, Solaris installation does not work in qemu 0.9.1 but does work in qemu 0.9.0

2008-02-06 Thread Carlo Marcelo Arenas Belon
On Wed, Jan 30, 2008 at 05:31:05PM +0300, Dmitry Bolshakov wrote:
> 
> qemu-0.9.1:
> -built by myself too
> http://qemu-forum.ipi.fi/viewtopic.php?f=5&t=4269

qemu 0.9.1 was released with a known bug which prevents installing Solaris
guests (with timeouts on the CD device) and which was finally fixed with the
patch from:

  http://lists.gnu.org/archive/html/qemu-devel/2008-01/msg00211.html

Carlo