> >> The qemu chardevs can now return -EAGAIN when a non-blocking remote
> >> isn't ready to accept more data.
> >>
> >> Comments?
> >
> > This is a major change in semantics. Are you sure all users handle this
> > correctly? My guess is that most of the devices don't.
>
> I don't expect trouble h
> A major reason for this deadlock could likely be removed by shutting
> down the tap (if peered) or dropping packets in user space (in case of
> vlan) when a NIC is stopped or otherwise shut down. Currently most (if
> not all) NIC models seem to signal both "queue full" and "RX disabled"
> via !ca
> But anyway, this flow control mechanism is buggy - what if instead of
> an interface down, you just have a *slow* guest? That should not push
> back so much that it makes other guests networking with each other
> slow down.
The OP described multiple guests connected via a host bridge. In this c
> Paul Brook wrote:
> >> But anyway, this flow control mechanism is buggy - what if instead of
> >> an interface down, you just have a *slow* guest? That should not push
> >> back so much that it makes other guests networking with each other
> >> slow
> Paul Brook wrote:
> >> A major reason for this deadlock could likely be removed by shutting
> >> down the tap (if peered) or dropping packets in user space (in case of
> >> vlan) when a NIC is stopped or otherwise shut down. Currently most (if
> >> not all
> Bear in mind that it's rather unfair to (as has happened in this case) have a
> patch reviewed, modified based on criticism, and then rejected after
> effort has been wasted working on it.
I'm pretty sure I raised all these issues the first time this code was
submitted.
Paul
> So, rather than bike-shedding, how about making some kind of decision?
Ok, let me make this simple.
Features such as rate limiting and EGD protocol translation should not be part
of individual device emulation. They are part of the host interface, not the
guest machine. If you really want th
> This series lets interested callers ask for an -EAGAIN return from the
> chardev backends (only unix and tcp sockets as of now) to implement
> their own flow control.
As mentioned previously, I think this is a bad idea. The device has no useful
way of determining when to transmit the rest of t
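For context, the mechanism being debated is the ordinary non-blocking write
dance. A minimal POSIX-level sketch of the pattern (this is not qemu's chardev
API, just the underlying idea) looks like this:

    /* A short or EAGAIN return means "the remote isn't ready; queue the
     * data and retry once the descriptor becomes writable again". */
    #include <errno.h>
    #include <unistd.h>

    /* Returns bytes written, 0 if the caller must wait for POLLOUT,
     * or -1 on a real error. */
    static ssize_t try_send(int fd, const void *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;
        return n;
    }

Paul's objection is about the second half of that contract: once a device has
been told "not now", it needs some sensible way to decide when to try again.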
> I find the way we calculate the dummy field in CPUTLBEntry funny. What
> is the point of having:
>
> ((-sizeof(target_ulong) * 3) & (sizeof(unsigned long) - 1))
>
> in its size? Why shouldnt it be more simple, like below?
>
> uint8_t dummy[(1 << CPU_TLB_ENTRY_BITS) -
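The odd-looking term is the usual round-up-to-alignment trick: it counts the
padding bytes the compiler inserts before the pointer-sized addend field, so
that the dummy array makes the whole entry exactly 1 << CPU_TLB_ENTRY_BITS
bytes. A self-contained illustration of the idiom (not the qemu code itself):

    #include <assert.h>
    #include <stddef.h>

    /* Padding needed to round `used` bytes up to a multiple of `align`
     * (align must be a power of two). */
    static size_t pad_to(size_t used, size_t align)
    {
        return (-used) & (align - 1);
    }

    int main(void)
    {
        /* Three 4-byte target_ulongs followed by an 8-byte pointer: the
         * compiler inserts 4 bytes of padding before the pointer. */
        assert(pad_to(3 * 4, 8) == 4);
        /* With 8-byte target_ulongs there is no hidden padding. */
        assert(pad_to(3 * 8, 8) == 0);
        return 0;
    }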
> I sent out this series as a "feeler" to see if the approach was
> acceptable.
>
> Paul didn't reply to my reply addressing his concern, so I take that as
> he's OK with the approach as well :-)
I'd probably expose this as an asynchronous write rather than a nonblocking
operation. However both ha
> Does this mean that virtio-blk supports all three combinations?
>
>1. FLUSH that isn't a barrier
>2. FLUSH that is also a barrier
>3. Barrier that is not a flush
>
> 1 is good for fsync-like operations;
> 2 is good for journalling-like ordered operations.
> 3 sounds like it doesn't
> Alexander Graf wrote:
> > They should be atomic. TCG SMP swaps between different vCPUs only
> > after translation blocks are done. In fact, the only way I'm aware
> > of to stop the execution of a TB mid-way is a page fault.
>
> A page fault would interrupt it if the atomic is implemented as
> a
> I really dislike the idea of adding another function for this. Can you
> explain why you need this functionality for virtio-console and why this
> functionality isn't needed for everything else?
This functionality should (in principle) be used by all serial port
implementations.
Physical seri
> On 05/05/2010 08:34 AM, Paul Brook wrote:
> >> I really dislike the idea of adding another function for this. Can you
> >> explain why you need this functionality for virtio-console and why this
> >> functionality isn't needed for everything else?
>
> > What about another cache=... value instead of adding more options? I'm
> > quite sure you'll only ever need this with writeback caching. So we
> > could have cache=none|writethrough|writeback|wb-noflush or something
> > like that.
I agree.
> The cache option really isn't too useful. There's
> > cache=always (or a more scary name like cache=lie to defend against
> > idiots)
> >
> >Reads and writes are cached. Guest flushes are ignored. Useful for
> >dumb guests in non-critical environments.
>
> I really don't believe that we should support a cache=lie. There are
> many othe
> > In a development environment the rules can be a bit different. For
> > example if you're testing an OS installer then you really don't want to
> > be passing magic mount options. If the host machine dies then you don't
> > care about the state of the guest because you're going to start from
> >
> > I disagree. We should not be removing or rejecting features just because
> > they allow you to shoot yourself in the foot. We probably shouldn't be
> > enabling them by default, but that's a whole different question.
>
> I disagree and think the mentality severely hurts usability. QEMU's
>
> Paul Brook wrote:
> > cache=none:
> > No host caching. Reads and writes both go directly to underlying
> > storage.
> >
> > Useful to avoid double-caching.
> >
> > cache=writethrough
> >
> > Reads are cached. Writes go directly
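Roughly speaking, these cache= modes map onto host open(2) flags. An
illustrative sketch only; the real logic lives in the block layer and differs
in detail:

    #define _GNU_SOURCE             /* O_DIRECT is a GNU extension */
    #include <fcntl.h>

    /* cache=none bypasses the host page cache entirely; cache=writethrough
     * keeps the read cache but completes writes only when they reach stable
     * storage; cache=writeback uses neither flag. */
    static int open_disk_image(const char *filename, int host_cache,
                               int writethrough)
    {
        int flags = O_RDWR;
        if (!host_cache)
            flags |= O_DIRECT;      /* cache=none */
        else if (writethrough)
            flags |= O_DSYNC;       /* cache=writethrough */
        return open(filename, flags);
    }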
> > Paul Brook wrote:
> > > cache=none:
> > > No host caching. Reads and writes both go directly to underlying
> > > storage.
> > >
> > > Useful to avoid double-caching.
> > >
> > > cache=writethrough
> > >
> On 05/11/2010 10:53 AM, Paul Brook wrote:
> >>> I disagree. We should not be removing or rejecting features just
> >>> because they allow you to shoot yourself in the foot. We probably
> >>> shouldn't be enabling them by default, but that'
> > But that is no different from what we have today. Users who update their
> > qemu and see issues with libvirt can also be asked to update libvirt. I
> > have already had several cases where I needed to do that anyway.
>
> The general policy of QEMU has been to try and avoid known breakage of
> The other solution would be to use the DirectFB driver for SDL which
> would allow doing roughly the same as this patch. But that would mean
> having to deal with an additional layer in the graphical stack, which is
> not exactly what one wants from a performance or a complexity point of
> view.
> > I don't see a difference between the results. Apparently the barrier
> > option doesn't change a thing.
>
> Ok. I don't like it, but I can see how it's compelling. I'd like to
> see the documentation improved though. I also think a warning printed
> on stdio about the safety of the option w
> But it's easy to support migration to old qemu just
> by discarding the INTx state, and this is not
> at all harder, or worse, than migrating from old qemu
> to a new one.
Do we really care about migrating to older versions?
Migrating to a new version (backward compatibility) I see the use, it al
On Tuesday 24 November 2009, Gerd Hoffmann wrote:
> On 11/16/09 19:53, Paul Brook wrote:
> > Capping the amount of memory required for a transfer *is* implemented, in
> > both LSI and virtio-blk. The exception being SCSI passthrough where the
> > kernel API makes it
> > Reading in old state files is a whole lot easier (to write,
> > maintain, and stay sane) than producing state that is bug-compatible with
> > previous versions.
>
> It seems to me that old->new and new->old migrations are
> of about the same level of difficulty.
> Supporting one of these but no
> --- a/target-s390x/cpu.h
> +++ b/target-s390x/cpu.h
> @@ -30,8 +30,7 @@
>
> #include "softfloat.h"
>
> -#define NB_MMU_MODES 2 // guess
> -#define MMU_USER_IDX 0 // guess
> +#define NB_MMU_MODES 2
The fact that you're modifying a file you added earlier in the same patch
series gives me very
> So maybe add "use -device ? to get list of all devices"
> to help text?
>
> [...@tuck qemu]$ ~/qemu-git/bin/qemu-system-x86_64 -device ?
> /home/mst/qemu-git/bin/qemu-system-x86_64: invalid option -- '-device'
You need to stop your shell eating the ?
Paul
> No, this would slow us down because these are per-pin.
> We need a sum of interrupts so that config space
> can be updated by a single command.
> Interrupts are a fastpath, extra loops there should be avoided.
It's really not that much of a fast path. Unless you're doing something
particularly
>> It's really not that much of a fast path. Unless you're doing something
>> particularly obscure then even under heavy load you're unlikely to exceed
>> a few kHz.
>
> I think with kvm, a heavy disk-stressing benchmark can get higher.
I'd still expect this to be the least of your problems.
If not
> You might want to have a 'static uint8_t zero_length_malloc[0]' and
> return that instead of the magic cookie '1'. Makes the code more
> readable IMHO and you'll also have a symbol in gdb when debugging qemu.
Having multiple malloc calls return the same pointer sounds like a really bad idea.
Paul
On Tuesday 01 December 2009, Glauber Costa wrote:
> On Tue, Dec 01, 2009 at 12:57:27PM +0000, Paul Brook wrote:
> > > You might want to have a 'static uint8_t zero_length_malloc[0]' and
> > > return that instead of the magic cookie '1'. Makes the code more
> > Our cpu keeps multiple separate address spaces open at the same time
> > (similar to x86 with a bunch of cr0s), defined by address space control
> > elements in various control registers. Linux uses primary, secondary and
> > home space to address user space and kernel space. The third one is u
On Monday 30 November 2009, Alexander Graf wrote:
> Currently we have this stupid rule of disallowing:
>
> if (r)
> break;
This has been discussed to death several times, in several different places,
and with no clear resolution or consensus, so I'm going to make an executive
decision:
On Monday 07 December 2009, Artyom Tarasenko wrote:
> Can it be that qemu (-system-sparc in my case, but I guess it's more
> or less similar on all platforms) reacts to irqs slower than real
> hardware due to tcg optimizations?
Interrupts generally only trigger at branch instructions, or similar
> type *qemu_new(type, n_types);
> type *qemu_new0(type, n_types);
>
> type *qemu_renew(type, mem, n_types);
> type *qemu_renew0(type, mem, n_types);
It always annoys me having to specify an element count for things that aren't
arrays.
I suggest a single object allocation function, and an array
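A sketch of the split being suggested, with illustrative names rather than an
existing qemu API:

    #include <stdlib.h>

    /* Single-object allocation takes no element count at all; arrays get
     * their own, separate entry point. */
    #define new_object(type)     ((type *)calloc(1, sizeof(type)))
    #define new_array(type, n)   ((type *)calloc((n), sizeof(type)))

    /* usage (with made-up types):
     *     struct timer_state *s = new_object(struct timer_state);
     *     uint32_t *table = new_array(uint32_t, 256);
     */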
> > Using -icount should give you precise interrupt delivery.
>
> That's what I thought, but as I reported a few days ago, I couldn't
> find a good value for icount when using OBP.
> I tried a few values but keep getting "qemu: fatal: Raised interrupt
> while not in I/O function".
That's almost c
On Thursday 10 December 2009, Michael S. Tsirkin wrote:
> The recent e1000 bug made the importance of using
> symbolic macros for pci config access clear for me.
> So I started going over drivers and converting
> to symbolic constants instead of hard-coded ones.
> I did a large part until I ran out
> According to comment in exec-all.h:
> /* Deterministic execution requires that IO only be performed on the last
>instruction of a TB so that interrupts take effect immediately. */
>
> Sparc generator must then violate this assumption. Is the assumption
> valid also when not using icount and
On Saturday 12 December 2009, Dave Airlie wrote:
> So I've been musing on the addition of some sort of 3D passthrough for
> qemu (as I'm sure have lots of ppl)
IIUC a typical graphics system consists of several operations:
1) Allocate space for data objects[2] on server[1].
2) Upload data from cl
> -uint32_t VF; /* V is the bit 31. All other bits are undefined */
> +uint32_t VF; /* V is the bit 28. */
No. The original comment is correct.
Paul
On Saturday 19 December 2009, Richard Henderson wrote:
> Changes from round 3:
>
> * Drop movcond for now.
> * Only use movzbl and not xor in setcond.
I'm still catching up on mail backlog from this thread, but I'm concerned that
we're exposing setcond to the target translation code if we're p
> --- a/posix-aio-compat.c
> +++ b/posix-aio-compat.c
> @@ -502,7 +502,8 @@ static void aio_signal_handler(int signum)
> if (posix_aio_state) {
> char byte = 0;
>
> - write(posix_aio_state->wfd, &byte, sizeof(byte));
> + if (write(posix_aio_state->wfd, &byte, sizeof(byte)) != sizeo
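For reference, a common way to consume that return value honestly is to retry
on EINTR and otherwise ignore the failure, since a full pipe already means a
wakeup is pending. A generic sketch, not necessarily what the patch does:

    #include <errno.h>
    #include <unistd.h>

    static void notify(int wfd)
    {
        char byte = 0;
        ssize_t ret;

        do {
            ret = write(wfd, &byte, sizeof(byte));
        } while (ret < 0 && errno == EINTR);
    }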
> http://thread.gmane.org/gmane.comp.emulators.qemu/44869
>
> I'm not sure why Paul never pushed it but I think he was able to create
> the syborg board purely from a device tree.
The patches referenced above include purely device-tree based Syborg and
Stellaris boards.
It works fairly nicely f
> > We should just qemu_ram_alloc() that memory regardless of whether we
> > ever map it into the guest. Since roms can be large, we want to send
> > their contents over during the live part of migration. If we use
> > qemu_ram_alloc(), we get that for free.
>
> Currently live migration uses ra
> > Ram allocations should be associated with a device. The VMState stuff
> > should make this fairly straightforward.
>
> Right, but for the sake of simplicity, you don't want to treat that ram
> any differently than main ram wrt live migration. That's why I proposed
> adding a context id f
On Tuesday 22 December 2009, Anthony Liguori wrote:
> On 12/22/2009 05:26 AM, Michael S. Tsirkin wrote:
> > On Tue, Dec 08, 2009 at 06:18:18PM +0200, Michael S. Tsirkin wrote:
> >> The following fixes a class of long-standing bugs in qemu:
> >> when kvm is enabled, guest might access device structu
> > Given this is supposed to be portable code, I wonder if we should have
> > atomic ordered memory accessors instead.
> >
> > Paul
>
> Could you clarify please?
>
> The infiniband bits I used as base are very portable,
> I know they build on a ton of platforms. I just stripped
> a couple of inf
> The problem is that the whole define is just plain wrong which tells me
> that the code is using the bswap functions incorrectly. This really
> needs to be fixed by someone who knows the dbdma device. I don't see how
> making incorrect calls even more incorrect makes any difference.
The real pr
> > I mean have a single function that does both the atomic load/store and
> > the memory barrier. Instead of:
> >
> > stw_phys(addr, val)
> > barrier();
> >
> > We do:
> >
> > stw_phys_barrier(addr, val).
>
> Well, I think it's a good idea to use Linux APIs instead of
> inventing our own. A
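The shape being discussed, reduced to plain C: one helper performs both the
store and the ordering barrier so callers cannot forget either step. The
sketch below uses the GCC full-barrier builtin where qemu's own primitives
would go:

    #include <stdint.h>

    static inline void store_u16_ordered(volatile uint16_t *p, uint16_t val)
    {
        *p = val;
        __sync_synchronize();   /* full memory barrier */
    }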
> > > So possibly this means that we
> > > could optimize the barrier away, but I don't think this amounts to a
> > > serious issue, I guess portability/readability is more important.
> >
> > The more important issue is that regular devices which do not require
> > coherency or ordering can omit th
On Wednesday 23 December 2009, Michael S. Tsirkin wrote:
> On Wed, Dec 23, 2009 at 12:25:46PM +0000, Paul Brook wrote:
> > > > > So possibly this means that we
> > > > > could optimize the barrier away, but I don't think this amounts to
> > > > >
> > > > Given we need both, why not actually defined an API that gives you
> > > > this?
> > >
> > > Because, I do not want to define APIs, I want to reuse an existing one.
> >
> > Except that, as you said later in your email, no API exists for doing
> > atomic accesses, so you need different code
On Tuesday 12 January 2010, Isaku Yamahata wrote:
> To use pci host framework, use PCIHostState instead of PCIBus in
> PCIVPBState.
No.
pci_host.[ch] provides very specific functionality, it is not a generic PCI
host device. Specifically it provides indirect access to PCI config space via
a me
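"Indirect access" here means the classic address/data register pair: the guest
first writes a config-address cookie to one register, then moves data through
a second one. An illustrative sketch of the address encoding, following the
usual PC convention (cf. ports 0xcf8/0xcfc); the constants are for
illustration only:

    #include <stdint.h>

    #define CONFIG_ENABLE (1u << 31)

    static uint32_t config_address(unsigned bus, unsigned dev,
                                   unsigned fn, unsigned reg)
    {
        return CONFIG_ENABLE | (bus << 16) | (dev << 11) | (fn << 8)
               | (reg & 0xfc);
    }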
> I thought we will get rid of vpb_pci_config_addr, and fill in
> fields in PCIConfigAddress directly. If we don't, and still
> recode into PC format, this is not making code any prettier
> so I don't really see what this buys us.
I agree. This patch seems to be introducing churn for no benefit.
> - It would be good to limit the changes in the CPU emulation code to
> handle the TLS. For example, on MIPS, the TLS register must not be
> stored in the CPU state. Same for ARM.
I disagree. The TLS register is part of the CPU state. On many machines
(including ARMv6 CPUs) it's an actual CPU re
> Typically, gcc provides a built-in function ffs
Actually, no it doesn't.
As with many other standard functions, gcc will sometimes optimize it, maybe
expanding to inline code. However there's always the possibility of falling
back to the standard C library implementation.
Paul
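In other words, ffs() is a C library function that gcc merely knows how to
optimize:

    #include <strings.h>    /* ffs() is declared here and lives in libc */

    static int lowest_set_bit(int x)
    {
        /* gcc may expand this inline on some targets, but it is equally
         * entitled to emit a call to the library implementation. */
        return ffs(x);  /* 1-based index of the lowest set bit, 0 if none */
    }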
On Sunday 16 December 2007, Andrzej Zaborowski wrote:
> CVSROOT: /sources/qemu
> Module name: qemu
> Changes by: Andrzej Zaborowski 07/12/16 13:17:13
>
> Modified files:
> . : vl.c
>
> Log message:
> Redundant timer rearm optimisation by Anders Melchiorsen.
I
On Sunday 16 December 2007, Anders wrote:
> Paul Brook wrote:
> >>Redundant timer rearm optimisation by Anders Melchiorsen.
> >
> > I think this is incorrect.
> >
> > When a timer is modified, we need to rearm the host timer immediately. We
> > can
> In any case, vl.c's saving arrangements do save the buffer in
> phys_ram_base - but that isn't what the guest sees in the VGA memory
It doesn't matter what the guest physical mappings (if any) are.
> area. The guest sees the vga memory-mapped IO registers (whose
> meaning _is_ generally saved
> I don't really understand why the vga is handled in this way in qemu
> but then I'm not an expert on PC graphics hardware. Is it necessary
> or desirable for the VGA RAM to take up virtual address space in this
> way, or is there some other reason why VGA RAM in the ordinary vga
> driver is rega
On Monday 17 December 2007, Fabrice Bellard wrote:
> Laurent Vivier wrote:
> > This patch enhances the "-drive ,cache=off" mode with IDE drive emulation
> > by removing the buffer used in the IDE emulation.
> > ---
> > block.c     |   10 +++
> > block.h     |    2
> > block_int.h |    1
> > cpu
> - Qemu initializes all its memory to 0. Real hardware doesn't seem to
> do that. This means that usage of uninitialized memory is very hard
> to debug (because 0 is often a good value, while [random] is not, so
> the problem can only be seen on real hardware, which makes it hard to
> de
On Tuesday 01 January 2008, Vinod E wrote:
> Hi,
>I have a special kind of PCI device on my system.
> I want QEMU to emulate that device and have Guest VM
> see that. Can someone point me to any documentation
> available on how I/O device handling is done in QEMU?
Read the source. There are p
On Wednesday 02 January 2008, Robert Reif wrote:
> Sparc32 has a 64 bit counter that should be read and written as 64
> bits but that isn't supported in QEMU. I did a quick hack to add
> 64 bit i/o and converted sparc32 to use it and it seems to work.
> I'm suppling the sparc changes to get commen
> Also the opaque parameter may need to be different for each function,
> it just didn't matter for the unassigned memory case.
Do you really have systems where independent devices need to respond to
different sized accesses to the same address?
Paul
On Wednesday 02 January 2008, Blue Swirl wrote:
> On 1/2/08, Paul Brook <[EMAIL PROTECTED]> wrote:
> > > Also the opaque parameter may need to be different for each function,
> > > it just didn't matter for the unassigned memory case.
> >
> > Do you r
> s = (ptimer_state *)qemu_mallocz(sizeof(ptimer_state));
> + if (!s)
> + return NULL;
None of the callers bother to check the return value, and even if they did I
don't think there's any point trying to gracefully handle OOM. Just abort
and be done with it.
I suggest guaranteei
> We currently don't check the return value in the init function where the
> new timer is created but do check it wherever it is used which is backwards
> and wasteful.
>
> You would prefer that qemu just segfaults rather than die gracefully?
I think qemu should die before it returns from qemu_mal
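A sketch of the guarantee being argued for; illustrative only, not
qemu_malloc's actual implementation:

    #include <stdio.h>
    #include <stdlib.h>

    /* Either returns usable memory or kills the process, so callers never
     * need to check for NULL. */
    static void *must_malloc(size_t size)
    {
        void *p = malloc(size ? size : 1);
        if (!p) {
            fprintf(stderr, "out of memory allocating %zu bytes\n", size);
            abort();
        }
        return p;
    }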
On Thursday 03 January 2008, Blue Swirl wrote:
> On 1/3/08, Paul Brook <[EMAIL PROTECTED]> wrote:
> > On Wednesday 02 January 2008, Blue Swirl wrote:
> > > On 1/2/08, Paul Brook <[EMAIL PROTECTED]> wrote:
> > > > > Also the opaque parameter may need to
> > As I said earlier, the only correct way to handle memory accesses is to
> > be able to consider a memory range and its associated I/O callbacks as
> > an object which can be installed _and_ removed. It implies that there is
> > a priority system close to what you described. It is essential to
>
> ... Ok, to cut a long question short: Is there any hardware support in qemu
> for doing monitoring (that goes deeper than using "time") and has anyone
> ever tested something that could work?
Probably your application wants the performance counters. Qemu doesn't emulate
those.
Besides which, q
> Does anyone have an idea on how I can measure performance in qemu to a
> somewhat accurate level? I have modified qemu (the memory handling) and the
> linux kernel and want to find out the penalty this introduced... does
> anyone have any comments / ideas on this?
Short answer is you probably ca
> Well, the measuring I had in mind partly concentrates on TLB misses, page
> faults, etc. (in addition to the cycle measuring). guess i'll have to
> implement something for myself in qemu :-/
Be aware that the TLB qemu uses behaves very differently to a real CPU TLB. If
you want to get TLB miss s
On Friday 04 January 2008, Markus Hitter wrote:
> Am 03.01.2008 um 15:02 schrieb Paul Brook:
> > Having to check every return value is extremely tedious and (as
> > you've proved) easy to miss.
>
> Checking every return value is part of writing reliable code.
> > The latter depends how general you want the solution to be. One
> > possibility is for the device DMA+registration routines to map everything
> > onto CPU address space.
>
> Interesting idea, do you mean that all individual bus address spaces
> could exist in system view in the same large address
> On modern operating systems, allocations only return zero when you exhaust
> virtual memory. Returning nonzero doesn't mean you have enough memory,
> because it's given you a redundant copy on write mapping of the zero page
> and will fault in physical pages when you write to 'em, which has _no_
> +/* Some generally useful CD-ROM information */
> +#define CD_MINS 99 /* max. minutes per CD */
> +#define CD_SECS 60 /* seconds per minute */
> +#define CD_FRAMES 75 /* frames per second */
> +#define CD_FRAMESIZE 2048
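Those constants encode the usual CD geometry: addresses are minute/second/frame
triples at 75 frames per second, and logical block 0 sits two seconds
(150 frames) into the disc. A small illustrative sketch built on the same
constants:

    #include <stdint.h>

    #define CD_SECS   60    /* seconds per minute */
    #define CD_FRAMES 75    /* frames per second */

    static uint32_t msf_to_lba(unsigned m, unsigned s, unsigned f)
    {
        return (m * CD_SECS + s) * CD_FRAMES + f - 150;
    }

    /* A full 99-minute disc holds 99 * 60 * 75 = 445500 frames, i.e. about
     * 870 MB of 2048-byte data frames. */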
> -The host kernel was configured with dynamic tick & hi-res timers, to
> allow the desired timer resolution. USB 2.0 microframe is 125usec.
Requiring an 8kHz timer is a non-starter.
The 100kHz "retry" timer is even more bogus.
Qemu isn't capable of this kind of realtime response. You need to fi
On Tuesday 08 January 2008, Dor Laor wrote:
> On Tue, 2008-01-08 at 01:30 +0000, Paul Brook wrote:
> > > -The host kernel was configured with dynamic tick & hi-res timers, to
> > > allow the desired timer resolution. USB 2.0 microframe is 125usec.
>
> It still wor
> the next step would be to emulate LSI SCSI chips, eh?
Qemu already does.
Paul
> In the absence of a global configuration file, a reasonably sane way to
> support this configuration system wide is to use an environmental
> variable. QEMU already uses a number of global variables for
> configuring audio options.
I'd really prefer we didn't do this, and preferably obsoleted/r
On Sunday 20 January 2008, Filip Navara wrote:
> Hello,
>
> attached is a patch that implements the SMBIOS within the Bochs BIOS code.
> Complete list of changes:
This should be submitted to the Bochs list.
Paul
> Well, what about adding a new backend phase to gcc generating what we
> expect for our purpose? Ok, it is rather easy to have a branch in gcc,
> harder to have it accepted in the main-stream gcc... :-) With a good
> argumentation...
IMHO (as a full time gcc developer) it's easier to just impleme
On Monday 21 January 2008, C.W. Betts wrote:
> I was thinking, maybe qemu could use threads for at least every processor
> it emulates (on emulated smp computers) and, at the most, every single
> device emulated. This would help users who have multiple cores, but it
> might impact performance on t
> Is this a reasonable merge strategy? We won't introduce regressions but
> I can't guarantee these new things will work cross-architecture.
I think it depends to some extent whether things will need rewriting to be
made cross-architecture. In particular if this requires interface changes.
Thi
> > Why don't you just put your custom flags in CFLAGS, not CPPFLAGS?
> > The former is deliberately left for the user to override.
>
> Several (but not all AFAICT) of the target subdirectory Makefiles set
> CFLAGS.
Really? Which ones?
Paul
> What I mean is: if you want
> for any reason to build qemu in a weird way then you're going to have
> to edit config-host.mak (or somewhere similar) in any case. You
> probably want to set some CPPFLAGS as well as various other things.
> If you do this at the moment then you have to reproduce al
> Saying CPPFLAGS+= is much more convenient if for any reason the
> external build environment would like to pass unusual CPPFLAGS.
No. This doesn't do what you think it does.
The most common way of overriding these variables is to pass them on the
commandline, i.e. "make CPPFLAGS=-blah". This ov
On Friday 25 January 2008, Ian Jackson wrote:
> Paul Brook writes ("Re: [Qemu-devel] [PATCH] CPPFLAGS+= in
Makefile.target"):
> > In that case you should always provide a definition in config-host.mak.
> > Under some circumstances make may inherit initial values
> Providing a definition in config-host.mak involves duplicating the
> value, which can't be right.
Huh? No it doesn't. config-host.mak contains
CPPFLAGS=
then Makefile.target contains
CPPFLAGS+=whatever
> If there's no other way to do it then
> there should be a reference to USER_CPPFLAGS or
On Thursday 31 January 2008, Anthony Liguori wrote:
> KVM supports more than 2GB of memory for x86_64 hosts. The following patch
> fixes a number of type related issues where int's were being used when they
> shouldn't have been. It also introduces CMOS support so the BIOS can build
> the appropr
> -cmos_init(ram_size, above_4g_mem_size, boot_device, hd);
> +cmos_init(ram_size, above_4g_mem_size, boot_device, hd, smp_cpus);
smp_cpus is a global variable. Why bother passing it around?
Are the CMOS contents documented anywhere?
Paul
> >> +#define PHYS_RAM_MAX_SIZE (2047 * 1024 * 1024 * 1024ULL)
> >
> > This seems fairly arbitrary. Why? Any limit is certainly target specific.
>
> On a 32-bit host, a 2GB limit is pretty reasonable since you're limited
> in virtual address space. On a 64-bit host, there isn't this
> fundamental
> > Are the CMOS contents documented anywhere?
>
> No, but if you have a suggestion of where to document them, I'll add
> documentation.
I suggest in or with the BIOS sources.
As we're using a common BIOS it seems a good idea to make sure this kind of
thing is coordinated.
Paul
> > I agree with the fact that ram_size should be 64 bit. Maybe each
> > machine could test the value and emit an error message if it is too
> > big. Maybe a uint64_t would be better though.
>
> uint64_t is probably more reasonable. I wouldn't begin to know what the
> appropriate amount of ram wa
On Friday 01 February 2008, Johannes Schindelin wrote:
> Hi,
>
> On Fri, 1 Feb 2008, Christian Laursen wrote:
> > Gervase Lam <[EMAIL PROTECTED]> writes:
> > > From my minimal understanding of what he is saying, it seems he would
> > > prefer there to be a BOCHS graphics adapter, which would
> virtio could still be made to work with map cache. You would just have
> to change it to be able to map more than one page contiguously. As I
> mentioned though, it just starts getting ugly.
That's why you should be using the cpu_physical_memory_rw routines :-)
Anything that assumes large line
CVSROOT: /sources/qemu
Module name: qemu
Changes by: Paul Brook 08/02/01 22:45:05
Modified files:
. : Makefile.target
Log message:
Add missing dependencies on generated files (for parallel build).
CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs