At 10/10/2011 03:22 PM, Jan Kiszka Write:
> On 2011-10-10 09:17, Wen Congyang wrote:
>> At 10/10/2011 03:01 PM, Jan Kiszka Write:
>>> On 2011-10-10 08:59, Wen Congyang wrote:
>>>> At 10/10/2011 02:52 PM, Jan Kiszka Write:
>>>>> On 2011-10-10 04:21, Wen Congyang wrote:
>>>>>> At 10/09/2011 06:23 PM, Richard W.M. Jones Write:
>>>>>>> On Sun, Oct 09, 2011 at 10:49:57AM +0200, Jan Kiszka wrote:
>>>>>>>> As explained in the other replies: It is way more future-proof to use
>>>>>>>> an interface for this which was designed for it (remote gdb) instead
>>>>>>>> of artificially relaxing reasonable constraints of the migration
>>>>>>>> mechanism plus having to follow that format with the post-processing
>>>>>>>> tool.
>>>>>>>
>>>>>>> Any interface that isn't "get this information off my production
>>>>>>> server *now*" so that I can get the server restarted, and send it to
>>>>>>> an expert to analyse -- is a poor interface, whether it was designed
>>>>>>> like that or not. Perhaps we don't have the right interface at all,
>>>>>>> but remote gdb is not it.
>>>>>>
>>>>>> What about the following idea?
>>>>>>
>>>>>> Introduce a new monitor command named dump, and this command accepts a
>>>>>> filename. We can reuse almost all of the migration code. We use this
>>>>>> command to dump the guest's memory, so there is no need to check
>>>>>> whether the guest has an unmigratable device.
>>>>>
>>>>> I do not want to reject this proposal categorically, but I would like
>>>>> to see the gdb path fail /wrt essential requirements first. So far I
>>>>> don't see that it would.
>>>>
>>>> 'gdb path fail /wrt essential requirements'
>>>>
>>>> what does it mean?
>>>
>>> That you explain why reading memory and processor states via the remote
>>> gdb interface and dumping them into a proper core file cannot be made to
>>> work for you.
>>
>> First, I think crash cannot analyze such a core file. But that is not
>> very important.
>>
>> What is the remote gdb interface?
>
> man qemu -> gdb.
>
>> Do you mean that the supporter uses gdb from another machine
>
> Or locally. There are various transports possible.
>
>> to connect to the customer's machine and get the data? If so, this way
>> cannot be used when the customer needs to dump the guest's memory
>> automatically when the watchdog times out.
>
> It is just another channel that can conceptually be used like the
> monitor, by a management app like libvirt, directly or indirectly via a
> scripted gdb frontend, or also by a human who wants to save some ongoing
> gdb session for later analysis. This dual use makes such an approach the
> preferred one.
Is the following right?

1. execute the monitor command 'gdbserver'
2. run gdb and then 'target remote :1234'

(A rough sketch of the full sequence is at the end of this mail.)

But, unfortunately, the monitor command gdbserver does not exist when we
use JSON (QMP) to connect to the monitor.

Thanks
Wen Congyang

>
> Jan
>
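For reference, a minimal sketch of the sequence I have in mind. Port 1234
is qemu's default gdbserver device; the range 0x0-0x8000000 is only an
illustration for a 128MB guest, a real tool would take the range from the
machine's memory map:

    # in the human monitor of the running guest
    (qemu) gdbserver tcp::1234

    # from gdb on the same or another machine, non-interactively
    $ gdb -batch \
          -ex 'target remote localhost:1234' \
          -ex 'info registers' \
          -ex 'dump binary memory guest-mem.bin 0x0 0x8000000' \
          -ex 'detach'

The same steps could of course be driven by a management app or a
scripted gdb frontend instead of being typed by hand.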