[lldb-dev] A typo on the "Testing LLDB" tutorial page?
On the "Testing LLDB" tutorial page, in the description under "RUNNING TESTS -> Running the full test suite" section, should this > cmake -DLLDB_TEST_ARGS="-A i386 -C /path/to/custom/clang" -G Ninja > ninja check-lldb be > cmake -DLLDB_TEST_USER_ARGS="-A i386 -C /path/to/custom/clang" -G Ninja > ninja check-lldb Regards, Ramana ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
[lldb-dev] Debugging ELF relocatable files using LLDB
It looks like LLDB doesn't like ELF relocatable files for debugging and asserts with the following message when tried:

    /lldb/source/Plugins/ObjectFile/ELF/ObjectFileELF.cpp:2228: unsigned int ObjectFileELF::RelocateSection(...): Assertion `false && "unexpected relocation type"' failed.

Are we not supposed to debug ELF relocatable files on LLDB, or am I missing something?

If we cannot debug the relocatable files, is it _simply_ because those files lack program headers (program memory map) and relocations are yet to be processed (for debug info), or are there other reasons?

For our target, the assembler output itself is a self-contained ELF and hence will not have external references (both code and data). I am wondering if I can debug these ELF files on LLDB with minimal changes which do not require a full (or proper) linking step, and would appreciate any pointers on that.

Thanks,
Ramana
Re: [lldb-dev] Debugging ELF relocatable files using LLDB
On Thu, Feb 16, 2017 at 10:26 PM, Greg Clayton wrote:
>
>> On Feb 16, 2017, at 3:51 AM, Ramana via lldb-dev wrote:
>>
>> It looks like LLDB doesn't like ELF relocatable files for debugging and asserts with the following message when tried:
>>
>>     /lldb/source/Plugins/ObjectFile/ELF/ObjectFileELF.cpp:2228: unsigned int ObjectFileELF::RelocateSection(...): Assertion `false && "unexpected relocation type"' failed.
>>
>> Are we not supposed to debug ELF relocatable files on LLDB, or am I missing something?
>>
>> If we cannot debug the relocatable files, is it _simply_ because those files lack program headers (program memory map) and relocations are yet to be processed (for debug info), or are there other reasons?
>>
>> For our target, the assembler output itself is a self-contained ELF and hence will not have external references (both code and data). I am wondering if I can debug these ELF files on LLDB with minimal changes which do not require a full (or proper) linking step, and would appreciate any pointers on that.
>>
>> Thanks,
>> Ramana
>
> Looks like you just need to add support for the 32 bit relocations:
>
>     if (hdr->Is32Bit()) {
>       switch (reloc_type(rel)) {
>       case R_386_32:
>       case R_386_PC32:
>       default:
>         assert(false && "unexpected relocation type");
>       }
>     } else {
>       switch (reloc_type(rel)) {
>       case R_X86_64_64: {
>         symbol = symtab->FindSymbolByID(reloc_symbol(rel));
>         if (symbol) {
>           addr_t value = symbol->GetAddressRef().GetFileAddress();
>           DataBufferSP &data_buffer_sp = debug_data.GetSharedDataBuffer();
>           uint64_t *dst = reinterpret_cast<uint64_t *>(
>               data_buffer_sp->GetBytes() + rel_section->GetFileOffset() +
>               ELFRelocation::RelocOffset64(rel));
>           *dst = value + ELFRelocation::RelocAddend64(rel);
>         }
>         break;
>       }
>       case R_X86_64_32:
>       case R_X86_64_32S: {
>         symbol = symtab->FindSymbolByID(reloc_symbol(rel));
>         if (symbol) {
>           addr_t value = symbol->GetAddressRef().GetFileAddress();
>           value += ELFRelocation::RelocAddend32(rel);
>           assert(
>               (reloc_type(rel) == R_X86_64_32 && (value <= UINT32_MAX)) ||
>               (reloc_type(rel) == R_X86_64_32S &&
>                ((int64_t)value <= INT32_MAX && (int64_t)value >= INT32_MIN)));
>           uint32_t truncated_addr = (value & 0xFFFFFFFF);
>           DataBufferSP &data_buffer_sp = debug_data.GetSharedDataBuffer();
>           uint32_t *dst = reinterpret_cast<uint32_t *>(
>               data_buffer_sp->GetBytes() + rel_section->GetFileOffset() +
>               ELFRelocation::RelocOffset32(rel));
>           *dst = truncated_addr;
>         }
>         break;
>       }
>       case R_X86_64_PC32:
>       default:
>         assert(false && "unexpected relocation type");
>       }
>     }
>
> I am guessing you will do something similar to the x86-64 stuff.

I tried to mimic the x86_64 relocations handling for our target but am getting a segmentation fault while trying to write to the 'dst' location. In fact, the x86_64 path also segfaults while trying to write to the 'dst' location. I just tried to debug the following simple program for x86_64.

main.c:

    int main () {
      return 0;
    }

    $ clang main.c -o main_64b.o --target=x86_64 -c -g
    $ lldb main_64b.o
    (lldb) target create "main_64b.o"
    Current executable set to 'main_64b.o' (x86_64).
    (lldb) source list
    Segmentation fault (core dumped)

Am I doing something wrong, or is support for debugging x86_64 ELF relocatable files using LLDB broken? BTW, I am using LLVM v3.6 and LLDB v3.6.

Regards,
Ramana
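For anyone adapting this to a new target, the patching mechanics boil down to "compute symbol address plus addend, truncate, store at the relocation's offset in the section's debug data". Below is a minimal, self-contained sketch of just that arithmetic (hypothetical names, not the actual ObjectFileELF code). It deliberately uses memcpy for the store: writing through a casted uint32_t* as above assumes the destination buffer is writable and suitably aligned, which is worth checking when chasing a segfault at 'dst'.

    // Hypothetical standalone sketch of a 32-bit absolute relocation patch.
    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    void ApplyAbs32(std::vector<uint8_t> &section_data, uint64_t reloc_offset,
                    uint64_t symbol_addr, int64_t addend) {
      uint64_t value = symbol_addr + addend; // resolve the relocation
      assert(value <= UINT32_MAX && "value does not fit in 32 bits");
      uint32_t truncated = static_cast<uint32_t>(value & 0xFFFFFFFF);
      assert(reloc_offset + sizeof(truncated) <= section_data.size());
      // memcpy instead of *reinterpret_cast<uint32_t *>(...): no alignment or
      // writability surprises as long as section_data is a real, owned buffer.
      std::memcpy(section_data.data() + reloc_offset, &truncated,
                  sizeof(truncated));
    }

    int main() {
      std::vector<uint8_t> debug_info(16, 0); // stand-in for a .debug_info copy
      ApplyAbs32(debug_info, /*reloc_offset=*/4, /*symbol_addr=*/0x104a0,
                 /*addend=*/8);
      return 0;
    }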
[lldb-dev] Current status of simultaneous multiple target debugging in LLDB
Hi,

I have to implement a debugger for our HW, which comprises a CPU plus a GPU, where the GPU code is written in OpenCL and is accelerated through the OpenVX API in a C++ application running on the CPU. Our requirement is that we should be able to debug the code running on both the CPU and the GPU simultaneously within the same LLDB debug session.

Looking at the mailing list archive, I see that there were discussions about this feature in LLDB here: http://lists.llvm.org/pipermail/lldb-dev/2014-August/005074.html

What is the present status (i.e. what works today and what remains to be improved) of simultaneous multiple-target debugging support in LLDB? Were the changes contributed to LLDB mainstream?

How can I access the material for http://llvm.org/devmtg/2014-10/#bof5 (Future directions and features for LLDB)?

Appreciate any help/guidance provided on the same.

Thanks,
Ramana
Re: [lldb-dev] Current status of simultaneous multiple target debugging in LLDB
Hi,

Could someone please help me on the below?

Thanks,
Ramana

On Mon, Jun 12, 2017 at 11:58 AM, Ramana wrote:
> Hi,
>
> I have to implement a debugger for our HW, which comprises a CPU plus a GPU, where the GPU code is written in OpenCL and is accelerated through the OpenVX API in a C++ application running on the CPU. Our requirement is that we should be able to debug the code running on both the CPU and the GPU simultaneously within the same LLDB debug session.
>
> Looking at the mailing list archive, I see that there were discussions about this feature in LLDB here: http://lists.llvm.org/pipermail/lldb-dev/2014-August/005074.html
>
> What is the present status (i.e. what works today and what remains to be improved) of simultaneous multiple-target debugging support in LLDB? Were the changes contributed to LLDB mainstream?
>
> How can I access the material for http://llvm.org/devmtg/2014-10/#bof5 (Future directions and features for LLDB)?
>
> Appreciate any help/guidance provided on the same.
>
> Thanks,
> Ramana
[lldb-dev] Remote debugging ARM target from x86 host
Hi,

I am trying to remote debug an ARM (linux) target from an x86 (linux) host and I am getting the following error while trying to launch a process. Local debugging on ARM works.

    error: connect remote failed (invalid host:port specification: '10.10.2.3')
    error: process launch failed: invalid host:port specification: '10.10.2.3'

It appears the above error is because the gdb-remote is returning the communication port as zero.

    < 36> send packet: $qLaunchGDBServer;host:svrlin249;#bb
    < 19> read packet: $pid:298;port:0;#bf

What are the possible reasons for the above behavior from gdb-remote, and how could I resolve this?

If it helps, below is the full log.

    (lldb) log enable lldb comm
    (lldb) log enable gdb-remote packets
    (lldb) platform select remote-linux
      Platform: remote-linux
     Connected: no
    (lldb) platform connect connect://10.10.2.3:500
    0x915bd78 Communication::Communication (name = gdb-remote.client)
    0x915bd78 Communication::Disconnect ()
    0x915bd78 Communication::Disconnect ()
    0x915bd78 Communication::Connect (url = connect://10.10.2.3:500)
    Socket::TcpConnect (host/port = 10.10.2.3:500)
    TCPSocket::Connect (host/port = 10.10.2.3:500)
    0x915bd78 Communication::Write (src = 0xbfcb7433, src_len = 1) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0xbfcb7433, src_len = 1, flags = 0) => 1 (error = (null))
    < 1> send packet: +
    this = 0x0915BD78, dst = 0xBFCB53EC, dst_len = 8192, timeout = 1 us, connection = 0x0915F578
    0x915bd78 Communication::Write (src = 0x916022c, src_len = 19) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0x916022c, src_len = 19, flags = 0) => 19 (error = (null))
    history[1] tid=0x7cbf < 1> send packet: +
    < 19> send packet: $QStartNoAckMode#b0
    this = 0x0915BD78, dst = 0xBFCB51AC, dst_len = 8192, timeout = 600 us, connection = 0x0915F578
    0x915f608 Socket::Read() (socket = 7, src = 0xbfcb51ac, src_len = 7, flags = 0) => 7 (error = (null))
    < 1> read packet: +
    < 6> read packet: $OK#9a
    0x915bd78 Communication::Write (src = 0xbfcb50f3, src_len = 1) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0xbfcb50f3, src_len = 1, flags = 0) => 1 (error = (null))
    < 1> send packet: +
    0x915bd78 Communication::Write (src = 0x9161ff4, src_len = 13) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0x9161ff4, src_len = 13, flags = 0) => 13 (error = (null))
    < 13> send packet: $qHostInfo#9b
    this = 0x0915BD78, dst = 0xBFCB510C, dst_len = 8192, timeout = 100 us, connection = 0x0915F578
    0x915f608 Socket::Read() (socket = 7, src = 0xbfcb510c, src_len = 316, flags = 0) => 316 (error = (null))
    < 316> read packet: $triple:61726d2d2d6c696e75782d676e75656162696866;ptrsize:4;watchpoint_exceptions_received:before;endian:little;os_version:3.10.31;os_build:332e31302e33312d6c7473692d30323836312d6738303161343066;os_kernel:233520534d5020467269204d61792031332031353a35383a3232204953542032303136;hostname:736f63667067615f617272696135;#0a
    0x915bd78 Communication::Write (src = 0x915fe9c, src_len = 18) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0x915fe9c, src_len = 18, flags = 0) => 18 (error = (null))
    < 18> send packet: $qGetWorkingDir#91
    this = 0x0915BD78, dst = 0xBFCB50FC, dst_len = 8192, timeout = 100 us, connection = 0x0915F578
    0x915f608 Socket::Read() (socket = 7, src = 0xbfcb50fc, src_len = 24, flags = 0) => 24 (error = (null))
    < 24> read packet: $2f686f6d652f726f6f74#4b
    0x915bd78 Communication::Write (src = 0x915fe9c, src_len = 19) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0x915fe9c, src_len = 19, flags = 0) => 19 (error = (null))
    < 19> send packet: $qQueryGDBServer#cb
    this = 0x0915BD78, dst = 0xBFCB531C, dst_len = 8192, timeout = 100 us, connection = 0x0915F578
    0x915f608 Socket::Read() (socket = 7, src = 0xbfcb531c, src_len = 7, flags = 0) => 7 (error = (null))
    < 7> read packet: $E04#a9
      Platform: remote-linux
        Triple: arm-*-linux-gnueabihf
    OS Version: 3.10.31 (3.10.31-ltsi-02861-g801a40f)
        Kernel: #5 SMP Fri May 13 15:58:22 IST 2016
      Hostname: socfpga_arria5
     Connected: yes
    WorkingDir: /home/root
    (lldb) file main
    0x915bd78 Communication::Write (src = 0x91638fc, src_len = 137) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0x91638fc, src_len = 137, flags = 0) => 137 (error = (null))
    < 137> send packet: $qModuleInfo:2f686f6d652f72616d616e616e2f776f726b5f726f6f742f546f545f6c6c64622f74657374732f6d61696e;61726d2d2d6c696e75782d656162696866#f1
    this = 0x0915BD78, dst = 0xBFCB172C, dst_len = 8192, timeout = 100 us, connection = 0x0915F578
    0x915f608 Socket::Read() (socket = 7, src = 0xbfcb172c, src_len = 7, flags = 0) => 7 (error = (null))
    < 7> read packet: $E03#a8
    Current executable set to 'main' (arm).
    (lldb) b main
    Breakpoint 1: where = main`main + 4 at main.c:4, address = 0x000104a0
    (lldb) run
    0x915bd78 Communication::Write (src = 0x917bae4, src_len = 36) connection = 0x915f578
    0x915f608 Socket::Write() (socket = 7, src = 0x917bae4
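An aside for anyone reading these logs: the string payloads in gdb-remote packets (triple, hostname, working directory, and so on) are hex-encoded ASCII, so the `2f686f6d652f726f6f74` in the qGetWorkingDir reply above is just "/home/root". A small self-contained decoder sketch:

    #include <iostream>
    #include <string>

    // Decode a gdb-remote hex-ASCII payload (two hex digits per byte).
    std::string DecodeHexAscii(const std::string &hex) {
      std::string out;
      for (size_t i = 0; i + 1 < hex.size(); i += 2)
        out += static_cast<char>(std::stoi(hex.substr(i, 2), nullptr, 16));
      return out;
    }

    int main() {
      // Payload from the qGetWorkingDir reply in the log above.
      std::cout << DecodeHexAscii("2f686f6d652f726f6f74") << "\n"; // /home/root
      return 0;
    }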
Re: [lldb-dev] Remote debugging ARM target from x86 host
nv[15]="XDG_RUNTIME_DIR=/run/user/0" env[16]="_=/mnt/var/binaries/arm_v5.0_orig/bin/lldb-server" env[17]=NULL GDBRemoteCommunication::StartDebugserverProcess() debugserver listens 0 port Thanks, Ramana > -- > Qualcomm Innovation Center, Inc. > The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a > Linux Foundation Collaborative Project > >> -Original Message- >> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Greg >> Clayton via lldb-dev >> Sent: Wednesday, August 23, 2017 12:45 PM >> To: Hans Wennborg >> Cc: Ramana ; LLDB Dev > d...@lists.llvm.org> >> Subject: Re: [lldb-dev] Remote debugging ARM target from x86 host >> >> Port zero should never be returned as a valid port. We do bind to port zero >> just >> so we don't try and pick a port at random just to find it is being used. >> When we >> bind to port 9, we must find the actual port we bound to and return that. >> Seems >> something has gone wrong with the code that discovers the port that was >> actually bound and is not reporting that back correctly. >> >> >> Should be straight forward to do by debugging the function >> GDBRemoteCommunicationServerPlatform::Handle_qLaunchGDBServer(...) in >> GDBRemoteCommunicationServerPlatform.cpp and see what is going on and >> why it is returning 0 as the port. >> >> Greg >> >> > On Aug 23, 2017, at 9:44 AM, Hans Wennborg via lldb-dev > d...@lists.llvm.org> wrote: >> > >> > This was marked as an lldb 5.0.0 release blocker since it's a >> > regression from 4.0.1: https://bugs.llvm.org/show_bug.cgi?id=34183 >> > >> > lldb-dev: Is there any interest in fixing this bug? >> > >> > On Fri, Aug 4, 2017 at 10:13 PM, Ramana via lldb-dev >> > wrote: >> >> Hi, >> >> >> >> I am trying to remote debug ARM (linux) target from x86 (linux) host >> >> and I am getting the following error while trying to launch a process. >> >> The local debugging on ARM works. >> >> >> >> error: connect remote failed (invalid host:port specification: >> >> '10.10.2.3') >> >> error: process launch failed: invalid host:port specification: '10.10.2.3' >> >> >> >> It appears the above error is because the gdb-remote is returning the >> >> communication port as zero. >> >> >> >> < 36> send packet: $qLaunchGDBServer;host:svrlin249;#bb >> >> < 19> read packet: $pid:298;port:0;#bf >> >> >> >> What are the possible reasons for the above behavior from gdb-remote >> >> and how I could resolve this? >> >> >> >> If it helps, below is the full log. 
>> >> >> >> (lldb) log enable lldb comm >> >> (lldb) log enable gdb-remote packets >> >> (lldb) platform select remote-linux >> >> Platform: remote-linux >> >> Connected: no >> >> (lldb) platform connect connect://10.10.2.3:500 >> >> 0x915bd78 Communication::Communication (name = gdb-remote.client) >> >> 0x915bd78 Communication::Disconnect () >> >> 0x915bd78 Communication::Disconnect () >> >> 0x915bd78 Communication::Connect (url = connect://10.10.2.3:500) >> >> Socket::TcpConnect (host/port = 10.10.2.3:500) TCPSocket::Connect >> >> (host/port = 10.10.2.3:500) >> >> 0x915bd78 Communication::Write (src = 0xbfcb7433, src_len = 1) >> >> connection = 0x915f578 >> >> 0x915f608 Socket::Write() (socket = 7, src = 0xbfcb7433, src_len = 1, >> >> flags = 0) => 1 (error = (null)) >> >> < 1> send packet: + >> >> this = 0x0915BD78, dst = 0xBFCB53EC, dst_len = 8192, timeout = 1 >> >> us, connection = 0x0915F578 >> >> 0x915bd78 Communication::Write (src = 0x916022c, src_len = 19) >> >> connection = 0x915f578 >> >> 0x915f608 Socket::Write() (socket = 7, src = 0x916022c, src_len = 19, >> >> flags = 0) => 19 (error = (null)) >> >> history[1] tid=0x7cbf < 1> send packet: + >> >> < 19> send packet: $QStartNoAckMode#b0 this = 0x0915BD78, dst = >> >> 0xBFCB51AC, dst_len = 8192, timeout = 600 us, connection = >> >> 0x0915F578 >> >> 0x915f608 Socket::Read() (socket = 7, src = 0xbfcb51ac, src_len = 7, >> >> flags = 0) => 7 (error = (null)) >> >> < 1> read packet: + >> >> < 6> read packet: $OK#9a >&
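Greg's bind-to-port-zero point is plain POSIX behavior and easy to see outside LLDB. A self-contained sketch (not lldb code): bind to port 0 so the kernel picks a free port, then recover the assigned port with getsockname(), which is the discovery step that, per this thread, was going wrong before the port was reported back:

    #include <arpa/inet.h>
    #include <cstdio>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      sockaddr_in addr{};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(0); // port 0: let the kernel pick a free port
      if (bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) != 0 ||
          listen(fd, 1) != 0) {
        perror("bind/listen");
        return 1;
      }
      // Ask which port was actually bound; reporting 0 instead of this value
      // is the failure mode described in this thread.
      socklen_t len = sizeof(addr);
      getsockname(fd, reinterpret_cast<sockaddr *>(&addr), &len);
      std::printf("listening on port %u\n", ntohs(addr.sin_port));
      close(fd);
      return 0;
    }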
Re: [lldb-dev] Remote debugging ARM target from x86 host
>>> o the reason that socket_pipe.ReadWithTimeout() could not successfully read the port number from the named pipe.
>>>
>>> Based on the above, though I am not sure, the other patch I could think of having an effect on this bug is https://reviews.llvm.org/rL300579 (Update LLDB Host to support IPv6 over TCP), which changed the socket implementation.
>>>
>>> lldb-server log for "gdb-remote process" with lldb v4.0.1 (passing case):
>>>
>>> GDBRemoteCommunication::StartDebugserverProcess(url=tcp://10.10.12.3:0, port=0)
>>> GDBRemoteCommunication::StartDebugserverProcess() found gdb-remote stub exe '/mnt/var/binaries/arm_release/bin/lldb-server'
>>> launch info for gdb-remote stub:
>>> Executable: lldb-server
>>> Triple: *-*-*
>>> Arguments:
>>> argv[0]="/mnt/var/binaries/arm_release/bin/lldb-server"
>>> argv[1]="gdbserver"
>>> argv[2]="tcp://10.10.12.3:0"
>>> argv[3]="--native-regs"
>>> argv[4]="--pipe"
>>> argv[5]="7"
>>> argv[6]=NULL
>>>
>>> Environment:
>>> env[0]="XDG_SESSION_ID=c3"
>>> env[1]="TERM=xterm-256color"
>>> env[2]="SHELL=/bin/sh"
>>> env[3]="SSH_CLIENT=10.10.33.99 53542 22"
>>> env[4]="SSH_TTY=/dev/pts/0"
>>> env[5]="USER=root"
>>> env[6]="MAIL=/var/mail/root"
>>> env[7]="PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
>>> env[8]="PWD=/home/root"
>>> env[9]="EDITOR=vi"
>>> env[10]="PS1=\u@\h:\w\$ "
>>> env[11]="SHLVL=1"
>>> env[12]="HOME=/home/root"
>>> env[13]="LOGNAME=root"
>>> env[14]="SSH_CONNECTION=10.10.33.99 53542 10.10.2.4 22"
>>> env[15]="XDG_RUNTIME_DIR=/run/user/0"
>>> env[16]="_=/mnt/var/binaries/arm_release/bin/lldb-server"
>>> env[17]=NULL
>>>
>>> GDBRemoteCommunication::StartDebugserverProcess() debugserver listens 56543 port
>>>
>>> lldb-server log for "gdb-remote process" with lldb v5.0.0 (failing case):
>>>
>>> GDBRemoteCommunication::StartDebugserverProcess(url=tcp://10.10.12.3:0, port=0)
>>> GDBRemoteCommunication::StartDebugserverProcess() found gdb-remote stub exe '/mnt/var/binaries/arm_v5.0_orig/bin/lldb-server'
>>> launch info for gdb-remote stub:
>>> Executable: lldb-server
>>> Triple: *-*-*
>>> Arguments:
>>> argv[0]="/mnt/var/binaries/arm_v5.0_orig/bin/lldb-server"
>>> argv[1]="gdbserver"
>>> argv[2]="tcp://10.10.12.3:0"
>>> argv[3]="--native-regs"
>>> argv[4]="--pipe"
>>> argv[5]="7"
>>> argv[6]=NULL
>>>
>>> Environment:
>>> env[0]="XDG_SESSION_ID=c3"
>>> env[1]="TERM=xterm-256color"
>>> env[2]="SHELL=/bin/sh"
>>> env[3]="SSH_CLIENT=10.10.33.99 53542 22"
>>> env[4]="SSH_TTY=/dev/pts/0"
>>> env[5]="USER=root"
>>> env[6]="MAIL=/var/mail/root"
>>> env[7]="PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
>>> env[8]="PWD=/home/root"
>>> env[9]="EDITOR=vi"
>>> env[10]="PS1=\u@\h:\w\$ "
>>> env[11]="SHLVL=1"
>>> env[12]="HOME=/home/root"
>>> env[13]="LOGNAME=root"
>>> env[14]="SSH_CONNECTION=10.10.33.99 53542 10.10.2.4 22"
>>> env[15]="XDG_RUNTIME_DIR=/run/user/0"
>>> env[16]="_=/mnt/var/binaries/arm_v5.0_orig/bin/lldb-server"
>>> env[17]=NULL
>>>
>>> GDBRemoteCommunication::StartDebugserverProcess() debugserver listens 0 port
>>>
>>> Thanks,
>>> Ramana
>>>
>>>> --
>>>> Qualcomm Innovation Center, Inc.
>>>> The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>>>>
>>>>> -----Original Message-----
>>>>> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Greg Clayton via lldb-dev
>>>>> Sent: Wednesday, August 23, 2017 12:45 PM
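For context on the `--pipe 7` argument in these launch logs: the platform hands the child an inherited pipe file descriptor, and lldb-server gdbserver writes the port it bound to into that pipe; the socket_pipe.ReadWithTimeout() mentioned above is the platform-side read of that value. A rough single-process sketch of the handoff (a hypothetical simplification, not the actual LLDB code):

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <unistd.h>

    int main() {
      int fds[2]; // fds[0] = read end, fds[1] = write end
      if (pipe(fds) != 0) { perror("pipe"); return 1; }

      // Server side (lldb-server gdbserver, launched with "--pipe <fd>"):
      // after binding, write the bound port into the pipe as a string.
      uint16_t bound_port = 56543; // in real code, recovered via getsockname()
      std::string msg = std::to_string(bound_port);
      write(fds[1], msg.c_str(), msg.size() + 1);

      // Platform side: read the port back. Reading "0" here is exactly the
      // failing case shown in the v5.0.0 log above.
      char buf[16] = {};
      read(fds[0], buf, sizeof(buf) - 1);
      std::printf("lldb-server reported port %s\n", buf);
      close(fds[0]);
      close(fds[1]);
      return 0;
    }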
Re: [lldb-dev] Remote debugging ARM target from x86 host
Thank you, Chris. Looking forward to the patch.

On Tue, Aug 29, 2017 at 1:28 AM, Chris Bieneman wrote:
> I had a chance to look into this more, and I found a bug in the listen behavior. I'm testing a solution to it now. Will post it if it resolves the issue.
>
> -Chris
>
>> On Aug 25, 2017, at 10:36 AM, Greg Clayton via lldb-dev wrote:
>>
>> Maybe we can make it open only an IPv4 socket for lldb-server for now as a work around?
>>
>>> On Aug 25, 2017, at 8:47 AM, Chris Bieneman wrote:
>>>
>>> Since lldb-server only supports running on a limited set of host operating systems it is hard for me to diagnose the issue completely, but I suspect the problem is caused by the fact that the new listening code can open more than one socket, and TCPSocket::GetLocalPortNumber() may be misbehaving.
>>>
>>> I'm unlikely to have time to investigate further until next week, but it should be possible to craft a unit test that verifies that GetLocalPortNumber() returns non-zero on a socket that is listening before a connection is established. That might reproduce the issue in a more easy to debug environment.
>>>
>>> -Chris
>>>
>>>> On Aug 25, 2017, at 7:38 AM, Ramana via lldb-dev wrote:
>>>>
>>>> Ted, Greg,
>>>>
>>>> I have built lldb tools @r300578 and the lldb-server is returning the proper port number to lldb client and the remote debugging is working. I have given the lldb-server log at the bottom of my reply.
>>>>
>>>> So, it looks https://reviews.llvm.org/rL300579 (Update LLDB Host to support IPv6 over TCP) is causing the issue.
>>>>
>>>>> Ramana, can you stick in a log message to print port_cstr? I suspect it's actually getting 0 back from lldb-server, which would tell us the error is in the server code, not the client code.
>>>>
>>>> Ted, I did that and actually the pipe read is returning zero port number. So definitely the issue is on the server side.
>>>>
>>>> GDBRemoteCommunication::StartDebugserverProcess() port_cstr before socket pipe read
>>>> GDBRemoteCommunication::StartDebugserverProcess() port_cstr after socket pipe read
>>>>
>>>>> Ted's comments are correct and I am guessing we will find the "lldb-server gdb-server" is not doing the right thing and it isn't returning the correctly bound port.
>>>>>
>>>>> When we are doing remote stuff we must use TCP, so lldb-server should be opening a TCP socket, binding, listening and accepting a connection from the remote LLDB.
>>>>>
>>>>> Greg
>>>>
>>>> Greg, thanks for the comments. Are you saying I should check what is happening on the TCP socket side? How do I do it other than walking through the code?
>>>>
>>>> root@arria5:~# /mnt/var/patch_bins/binaries/bin/lldb-server platform --log-file Ramana/remote.log --log-channels "gdb-remote process" --server --listen *:1400
>>>> Connection established.
>>>> error: lost connection
>>>> lldb-server exiting...
>>>> ^C
>>>> root@arria5:~# /mnt/var/patch_bins/binaries/bin/lldb --version
>>>> lldb version 5.0.0 (https://llvm.org/svn/llvm-project/lldb/trunk revision 300578)
>>>>   clang revision 300578
>>>>   llvm revision 300578
>>>> root@arria5:~# cat Ramana/remote.log
>>>> GDBRemoteCommunication::StartDebugserverProcess(url=tcp://10.10.12.3:0, port=0)
>>>> GDBRemoteCommunication::StartDebugserverProcess() found gdb-remote stub exe '/mnt/var/patch_bins/binaries/bin/lldb-server'
>>>> launch info for gdb-remote stub:
>>>> Executable: lldb-server
>>>> Triple: *-*-*
>>>> Arguments:
>>>> argv[0]="/mnt/var/patch_bins/binaries/bin/lldb-server"
>>>> argv[1]="gdbserver"
>>>> argv[2]="tcp://10.10.12.3:0"
>>>> argv[3]="--native-regs"
>>>> argv[4]="--pipe"
>>>> argv[5]="7"
>>>> argv[6
Re: [lldb-dev] Remote debugging ARM target from x86 host
Thanks Chris. The patch works for ARM remote debugging for my case. I am yet to check x86 remote debugging. Need to build the tool chain, so will update you tomorrow.

    ~# /mnt/var/arm_debug/bin/lldb --version
    lldb version 6.0.0 (https://llvm.org/svn/llvm-project/lldb/trunk revision 312008)
      clang revision 312008
      llvm revision 312008

"gdb-remote process" log of lldb-server says

    GDBRemoteCommunication::StartDebugserverProcess() debugserver listens 55874 port

    ~/Ramana# ps af -w
      PID TTY      STAT   TIME COMMAND
     8314 pts/0    S+     0:00  \_ /mnt/var/arm_debug/bin/lldb-server p --log-file Ramana/remote.log --log-channels gdb-remote process --server --listen *:1400
     8421 pts/0    Sl+    0:01      \_ /mnt/var/arm_debug/bin/lldb-server p --log-file Ramana/remote.log --log-channels gdb-remote process --server --listen *:1400
     8477 pts/0    S+     0:00          \_ /mnt/var/arm_debug/bin/lldb-server gdbserver tcp://10.10.12.3:0 --native-regs --pipe 7
     8514 pts/0    t      0:00              \_ /home/root/arm_main

    ~/work_root/ToT_lldb/tests$ ../binaries/x86_debug/bin/lldb
    (lldb) platform select remote-linux
      Platform: remote-linux
     Connected: no
    (lldb) platform connect connect://10.10.2.1:1400
      Platform: remote-linux
        Triple: arm-*-linux-gnueabihf
    OS Version: 4.1.33 (4.1.33-ltsi-altera)
        Kernel: #1 SMP Tue May 2 08:13:11 MYT 2017
      Hostname: arria5
     Connected: yes
    WorkingDir: /home/root
    (lldb) file arm_main
    Current executable set to 'arm_main' (arm).
    (lldb) b main
    Breakpoint 1: where = arm_main`main + 4 at main.c:4, address = 0x000104a0
    (lldb) run
    Process 8514 launched: '/home/ramanan/work_root/ToT_lldb/tests/arm_main' (arm)
    Process 8514 stopped
    * thread #1, name = 'arm_main', stop reason = breakpoint 1.1
        frame #0: 0x000104a0 arm_main`main at main.c:4
       1    #include <stdio.h>
       2
       3    int main() {
    -> 4        printf("Hello World\n");
       5    }
    (lldb) n
    Hello World
    Process 8514 stopped
    * thread #1, name = 'arm_main', stop reason = step over
        frame #0: 0x000104ae arm_main`main at main.c:5
       2
       3    int main() {
       4        printf("Hello World\n");
    -> 5    }

Regards,
Ramana

On Tue, Aug 29, 2017 at 9:49 PM, Chris Bieneman wrote:
> I committed a fix in r312008. Please test it to verify that it resolves your issue.
>
> Thanks,
> -Chris
>
>> On Aug 28, 2017, at 8:41 PM, Ramana wrote:
>>
>> Thank you, Chris. Looking forward to the patch.
Re: [lldb-dev] Remote debugging ARM target from x86 host
The patch works for x86 remote debugging as well.

    $ ../binaries/x86_debug/bin/lldb --version
    lldb version 6.0.0 (https://llvm.org/svn/llvm-project/lldb/trunk revision 312008)
      clang revision 312008
      llvm revision 312008

On Tue, Aug 29, 2017 at 10:51 PM, Ramana wrote:
> Thanks Chris. The patch works for ARM remote debugging for my case. I am yet to check x86 remote debugging. Need to build the tool chain, so will update you tomorrow.
[lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface
Hi,

When deriving RegisterContext_, why do some platforms (Arch+OS) derive it from lldb_private::RegisterContext while others derive from lldb_private::RegisterInfoInterface? In other words, how does one decide on the base class to derive from between those two, and what are the implications?

Thanks,
Ramana
Re: [lldb-dev] lldb-5.0 on linux not finding lldb-server-5.0.0
At least I did not encounter the below issue on an Ubuntu 16.04 (x86, i.e. 32-bit) machine with lldb v5.0. FYI, I have built the tool chain from the RELEASE_500/final tag source.

> Somebody on stack overflow is reporting:
> https://stackoverflow.com/questions/46164427/lldb-is-not-starting-an-application
> that they got the 5.0 tools on an Ubuntu system, and lldb is saying:
>
>     error: process launch failed: unable to locate lldb-server-5.0.0
>
> when he tries to run a process. The reporter says there's an lldb-server in the package he got, but not an lldb-server-5.0.0.
>
> Did his install just go bad or is this something other folks are seeing? I don't have an Ubuntu system handy to check this out.
>
> Jim
Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface
Thank you.

On Wed, Sep 13, 2017 at 6:10 PM, Tatyana Krasnukha wrote:
> Hi Ramana,
>
> Looks like just a naming issue - classes derived from RegisterInfoInterface should be named as RegisterInfo_, because they just implement a common interface to access the target's register info structures. Whereas RegisterContext relates to a certain execution context and concrete frame, and implements process-specific functions, for example restoring registers' state after expression evaluation.
>
> Please, correct me anyone, if I'm wrong.
>
> Tatyana
>
> -----Original Message-----
> From: lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] On Behalf Of Ramana via lldb-dev
> Sent: Wednesday, 13 September, 2017 9:00 AM
> To: lldb-dev@lists.llvm.org
> Subject: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface
>
> Hi,
>
> When deriving RegisterContext_, why do some platforms (Arch+OS) derive it from lldb_private::RegisterContext while others derive from lldb_private::RegisterInfoInterface? In other words, how does one decide on the base class to derive from between those two, and what are the implications?
>
> Thanks,
> Ramana
Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface
Thank you Greg for the detailed response.

Can you please also shed some light on the NativeRegisterContext? When do we need to subclass NativeRegisterContext and (how) are they related to RegisterContext_? It appears that not all architectures having RegisterContext_ have subclassed NativeRegisterContext.

Regards,
Ramana

On Thu, Sep 14, 2017 at 9:02 PM, Greg Clayton wrote:
> Seems like this class was added for testing. RegisterInfoInterface is a class that creates a common API for getting lldb_private::RegisterInfo structures.
>
> A RegisterContext_ class uses one of these to be able to create a buffer large enough to store all registers defined in the RegisterInfoInterface and will actually read/write those registers to/from the debugged process. RegisterContext also caches register values so they don't get read multiple times when the process hasn't resumed. A RegisterContext subclass is needed for each architecture so we can dynamically tell LLDB what the registers look like for a given architecture. It also provides abstractions by letting each register define its register numbers for Compilers, DWARF, and generic register numbers like PC, SP, FP, return address, and flags registers. This allows the generic part of LLDB to say "I need you to give me the PC register for this thread" and we don't need to know that the register is "eip" on x86, "rip" on x86_64, "r15" on ARM. RegisterContext classes can also determine how registers are read/written: one at a time, or "get all general purpose regs" and "get all FPU regs". So if someone asks a RegisterContext to read the PC, it might go read all GPR regs and then mark them all as valid in the register context buffer cache, so if someone subsequently asks for SP, it will already be cached.
>
> So RegisterInfoInterface defines a common way that many RegisterContext classes can inherit from in order to give out the lldb_private::RegisterInfo (which is required by all subclasses of RegisterContext) info for a register context, and RegisterContext is the one that actually will interface with the debugged process in order to read/write and cache those registers as efficiently as possible for the current program being debugged.
>
>> On Sep 12, 2017, at 10:59 PM, Ramana via lldb-dev wrote:
>>
>> Hi,
>>
>> When deriving RegisterContext_, why do some platforms (Arch+OS) derive it from lldb_private::RegisterContext while others derive from lldb_private::RegisterInfoInterface? In other words, how does one decide on the base class to derive from between those two, and what are the implications?
>>
>> Thanks,
>> Ramana
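A toy illustration of the numbering abstraction Greg describes (not lldb's actual RegisterInfo layout, just the idea): each register entry carries several parallel register numbers, and a generic query like "give me the PC" resolves through the generic slot rather than an architecture-specific name.

    #include <cstdint>
    #include <cstdio>

    // Hypothetical, simplified take on lldb's per-register info.
    enum GenericReg : int32_t { kGenericNone = -1, kGenericPC = 0, kGenericSP = 1 };

    struct RegInfoLite {
      const char *name;   // arch-specific name ("rip", "r15", ...)
      uint32_t dwarf_num; // number used by DWARF debug info
      int32_t generic;    // role-based number shared across architectures
    };

    // An x86_64-flavored table fragment (DWARF numbers per the x86-64 ABI).
    static const RegInfoLite kRegs[] = {
        {"rip", 16, kGenericPC},
        {"rsp", 7, kGenericSP},
        {"rax", 0, kGenericNone},
    };

    // "I need the PC for this thread" without knowing the arch-specific name.
    const RegInfoLite *FindGeneric(int32_t generic) {
      for (const auto &r : kRegs)
        if (r.generic == generic)
          return &r;
      return nullptr;
    }

    int main() {
      if (const RegInfoLite *pc = FindGeneric(kGenericPC))
        std::printf("PC register on this target is %s\n", pc->name);
      return 0;
    }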
Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface
Thank you so much Greg for your comments.

> What architecture and os are you looking to support?

The OS is Linux and the primary use scenario is remote debugging. Basically, http://lists.llvm.org/pipermail/lldb-dev/2017-June/012445.html is what I am trying to achieve, and unfortunately that query did not get much attention from the members.

Thanks,
Ramana

On Mon, Sep 18, 2017 at 8:46 PM, Greg Clayton wrote:
> When supporting a new architecture, our preferred route is to modify lldb-server (a GDB server binary that supports native debugging) to support your architecture. Why? Because this gets you remote debugging for free. If you go this route, then you will subclass a lldb_private::NativeRegisterContext and that will get used by lldb-server (along with lldb_private::NativeProcessProtocol and lldb_private::NativeThreadProtocol). If you are adding a new architecture to Linux, then you will likely just need to subclass NativeRegisterContext.
>
> The other way to go is to subclass lldb_private::Process, lldb_private::Thread and lldb_private::RegisterContext.
>
> The nice thing about the lldb_private::Native* subclasses is that you only need to worry about native support. You can use #ifdef and use system header files, whereas on the non-native route, those classes need to be able to debug remotely and you can't rely on system headers (lldb_private::Process, lldb_private::Thread and lldb_private::RegisterContext) since they can be compiled on any system for possibly local debugging (if current arch/vendor/os matches the current system) and remote (if you use lldb-server or another form for RPC).
>
> I would highly suggest going the lldb-server route, as then you can use system header files that contain the definitions of the registers and you only need to worry about the native architecture. Linux uses ptrace and has much of the common code factored out into correct classes (posix ptrace, linux specifics, and more).
>
> What architecture and os are you looking to support?
>
> Greg Clayton
>
>> On Sep 16, 2017, at 6:28 AM, Ramana wrote:
>>
>> Thank you Greg for the detailed response.
>>
>> Can you please also shed some light on the NativeRegisterContext? When do we need to subclass NativeRegisterContext and (how) are they related to RegisterContext_? It appears that not all architectures having RegisterContext_ have subclassed NativeRegisterContext.
>>
>> Regards,
>> Ramana
>>
>> On Thu, Sep 14, 2017 at 9:02 PM, Greg Clayton wrote:
>>>
>>> Seems like this class was added for testing. RegisterInfoInterface is a class that creates a common API for getting lldb_private::RegisterInfo structures.
>>>
>>> A RegisterContext_ class uses one of these to be able to create a buffer large enough to store all registers defined in the RegisterInfoInterface and will actually read/write those registers to/from the debugged process. RegisterContext also caches register values so they don't get read multiple times when the process hasn't resumed. A RegisterContext subclass is needed for each architecture so we can dynamically tell LLDB what the registers look like for a given architecture. It also provides abstractions by letting each register define its register numbers for Compilers, DWARF, and generic register numbers like PC, SP, FP, return address, and flags registers. This allows the generic part of LLDB to say "I need you to give me the PC register for this thread" and we don't need to know that the register is "eip" on x86, "rip" on x86_64, "r15" on ARM. RegisterContext classes can also determine how registers are read/written: one at a time, or "get all general purpose regs" and "get all FPU regs". So if someone asks a RegisterContext to read the PC, it might go read all GPR regs and then mark them all as valid in the register context buffer cache, so if someone subsequently asks for SP, it will be already cached.
>>>
>>> So RegisterInfoInterface defines a common way that many RegisterContext classes can inherit from in order to give out the lldb_private::RegisterInfo (which is required by all subclasses of RegisterContext) info for a register context, and RegisterContext is the one that actually will interface with the debugged process in order to read/write and cache those reg
Re: [lldb-dev] lldb_private::RegisterContext vs lldb_private::RegisterInfoInterface
>> What is the present status i.e. what works today and what is to be improved of simultaneous multiple target debugging support in LLDB? Were the changes contributed to LLDB mainstream?
>
> So we currently have no cooperative targets in LLDB. This will be the first. We will need to discuss how hand off between the targets will occur and many other aspects. We will be sure to comment when and if you get to this point.
>
>> How can I access the material for http://llvm.org/devmtg/2014-10/#bof5 (Future directions and features for LLDB)?
>
> Over the years we have talked about this, but it never really got into any real amount of detail and I don't think the BoF notes will help you much.
>
>> Appreciate any help/guidance provided on the same.
>
> I do believe approach #1 will work the best. The easiest thing you can do is to insulate LLDB from the GPU by putting it behind a GDB server boundary. Then we need to really figure out how we want to do GPU debugging.
>
> Hopefully this filled in your missing answers. Let me know what questions you have.
>
> Greg
Re: [lldb-dev] did anyone know LLDB supports lldb + openocd to run dotest.py on a bare board like ARM or other non-x86 architectures?
I have faced these sorts of issues in the past, and in my case the reason was that I had used an incorrect host triple while building LLDB (LLVM, etc.). Example: assuming you are using the GCC/G++ compiler to build LLVM+LLDB, I would set LLVM_HOST_TRIPLE to the triple used in the library path, and not necessarily the 'Target' reported by 'gcc -v'.

$ gcc -v
Target: i686-linux-gnu
...
$ ldd /usr/bin/gcc
libc.so.6 => /lib/i386-linux-gnu/libc.so.6
...

Here I would set LLVM_HOST_TRIPLE to i386-linux-gnu, not i686-linux-gnu. See if that works for you. - Ramana

> Hi > Do you mean I should use “platform select remote-linux”? I use it, but it also reports the error: unable to launch a GDB server on 'debian-armhf'. > In addition, did you mean the gdb-server is the “GNU GDB server”, or just the lldb-server service? > Best Regards > —cuibixiong ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
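Concretely, the override goes on the CMake configure line; a minimal sketch, assuming a Ninja build and an out-of-tree build directory (LLVM_HOST_TRIPLE is LLVM's CMake cache variable for this, but verify the spelling against your release):

$ cmake -G Ninja -DLLVM_HOST_TRIPLE=i386-linux-gnu /path/to/llvm
$ ninja lldb lldb-server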
[lldb-dev] GDB RSP's non-stop mode capability in v5.0
Hi, It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong? If the support is actually not there, what needs to be changed to enable it in lldb-server? Also, in lldb at least I see some code relevant to non-stop mode, but is non-stop mode fully implemented in lldb, or is there only partial support? Thanks, Ramana ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
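For reference, the handshake the linked GDB documentation describes looks roughly like this (a hand-written trace for illustration, not actual lldb-server output; the thread ids are made up):

  -> QNonStop:1              (client asks the stub to enter non-stop mode)
  <- OK
  -> vCont;c:p1.2            (resume only thread 2, leaving the others as they are)
  <- OK                      (in non-stop mode vCont returns immediately)
  <- %Stop:T05thread:p1.2;   (asynchronous notification: thread 2 stopped)
  -> vStopped                (client drains the queue of pending stop events)
  <- OK                      (queue empty)

So supporting the mode is less about new packets and more about the stub being able to report stops asynchronously while other threads keep running.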
Re: [lldb-dev] GDB RSP's non-stop mode capability in v5.0
> I’m not sure why Ramana is interested in it

Basically http://lists.llvm.org/pipermail/lldb-dev/2017-June/012445.html is what I am trying to implement in lldb, which has been discussed in a little more detail here: http://lists.llvm.org/pipermail/lldb-dev/2017-September/012815.html.

On Thu, Mar 29, 2018 at 9:40 PM, Frédéric Riss wrote:

> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:

> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev wrote:

> Hi,

> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?

> If the support is actually not there, what needs to be changed to enable it in lldb-server?

> As Pavel said, adding support into lldb-server will be easy. Adding support to LLDB will be harder. One downside of enabling this mode will be a performance loss in the GDB remote packet transfer. Why? IIRC this mode requires a read thread where one thread is always reading packets and putting them into a packet buffer. Threads that want to send a packet and get a reply must now send the packet and then use a condition variable + mutex to wait for the response. This threading overhead really slows down the packet transfers. Currently we have a mutex on the GDB remote communication where each thread that needs to send a packet will take the mutex and then send the packet and wait for the response on the same thread. I know the performance differences are large on MacOS, not sure how they are on other systems. If you do end up enabling this, please run the "process plugin packet speed-test" command which is available only when debugging with ProcessGDBRemote. It will send and receive various packets of various sizes and report speed statistics back to you.

> Also, in lldb at least I see some code relevant to non-stop mode, but is non-stop mode fully implemented in lldb, or is there only partial support?

> Everything in LLDB right now assumes a process-centric debugging model where when one thread stops all threads are stopped. There will be quite a large amount of changes needed for a thread-centric model. The biggest issue I know about is breakpoints. Any time you need to step over a breakpoint, you must stop all threads, disable the breakpoint, single-step the thread and re-enable the breakpoint, then start all threads again. So even the thread-centric model would need to start and stop all threads many times.

> If we work on this, that's not the way we should approach breakpoints in non-stop mode (and it's not how GDB does it). I'm not sure why Ramana is interested in it, but I think one of the main motivations to add it to GDB was systems where stopping all (or some) threads for even a small amount of time would just break things. You want a way to step over breakpoints without disrupting the other threads.

> Instead of removing the breakpoint, you can just teach the debugger to execute the code that has been patched in a different context. You can either move the code someplace else and execute it there or emulate it. Sometimes you'll need to patch it if it is PC-relative. IIRC, GDB calls this displaced stepping. It's relatively simple and works great.

> I've been interested in displaced stepping for different reasons. If we had that capability, it would become much easier to patch code.
I'd love to use this to have breakpoint conditions injected and evaluated without round-tripping to the debugger when the condition returns false.

> Fred

> Be sure to speak with myself, Jim Ingham and Pavel in depth before undertaking this task as there will be many changes required.

> Greg

> Thanks, > Ramana ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
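To make Fred's displaced-stepping description concrete, a minimal sketch of the per-thread sequence follows. Every helper named here is hypothetical; this is the shape of the algorithm, not LLDB API:

  // Step over a software breakpoint without removing it and without
  // stopping any other thread ("displaced stepping").
  void StepOverBreakpointDisplaced(Process &process, Thread &thread,
                                   lldb::addr_t bp_addr, size_t opcode_size) {
    // Scratch area in the inferior where the original instruction can run.
    lldb::addr_t scratch = AllocateScratchArea(process);       // hypothetical
    // 1. Copy the original (pre-patch) opcode bytes, saved when the
    //    breakpoint was inserted, into the scratch area.
    CopySavedOpcode(process, bp_addr, scratch, opcode_size);   // hypothetical
    // 2. Rewrite PC-relative operands so the copy behaves as if it were
    //    still located at bp_addr.
    FixupPCRelativeOperands(process, bp_addr, scratch);        // hypothetical
    // 3. Redirect only this thread to the copy and single-step it; the
    //    breakpoint opcode at bp_addr stays in place the whole time.
    SetPC(thread, scratch);                                    // hypothetical
    SingleStep(thread);                                        // hypothetical
    // 4. Resume the thread on the instruction after the breakpoint.
    SetPC(thread, bp_addr + opcode_size);
  }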
Re: [lldb-dev] GDB RSP's non-stop mode capability in v5.0
> Be sure to speak with myself, Jim Ingham and Pavel in depth before undertaking this task as there will be many changes required.

Definitely. Thank you all for the responses. Will get back after digesting all the responses here.

Regards, Ramana

On Thu, Mar 29, 2018 at 8:02 PM, Greg Clayton wrote:

> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev wrote:

> Hi,

> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?

> If the support is actually not there, what needs to be changed to enable it in lldb-server?

> As Pavel said, adding support into lldb-server will be easy. Adding support to LLDB will be harder. One downside of enabling this mode will be a performance loss in the GDB remote packet transfer. Why? IIRC this mode requires a read thread where one thread is always reading packets and putting them into a packet buffer. Threads that want to send a packet and get a reply must now send the packet and then use a condition variable + mutex to wait for the response. This threading overhead really slows down the packet transfers. Currently we have a mutex on the GDB remote communication where each thread that needs to send a packet will take the mutex and then send the packet and wait for the response on the same thread. I know the performance differences are large on MacOS, not sure how they are on other systems. If you do end up enabling this, please run the "process plugin packet speed-test" command which is available only when debugging with ProcessGDBRemote. It will send and receive various packets of various sizes and report speed statistics back to you.

> Also, in lldb at least I see some code relevant to non-stop mode, but is non-stop mode fully implemented in lldb, or is there only partial support?

> Everything in LLDB right now assumes a process-centric debugging model where when one thread stops all threads are stopped. There will be quite a large amount of changes needed for a thread-centric model. The biggest issue I know about is breakpoints. Any time you need to step over a breakpoint, you must stop all threads, disable the breakpoint, single-step the thread and re-enable the breakpoint, then start all threads again. So even the thread-centric model would need to start and stop all threads many times.

> Be sure to speak with myself, Jim Ingham and Pavel in depth before undertaking this task as there will be many changes required.

> Greg

> Thanks, > Ramana ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
Re: [lldb-dev] GDB RSP's non-stop mode capability in v5.0
On Thu, Mar 29, 2018 at 8:02 PM, Greg Clayton wrote:

> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev wrote:

> Hi,

> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?

> If the support is actually not there, what needs to be changed to enable it in lldb-server?

> As Pavel said, adding support into lldb-server will be easy. Adding support to LLDB will be harder. One downside of enabling this mode will be a performance loss in the GDB remote packet transfer. Why? IIRC this mode requires a read thread where one thread is always reading packets and putting them into a packet buffer. Threads that want to send a packet and get a reply must now send the packet and then use a condition variable + mutex to wait for the response. This threading overhead really slows down the packet transfers. Currently we have a mutex on the GDB remote communication where each thread that needs to send a packet will take the mutex and then send the packet and wait for the response on the same thread. I know the performance differences are large on MacOS, not sure how they are on other systems. If you do end up enabling this, please run the "process plugin packet speed-test" command which is available only when debugging with ProcessGDBRemote. It will send and receive various packets of various sizes and report speed statistics back to you.

So, in non-stop mode, though we can have threads running asynchronously (some running, some stopped), the GDB remote packet transfer will still be synchronous, i.e. packets will get queued? And is this because packet responses must be matched to their requests, and since there is typically a single connection to the remote target, this queueing cannot be avoided?

> Also, in lldb at least I see some code relevant to non-stop mode, but is non-stop mode fully implemented in lldb, or is there only partial support?

> Everything in LLDB right now assumes a process-centric debugging model where when one thread stops all threads are stopped. There will be quite a large amount of changes needed for a thread-centric model. The biggest issue I know about is breakpoints. Any time you need to step over a breakpoint, you must stop all threads, disable the breakpoint, single-step the thread and re-enable the breakpoint, then start all threads again. So even the thread-centric model would need to start and stop all threads many times.

Greg, what if, while stepping over a breakpoint, the remaining threads are simply allowed to continue, with no need to disable the breakpoint? What else do I need to take care of?

> Be sure to speak with myself, Jim Ingham and Pavel in depth before undertaking this task as there will be many changes required.

> Greg

> Thanks, > Ramana ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
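As a toy model of the transport change Greg describes (a sketch for intuition, not LLDB's GDBRemoteCommunication code): a dedicated read thread demultiplexes incoming packets, and any thread that sends a request blocks on a condition variable until the read thread hands it the reply.

  #include <condition_variable>
  #include <mutex>
  #include <queue>
  #include <string>

  struct PacketChannel {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> replies;        // filled by the read thread
    std::queue<std::string> notifications;  // asynchronous %Stop packets

    // Called only from the dedicated read thread.
    void OnPacket(std::string pkt) {
      std::lock_guard<std::mutex> lock(m);
      if (pkt.rfind("%Stop", 0) == 0)
        notifications.push(std::move(pkt));
      else
        replies.push(std::move(pkt));
      cv.notify_all();
    }

    // Called by any thread that has just sent a request packet.
    std::string WaitForReply() {
      std::unique_lock<std::mutex> lock(m);
      cv.wait(lock, [this] { return !replies.empty(); });
      std::string reply = std::move(replies.front());
      replies.pop();
      return reply;
    }
  };

The extra mutex/condition-variable round trip on every packet is exactly the overhead Greg warns about, relative to the current send-and-wait-on-the-same-thread scheme.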
Re: [lldb-dev] GDB RSP's non-stop mode capability in v5.0
On Thu, Mar 29, 2018 at 11:17 PM, Jim Ingham wrote:

> The breakpoints aren't a structural problem. If you can figure out a non-code-modifying way to handle breakpoints, that would be a very surgical change. And as Fred points out, out-of-place execution in the target would be really handy for other things, like offloading breakpoint conditions into the target, and only stopping if the condition is true. So this is a well motivated project.

> And our model for handling both expression evaluation and execution control is already thread-centric. It would be pretty straightforward to treat "still running" threads the same way as threads with no interesting stop reasons, for instance.

> I think the real difficulty will come at the higher layers. First off, we gate a lot of Command & SB API operations on "is the process running" and that will have to get much more fine-grained. Figuring out a good model for this will be important.

> Then you're going to have to figure out what exactly to do when somebody is in the middle of, say, running a long expression on thread A when thread B stops. What's a useful way to present this information? If lldb is sharing the terminal with the process, you can't just dump output in the middle of command output, but you don't want to delay too long...

> Also, the IOHandlers are currently a stack, but that model won't work when the process IOHandler is going to have to be live (at least the output part of it) while the CommandInterpreter IOHandler is also live. That's going to take reworking.

> On the event and operations side, I think the fact that we have the separation between the private and public states will make this a lot easier. We can use the event transition from private to public state to serialize the activity that's going on under the covers so that it appears coherent to the user. The fact that lldb goes through separate channels for process I/O and command I/O and we very seldom just dump stuff to stdout will also make solving the problem of competing demands for the user's attention more possible.

> And I think we can't do any of this till we have a robust "ProcessMock" plugin that we can use to emulate end-to-end through the debugger all the corner cases that non-stop debugging will bring up. Otherwise there will be no way to reliably test any of this stuff, and it won't ever be stable.

> I don't think any of this will be impossible, but it's going to be a lot of work.

> Jim

Thanks Jim for the comments. Being new to lldb, that's a lot of food for thought for me. Will get back here after doing some homework on what all this means.

> On Mar 29, 2018, at 9:27 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:

>> On Mar 29, 2018, at 9:10 AM, Frédéric Riss wrote:

>>> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:

>>>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev <lldb-dev@lists.llvm.org> wrote:

>>>> Hi,

>>>> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?

>>>> If the support is actually not there, what needs to be changed to enable it in lldb-server?

>>> As Pavel said, adding support into lldb-server will be easy. Adding support to LLDB will be harder.
One downside of enabling this mode will be a performance loss in the GDB remote packet transfer. Why? IIRC this mode requires a read thread where one thread is always reading packets and putting them into a packet buffer. Threads that want to send a packet and get a reply must now send the packet and then use a condition variable + mutex to wait for the response. This threading overhead really slows down the packet transfers. Currently we have a mutex on the GDB remote communication where each thread that needs to send a packet will take the mutex and then send the packet and wait for the response on the same thread. I know the performance differences are large on MacOS, not sure how they are on other systems. If you do end up enabling this, please run the "process plugin packet speed-test" command which is available only when debugging
Re: [lldb-dev] GDB RSPs non-stop mode capability in v5.0
On Thu, Mar 29, 2018 at 11:37 PM, Jim Ingham wrote: > > > > On Mar 29, 2018, at 10:40 AM, Greg Clayton via lldb-dev < > lldb-dev@lists.llvm.org> wrote: > > > > > > > >> On Mar 29, 2018, at 10:36 AM, Frédéric Riss wrote: > >> > >> > >> > >>> On Mar 29, 2018, at 9:27 AM, Greg Clayton wrote: > >>> > >>> > >>> > >>>> On Mar 29, 2018, at 9:10 AM, Frédéric Riss wrote: > >>>> > >>>> > >>>> > >>>>> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev < > lldb-dev@lists.llvm.org> wrote: > >>>>> > >>>>> > >>>>> > >>>>>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev < > lldb-dev@lists.llvm.org> wrote: > >>>>>> > >>>>>> Hi, > >>>>>> > >>>>>> It appears that the lldb-server, as of v5.0, did not implement the > GDB RSPs non-stop mode (https://sourceware.org/gdb/ > onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong? > >>>>>> > >>>>>> If the support is actually not there, what needs to be changed to > enable the same in lldb-server? > >>>>> > >>>>> As Pavel said, adding support into lldb-server will be easy. Adding > support to LLDB will be harder. One downside of enabling this mode will be > a performance loss in the GDB remote packet transfer. Why? IIRC this mode > requires a read thread where one thread is always reading packets and > putting them into a packet buffer. Threads that want to send a packet an > get a reply must not send the packet then use a condition variable + mutex > to wait for the response. This threading overhead really slows down the > packet transfers. Currently we have a mutex on the GDB remote communication > where each thread that needs to send a packet will take the mutex and then > send the packet and wait for the response on the same thread. I know the > performance differences are large on MacOS, not sure how they are on other > systems. If you do end up enabling this, please run the "process plugin > packet speed-test" command which is available only when debugging with > ProcessGDBRemote. It will send an receive various packets of various sizes > and report speed statistics back to you. > >>>>>> > >>>>>> Also, in lldb at least I see some code relevant to non-stop mode, > but is non-stop mode fully implemented in lldb or there is only partial > support? > >>>>> > >>>>> Everything in LLDB right now assumes a process centric debugging > model where when one thread stops all threads are stopped. There will be > quite a large amount of changes needed for a thread centric model. The > biggest issue I know about is breakpoints. Any time you need to step over a > breakpoint, you must stop all threads, disable the breakpoint, single step > the thread and re-enable the breakpoint, then start all threads again. So > even the thread centric model would need to start and stop all threads many > times. > >>>> > >>>> If we work on this, that’s not the way we should approach breakpoints > in non-stop mode (and it’s not how GDB does it). I’m not sure why Ramana is > interested in it, but I think one of the main motivations to add it to GDB > was systems where stopping all some threads for even a small amount of time > would just break things. You want a way to step over breakpoints without > disrupting the other threads. > >>>> > >>>> Instead of removing the breakpoint, you can just teach the debugger > to execute the code that has been patched in a different context. You can > either move the code someplace else and execute it there or emulate it. > Sometimes you’ll need to patch it if it is PC-relative. IIRC, GDB calls > this displaced stepping. 
It’s relatively simple and works great. > >>> > >>> This indeed is one of the changes we would need to do for non-stop > mode. We have the EmulateInstruction class in LLDB that is designed just > for this kind of thing. You can give the emulator function a read/write > memory and read/write register callbacks and a baton and it can execute the > instruction and read/write memory and regisrters as needed through the > context. It would be very easy to have the read register callback know to > take the PC of the original instruction and return it if the PC is > requested. > >>> > >>> We always got push back in the past about adding full instruction > emulation
Re: [lldb-dev] GDB RSP's non-stop mode capability in v5.0
On Thu, Mar 29, 2018 at 11:58 PM, Greg Clayton wrote:

> On Mar 29, 2018, at 11:07 AM, Jim Ingham wrote: > On Mar 29, 2018, at 10:40 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote: > On Mar 29, 2018, at 10:36 AM, Frédéric Riss wrote: > On Mar 29, 2018, at 9:27 AM, Greg Clayton wrote: > On Mar 29, 2018, at 9:10 AM, Frédéric Riss wrote: > On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote: > On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev wrote:

> Hi,

> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?

> If the support is actually not there, what needs to be changed to enable it in lldb-server?

> As Pavel said, adding support into lldb-server will be easy. Adding support to LLDB will be harder. One downside of enabling this mode will be a performance loss in the GDB remote packet transfer. Why? IIRC this mode requires a read thread where one thread is always reading packets and putting them into a packet buffer. Threads that want to send a packet and get a reply must now send the packet and then use a condition variable + mutex to wait for the response. This threading overhead really slows down the packet transfers. Currently we have a mutex on the GDB remote communication where each thread that needs to send a packet will take the mutex and then send the packet and wait for the response on the same thread. I know the performance differences are large on MacOS, not sure how they are on other systems. If you do end up enabling this, please run the "process plugin packet speed-test" command which is available only when debugging with ProcessGDBRemote. It will send and receive various packets of various sizes and report speed statistics back to you.

> Also, in lldb at least I see some code relevant to non-stop mode, but is non-stop mode fully implemented in lldb, or is there only partial support?

> Everything in LLDB right now assumes a process-centric debugging model where when one thread stops all threads are stopped. There will be quite a large amount of changes needed for a thread-centric model. The biggest issue I know about is breakpoints. Any time you need to step over a breakpoint, you must stop all threads, disable the breakpoint, single-step the thread and re-enable the breakpoint, then start all threads again. So even the thread-centric model would need to start and stop all threads many times.

> If we work on this, that's not the way we should approach breakpoints in non-stop mode (and it's not how GDB does it). I'm not sure why Ramana is interested in it, but I think one of the main motivations to add it to GDB was systems where stopping all (or some) threads for even a small amount of time would just break things. You want a way to step over breakpoints without disrupting the other threads.

> Instead of removing the breakpoint, you can just teach the debugger to execute the code that has been patched in a different context. You can either move the code someplace else and execute it there or emulate it. Sometimes you'll need to patch it if it is PC-relative. IIRC, GDB calls this displaced stepping. It's relatively simple and works great.

> This indeed is one of the changes we would need to do for non-stop mode.
> We have the EmulateInstruction class in LLDB that is designed just for this kind of thing. You can give the emulator function read/write memory and read/write register callbacks and a baton, and it can execute the instruction and read/write memory and registers as needed through the context. It would be very easy to have the read register callback know to take the PC of the original instruction and return it if the PC is requested.

> We always got push back in the past about adding full instruction emulation support as Chris Lattner wanted it to exist in LLVM in the tablegen tables, but no one ever got around to doing that part. So we added prologue instruction parsing and any instructions that can modify the PC (for single stepping) to the supported emulated instructions.

> So yes, emulating instructions without removing them from the code is one of the things required for this feature. Not impossible, just very time consuming to be able to emulate every instruction out of place. I would _love_ to see that go in and would be happy to review patches for anyone wanting to take this on. Tho
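A rough sketch of the callback wiring Greg describes follows; the signatures approximate lldb_private::EmulateInstruction's callback types and may differ across LLDB versions, and the baton type here is hypothetical:

  // Hypothetical baton carrying what the callbacks need.
  struct DisplacedStepBaton {
    lldb_private::RegisterContext *reg_ctx;
    lldb::addr_t original_pc; // address the instruction was lifted from
  };

  // Register-read callback: report the original PC so PC-relative
  // instructions emulate as if they were still at the breakpoint address.
  static bool ReadRegCallback(lldb_private::EmulateInstruction *emu, void *baton,
                              const lldb_private::RegisterInfo *reg_info,
                              lldb_private::RegisterValue &value) {
    auto *ctx = static_cast<DisplacedStepBaton *>(baton);
    if (reg_info->kinds[lldb::eRegisterKindGeneric] == LLDB_REGNUM_GENERIC_PC) {
      value.SetUInt64(ctx->original_pc);
      return true;
    }
    // Everything else comes from the thread's real register context.
    return ctx->reg_ctx->ReadRegister(reg_info, value);
  }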
Re: [lldb-dev] GDB RSP's non-stop mode capability in v5.0
On Thu, Mar 29, 2018 at 11:17 PM, Jim Ingham wrote:

> The breakpoints aren't a structural problem. If you can figure out a non-code-modifying way to handle breakpoints, that would be a very surgical change. And as Fred points out, out-of-place execution in the target would be really handy for other things, like offloading breakpoint conditions into the target, and only stopping if the condition is true. So this is a well motivated project.

> And our model for handling both expression evaluation and execution control is already thread-centric. It would be pretty straightforward to treat "still running" threads the same way as threads with no interesting stop reasons, for instance.

> I think the real difficulty will come at the higher layers. First off, we gate a lot of Command & SB API operations on "is the process running" and that will have to get much more fine-grained. Figuring out a good model for this will be important.

> Then you're going to have to figure out what exactly to do when somebody is in the middle of, say, running a long expression on thread A when thread B stops. What's a useful way to present this information? If lldb is sharing the terminal with the process, you can't just dump output in the middle of command output, but you don't want to delay too long...

> Also, the IOHandlers are currently a stack, but that model won't work when the process IOHandler is going to have to be live (at least the output part of it) while the CommandInterpreter IOHandler is also live. That's going to take reworking.

> On the event and operations side, I think the fact that we have the separation between the private and public states will make this a lot easier. We can use the event transition from private to public state to serialize the activity that's going on under the covers so that it appears coherent to the user. The fact that lldb goes through separate channels for process I/O and command I/O and we very seldom just dump stuff to stdout will also make solving the problem of competing demands for the user's attention more possible.

Thanks Jim for the elaborate view on the non-stop mode support. BTW, my understanding of public vs. private states is that the public state is the process state as known to the user, and all process state changes are first tracked with the private state, which is then made public (i.e. the public state is updated) if the user needs to know about that process state change. Is there anything else I am missing on public vs. private states?

> And I think we can't do any of this till we have a robust "ProcessMock" plugin that we can use to emulate end-to-end through the debugger all the corner cases that non-stop debugging will bring up. Otherwise there will be no way to reliably test any of this stuff, and it won't ever be stable.

> I don't think any of this will be impossible, but it's going to be a lot of work.
> Jim

> On Mar 29, 2018, at 9:27 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:

>> On Mar 29, 2018, at 9:10 AM, Frédéric Riss wrote:

>>> On Mar 29, 2018, at 7:32 AM, Greg Clayton via lldb-dev <lldb-dev@lists.llvm.org> wrote:

>>>> On Mar 29, 2018, at 2:08 AM, Ramana via lldb-dev <lldb-dev@lists.llvm.org> wrote:

>>>> Hi,

>>>> It appears that lldb-server, as of v5.0, does not implement the GDB RSP's non-stop mode (https://sourceware.org/gdb/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop). Am I wrong?

>>>> If the support is actually not there, what needs to be changed to enable it in lldb-server?

>>> As Pavel said, adding support into lldb-server will be easy. Adding support to LLDB will be harder. One downside of enabling this mode will be a performance loss in the GDB remote packet transfer. Why? IIRC this mode requires a read thread where one thread is always reading packets and putting them into a packet buffer. Threads that want to send a packet and get a reply must now send the packet and then use a condition variable + mutex to wait for the response. This threading overhead really slows down the packet transfers. Currently we have a mutex on the GDB remote communication where each thread that needs to send a packet will take the mutex and then send the packet and wait for the re
[lldb-dev] Where "thread until " should set breakpoints?
On the subject line, the ToT lldb (see code around CommandObjectThread.cpp:1230) sets the breakpoint on the first exact matching line of 'line-number', or on the closest line number > 'line-number', i.e. the best match. And along with that, starting from the above exact/best matching line number index in the line table, breakpoints are also being set on every other line number available in the line table in the current function scope. This latter part, I believe, is incorrect. What I think should happen is that we set only one breakpoint, on the first exact/best match for the given 'line-number', and another on the return address from the current frame. That leaves us with one special case, where the machine code for one single source line is scattered across the function (aka scheduled across), and I do not know what the expected behaviour is in that case. If my above understanding is correct, then after discussing here how to handle the scattered-code scenario, I will submit a patch. Regards, Venkata Ramanaiah ___ lldb-dev mailing list lldb-dev@lists.llvm.org http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
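For context, the command under discussion is driven like this (a made-up session against the kernels.cpp example in the reply below; the expectation is that execution stops either when line 18 is reached in the current frame or when the function returns):

(lldb) breakpoint set -f kernels.cpp -l 12
(lldb) run
(lldb) thread until 18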
Re: [lldb-dev] Where "thread until " should set breakpoints?
On Thu, Aug 2, 2018 at 3:32 AM, Jim Ingham wrote:

> On Jul 24, 2018, at 9:05 PM, Ramana via lldb-dev <lldb-dev@lists.llvm.org> wrote:

> On the subject line, the ToT lldb (see code around CommandObjectThread.cpp:1230) sets the breakpoint on the first exact matching line of 'line-number', or on the closest line number > 'line-number', i.e. the best match.

> And along with that, starting from the above exact/best matching line number index in the line table, breakpoints are also being set on every other line number available in the line table in the current function scope. This latter part, I believe, is incorrect.

> Why do you think this is incorrect?

> The requirements for "thread until <line>" are:

> a) If any code contributed by <line> is executed before leaving the function, stop
> b) If you end up leaving the function w/o triggering (a), then stop

Understood, and no concerns on this.

> Correct or incorrect should be determined by how well the implementation fits those requirements.

> There isn't currently a reliable indication from the debug information or line tables that "line N will always be entered starting with the block at 0x123". So you can't tell without doing control flow analysis which, if any, of the separate entries in the line table for the same line will get hit in the course of executing the function. So the safest thing to do is to set breakpoints on them all.

From the above, I understand that we have to do this when the debug line table has more than one entry for a particular source line. And this is what I referred to as "machine code for one single source line is scattered across" in my previous mail. Thanks for sharing why we had to do that.

> Besides setting a few more breakpoints - which should be pretty cheap - I don't see much downside to the way it is currently implemented.

> Anyway, why did this bother you?

> Jim

However, I am concerned about the below 'thread until' behaviour. For the attached test case (kernels.cpp - OpenCL code), the following is the debug line table generated by the compiler.

File name      Line number   Starting address
kernels.cpp         9        0xacc74d00
kernels.cpp        12        0xacc74d00
kernels.cpp        14        0xacc74d40
kernels.cpp        13        0xacc74dc0
kernels.cpp        14        0xacc74e00
kernels.cpp        25        0xacc74e80
kernels.cpp        25        0xacc74ec0
kernels.cpp        26        0xacc74f00
kernels.cpp        26        0xacc74f40
kernels.cpp        26        0xacc74f80
kernels.cpp        17        0xacc74fc0
kernels.cpp        18        0xacc75000
kernels.cpp        18        0xacc75040
kernels.cpp        19        0xacc75080
kernels.cpp        27        0xacc750c0
kernels.cpp        27        0xacc75140
kernels.cpp        28        0xacc75180
kernels.cpp        28        0xacc751c0
kernels.cpp        29        0xacc75200
kernels.cpp        29        0xacc75240
kernels.cpp        30        0xacc75280

With the ToT lldb, when I am at line 12 (0xacc74d00), if I say 'thread until 18', the lldb log gives me the following w.r.t. breakpoints.

GDBRemoteCommunicationClient::SendGDBStoppointTypePacket() add at addr = 0xacc75280
Thread::PushPlan(0x0xa48b38f0): "Stepping from address 0xacc74d00 until we reach one of:
0xacc75000 (bp: -4)
0xacc75040 (bp: -5)
0xacc75080 (bp: -6)
0xacc750c0 (bp: -7)
0xacc75140 (bp: -8)
0xacc75180 (bp: -9)
0xacc751c0 (bp: -10)
0xacc75200 (bp: -11)
0xacc75240 (bp: -12)
0xacc75280 (bp: -13)

Setting two breakpoints for line number 18, i.e. at 0xacc75000 and 0xacc75040, is understandable from your above reasoning, and since we are anyway setting a breakpoint at the end of the function (line 30 - 0xacc75280), is it necessary to set the breakpoints on line numbers 19, 27, 28, 29 as well, i.e.
at 0xacc75080 (line 19), 0xacc750c0 (line 27), 0xacc75140 (line 27), 0xacc75180 (line 28), 0xacc751c0 (line 28), 0xacc75200 (line 29), 0xacc75240 (line 29)? The latter part i.e. setting breakpoints on 19, 27, 28,