This file contains various notes/ideas/history/... related
to gdbserver in valgrind.

How to use Valgrind gdbserver ?

This is described in the Valgrind user manual.
Before reading further, you should read the user manual first.

What is gdbserver ?

The gdb debugger is typically used to debug a process running on the
same machine: gdb uses system calls (such as ptrace) to fetch data
from the process being debugged, to change data in the process, to
interrupt the process, and so on.

gdb can also debug processes running on a different computer (e.g. it
can debug a process running on a small real time board).

gdb does this by sending some commands (e.g. using tcp/ip) to a piece
of code running on the remote computer. This piece of code (called a
gdb stub on small boards, or gdbserver when the remote computer runs
an OS such as GNU/Linux) provides a set of commands allowing gdb
to remotely debug the process.  Examples of commands are: "get the
registers", "get the list of running threads", "read xxx bytes at
address yyyyyyyy", etc.  The definition of all these commands and the
associated replies is the gdb remote serial protocol, which is
documented in Appendix D of the gdb user manual.
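
For a flavour of the protocol: each packet is framed as
$<data>#<checksum>, where the checksum is the modulo-256 sum of the
payload characters, written as two hex digits (so the "read registers"
command g travels as $g#67). A minimal sketch of the framing:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Compute the remote serial protocol checksum: modulo-256 sum of the
   packet payload characters. */
static unsigned char rsp_checksum(const char *data)
{
   unsigned sum = 0;
   for (const char *p = data; *p; p++)
      sum += (unsigned char)*p;
   return (unsigned char)(sum & 0xff);
}

/* Frame a payload as $<data>#<checksum>, e.g. "g" -> "$g#67". */
static void rsp_frame(char *buf, size_t n, const char *data)
{
   snprintf(buf, n, "$%s#%02x", data, rsp_checksum(data));
}
```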

The standard gdb distribution has a standalone gdbserver (a small
executable) which implements this protocol and the needed system calls
to allow gdb to remotely debug a process running on Linux, MacOS, or
a similar OS.

Activation of gdbserver code inside valgrind

The gdbserver code (from gdb 6.6, GPL2+) has been modified so as to
link it with valgrind and allow the valgrind guest process to be
debugged by a gdb speaking to this gdbserver embedded in valgrind.
The ptrace system calls inside gdbserver have been replaced by reading
the state of the guest.

The gdbserver functionality is activated with valgrind command line
options. If gdbserver is not enabled, the impact on the valgrind
runtime is minimal: at startup, the command line options are checked
to see that there is nothing to do for what concerns the gdb server;
there is an "if gdbserver is active" check in the translate function
of translate.c and an "if" in the valgrind scheduler.
If the valgrind gdbserver is activated (--vgdb=yes), the impact is
still minimal (from time to time, the valgrind scheduler checks a
counter in memory). The option --vgdb-poll=yyyyy controls how often
the scheduler will do a (somewhat) heavier check to see if gdbserver
needs to stop execution of the guest to allow debugging.
If the valgrind gdbserver is activated with --vgdb=full, then each
instruction is instrumented with an additional call to a dirty helper.
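
The cheap polling done by the scheduler can be sketched as follows
(hypothetical variable and function names, not valgrind's actual
code): most scheduling rounds only decrement and test a counter, and
the heavier gdbserver check runs once every --vgdb-poll rounds.

```c
#include <assert.h>

/* Hypothetical sketch of the scheduler's cheap polling. */
static int vgdb_poll = 5000;      /* assumed --vgdb-poll value */
static int poll_counter = 5000;
static int heavy_checks = 0;      /* nr of heavy checks performed */

static void heavy_gdbserver_check(void) { heavy_checks++; }

void scheduler_tick(void)
{
   if (--poll_counter <= 0) {     /* cheap check: decrement + test */
      heavy_gdbserver_check();    /* rare: look for incoming vgdb data */
      poll_counter = vgdb_poll;
   }
}
```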

How does the gdbserver code interact with valgrind ?

When an error is reported, the gdbserver code is called.  It reads
commands from gdb using the read system call on a FIFO (e.g. a command
such as "get the registers").  It executes the command (e.g. fetches
the registers from the guest state) and writes the reply (e.g. a
packet containing the register data).  When gdb instructs gdbserver to
"continue", control is returned to valgrind, which then continues to
execute guest code.  The FIFOs used for communication between valgrind
and gdb are created at startup, if gdbserver is activated according to
the --vgdb=no/yes/full command line option.

How are signals "handled" ?

When a signal is to be given to the guest, the valgrind core first
calls gdbserver (if a gdb is currently connected to valgrind;
otherwise the signal is delivered immediately). If gdb instructs to
give the signal to the process, the signal is delivered to the guest.
Otherwise, the signal is ignored (not given to the guest). The user
can, with gdb, further decide to pass (or not pass) the signal.
Note that some (fatal) signals cannot be ignored.

How are "break/step/stepi/next/..." implemented ?

When a break is put by gdb on an instruction, a command is sent to the
gdbserver in valgrind. This causes the basic block of this instruction
to be discarded and then re-instrumented so as to insert calls to a
dirty helper which calls the gdbserver code.  When a block is
instrumented for gdbserver, all the "jump targets" of this block are
invalidated, so as to allow step/stepi/next to work properly: these
blocks will themselves automatically be re-instrumented for gdbserver
if they are jumped to.
The valgrind gdbserver remembers which blocks have been instrumented
due to this "lazy 'jump targets' debugging instrumentation" so as to
discard these "debugging translations" when gdb instructs to continue
the execution normally.
The blocks in which an explicit break has been put by the user are
kept instrumented for gdbserver.
(Note, however, that by default gdb removes all breaks when the
process is stopped, and re-inserts all breaks when the process is
continued. This behaviour can be changed using the gdb command
'set breakpoint always-inserted'.)

How are watchpoints implemented ?

Watchpoints imply support from the tool to detect that a location is
read and/or written. Currently, only memcheck supports this: when a
watchpoint is placed, memcheck changes the addressability bits of the
watched memory zone to make it inaccessible. Before an access,
memcheck then detects an error, but sees that this error is due to a
watchpoint, and gives control back to gdb.
Stopping on the exact instruction for a write watchpoint requires
--vgdb=full. This is because the error is detected by memcheck before
the value is modified. gdb checks that the value has not changed and
so "does not believe" the information that the write watchpoint was
triggered, and continues the execution. At the next watchpoint
occurrence, gdb sees the value has changed, but the watchpoints are
all reported "off by one". To avoid this, the valgrind gdbserver must
terminate the current instruction before reporting the write
watchpoint. Terminating precisely the current instruction requires
having instrumented all the instructions of the block for gdbserver,
even if there is no break in this block. This is ensured by
--vgdb=full.
See Bool VG_(is_watched) in m_gdbserver.c, where watchpoint handling
is implemented.
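
The addressability-bit idea can be illustrated with a toy model (this
is NOT memcheck's real shadow memory, just a sketch of the principle:
one accessibility flag per byte, cleared when a watchpoint is placed,
so that an access on a cleared flag raises an "error" which the
handler recognizes as a watchpoint hit rather than a real bug).

```c
#include <assert.h>
#include <string.h>

enum { MEM_SIZE = 64 };
static unsigned char a_bits[MEM_SIZE];    /* 1 = accessible */
static unsigned char watched[MEM_SIZE];   /* 1 = watchpoint here */

static void init_mem(void) { memset(a_bits, 1, MEM_SIZE); }

static void set_watchpoint(int addr, int len)
{
   for (int i = addr; i < addr + len; i++) {
      a_bits[i] = 0;          /* make the zone unaddressable */
      watched[i] = 1;         /* remember why it is unaddressable */
   }
}

/* Returns 1 if the access is a watchpoint hit (report to gdb),
   0 for a plain valid access. */
static int check_access(int addr)
{
   if (a_bits[addr])
      return 0;               /* normal, addressable access */
   /* An "error" was detected; here it is a watchpoint, not a bug. */
   return watched[addr] ? 1 : 0;
}
```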

How does the Valgrind gdbserver receive commands/packets from gdb ?

The embedded gdbserver reads gdb commands on a named pipe having
(by default) the name   /tmp/vgdb-pipe-from-vgdb-to-PID-by-USER-on-HOST
where PID, USER, and HOST will be replaced by the actual pid, the user
id, and the host name, respectively.
The embedded gdbserver replies to gdb commands on a named pipe named
(by default) /tmp/vgdb-pipe-to-vgdb-from-PID-by-USER-on-HOST.

gdb does not speak directly with gdbserver in valgrind: a relay
application called vgdb is needed between gdb and the valgrind-ified
process. gdb writes commands on the stdin of vgdb. vgdb reads these
commands and writes them on the FIFO
/tmp/vgdb-pipe-from-vgdb-to-PID-by-USER-on-HOST. vgdb reads replies
on the FIFO /tmp/vgdb-pipe-to-vgdb-from-PID-by-USER-on-HOST and
writes them on its stdout.
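
The default FIFO naming scheme above can be sketched like this
(illustrative only; the real name construction lives in the valgrind
sources and the /tmp/vgdb-pipe prefix can be changed by option):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build a default vgdb FIFO name; direction is "from-vgdb-to" for
   the command pipe and "to-vgdb-from" for the reply pipe. */
static void vgdb_pipe_name(char *buf, size_t n, const char *direction,
                           int pid, const char *user, const char *host)
{
   snprintf(buf, n, "/tmp/vgdb-pipe-%s-%d-by-%s-on-%s",
            direction, pid, user, host);
}
```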

Note: the solution of named pipes was preferred to tcp/ip connections
as it allows discovery of which valgrind-ified processes are ready to
accept commands, by looking at files starting with the
/tmp/vgdb-pipe- prefix (changeable by a command line option).
Also, the usual unix file protections protect the valgrind process
against other users sending commands.
The relay process also takes care of waking up the valgrind process
in case all threads are blocked in a system call.
The relay process can also be used in a shell to send commands without
a gdb (this provides a standard mechanism to control valgrind tools
from the command line, rather than specialized mechanisms e.g. in
callgrind).
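
The discovery idea (vgdb implements it itself; this is just an
illustration of the principle) amounts to matching the pipe prefix:

```c
#include <glob.h>
#include <stdio.h>

/* List the command FIFOs of running valgrind-ified processes by
   matching the default pipe prefix; returns the number found. */
int list_vgdb_pipes(void)
{
   glob_t g;
   int n = 0;
   if (glob("/tmp/vgdb-pipe-from-vgdb-to-*", 0, NULL, &g) == 0) {
      for (size_t i = 0; i < g.gl_pathc; i++) {
         printf("%s\n", g.gl_pathv[i]);   /* one candidate process */
         n++;
      }
      globfree(&g);
   }
   return n;
}
```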

How is gdbserver activated if all Valgrind threads are blocked in a syscall ?

vgdb relays characters from gdb to valgrind. The scheduler will from
time to time check if gdbserver has to handle incoming characters
(the check is efficient, i.e. most of the time it consists in checking
a counter in (shared) memory).

However, it might be that all the threads in the valgrind process are
blocked in a system call. In such a case, no polling will be done by
the valgrind scheduler (as no activity takes place).  By default, vgdb
will check after 100ms whether the characters it has written have been
read by valgrind. If not, vgdb will force the invocation of the
gdbserver code inside the valgrind process.

On Linux, this forced invocation is implemented using the ptrace
system call: using ptrace, vgdb will cause the valgrind process to
call the gdbserver code.

This wake up is *not* done using signals, as this would require
implementing a syscall restart logic in valgrind for all system
calls. When using ptrace as above, the linux kernel is responsible
for restarting the system call.

This wakeup is also *not* implemented by having a "system thread"
started by valgrind, as this would transform all non-threaded programs
into threaded programs when running under valgrind. Also, such a
'system thread' for gdbserver was tried by Greg Parker in the early
MacOS port, and was unreliable.

So, the ptrace based solution was chosen instead.

There used to be some bugs in the kernel when using ptrace on a
process blocked in a system call: the symptom is that the system call
fails with an unknown errno 512. This typically happens with a 64-bit
vgdb ptrace-ing a 32-bit process. A bypass for old kernels has been
integrated in vgdb.c (sign extend register rax).
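
The sign extension can be illustrated as follows (512 is the Linux
kernel's internal ERESTARTSYS value; a 32-bit process returns the
syscall result in the low 32 bits of rax, so -512 appears there as
0xfffffe00 and must be sign extended to be recognized as negative in
the full 64-bit register):

```c
#include <assert.h>
#include <stdint.h>

/* Sign extend the low 32 bits of rax into the full 64-bit register,
   as the vgdb.c bypass does for 32-bit inferiors on old kernels. */
static uint64_t sign_extend_rax(uint64_t rax)
{
   return (uint64_t)(int64_t)(int32_t)(rax & 0xffffffffu);
}
```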

At least on Fedora Core 12 (kernel 2.6.32), syscall restart of read
and select is working ok, and on Red Hat 5.3 (an old kernel),
everything works properly.

It remains to investigate whether darwin can similarly do syscall
restart with ptrace.

The vgdb argument --max-invoke-ms=xxx controls the number of
milliseconds after which vgdb will force the invocation of the
gdbserver code.  If xxx is 0, the forced invocation is disabled.
Disabling this ptrace mechanism is also necessary when you are
debugging the valgrind code at the same time as debugging the guest
process using gdbserver.

Do not kill -9 vgdb while it has interrupted the valgrind process,
otherwise the valgrind process will very probably stay stopped or die.

Implementation is based on the gdbserver code from gdb 6.6

The gdbserver implementation is derived from the gdbserver included
in the gdb distribution.
The files originating from gdb include: inferiors.c, regcache.[ch],
regdef.h, remote-utils.c, server.[ch], signals.c, target.[ch], and
utils.c.
The valgrind-low-* files are inspired from gdb files.

This code had to be changed to integrate properly within valgrind
(e.g. no libc usage).  Some of these changes have been made by using
the preprocessor to replace calls with their valgrind equivalents,
e.g. #define strcmp(...) VG_(strcmp) (...).
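
The macro-based replacement can be sketched as follows (VG_(str)
really does expand to a vgPlain_-prefixed name in the valgrind
headers; the strcmp body here is a simplified stand-in for
coregrind's own implementation):

```c
#include <assert.h>

/* Valgrind's naming scheme: VG_(foo) expands to vgPlain_foo, giving
   every exported core symbol a unique prefix. */
#define VG_(str) vgPlain_##str

/* Simplified stand-in for coregrind's own strcmp implementation. */
static int VG_(strcmp)(const char *s1, const char *s2)
{
   while (*s1 && *s1 == *s2) { s1++; s2++; }
   return (unsigned char)*s1 - (unsigned char)*s2;
}

/* The gdbserver sources can then keep calling plain strcmp: the
   preprocessor redirects the call to the valgrind equivalent. */
#define strcmp(s1, s2) VG_(strcmp)((s1), (s2))
```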

Some "control flow" changes are due to the fact that gdbserver inside
valgrind must return control to valgrind when the 'debugged' process
has to run, while in classical gdbserver usage, the gdbserver process
waits for a debugged process to stop on a break or similar.  This
required some variables to remember the state of gdbserver before
returning to valgrind (search for resume_packet_needed in server.c)
and a "goto" to the place where gdbserver expects a stopped process
to return control to gdbserver.
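
A minimal sketch of this "remember the state, return, resume later"
pattern (hypothetical names and logic, not the actual server.c code):

```c
#include <assert.h>

/* When gdb asks to resume the guest, gdbserver notes that a stop
   reply is still owed, returns control to the caller (valgrind), and
   on the next entry goes straight to the reply code. */
static int resume_packet_needed = 0;
static int stop_replies_sent = 0;

/* Called on each entry into gdbserver. Returns 1 when control must
   go back to valgrind so the guest can run. */
int gdbserver_entry(int guest_stopped)
{
   if (resume_packet_needed) {
      /* Re-entered after the guest stopped: send the owed reply. */
      stop_replies_sent++;
      resume_packet_needed = 0;
      return 0;               /* keep talking to gdb */
   }
   if (!guest_stopped) {
      /* gdb said "continue": remember a reply is owed and return. */
      resume_packet_needed = 1;
      return 1;               /* give control back to valgrind */
   }
   return 0;
}
```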

How does a tool need to be changed to be "debuggable" ?

There is no need to modify a tool to make it "debuggable" via
gdbserver: e.g. reports of errors, breaks, etc. will work "out of the
box".  If interactive usage of tool client requests or similar is
desired for a tool, then simple code can be written for that via the
specific client request VG_USERREQ__GDB_MONITOR_COMMAND code. The tool
function "handle_client_request" must then parse the string received
in argument and call the expected valgrind or tool code.  See
e.g. massif ms_handle_client_request as an example.
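
The parsing a tool can do for a monitor command string can be
sketched like this (hypothetical commands; the real code would sit in
the tool's handle_client_request under VG_USERREQ__GDB_MONITOR_COMMAND
and use VG_(strcmp)/VG_(printf) instead of libc):

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if the monitor command was recognized, 0 otherwise. */
int handle_monitor_command(const char *req)
{
   if (strcmp(req, "help") == 0) {
      printf("commands: help, status\n");
      return 1;
   }
   if (strcmp(req, "status") == 0) {
      printf("tool is running\n");
      return 1;
   }
   return 0;   /* unknown: let the core report the error */
}
```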

Automatic regression tests:

Automatic Valgrind gdbserver tests are in the directory
$(top_srcdir)/gdbserver_tests.
Read $(top_srcdir)/gdbserver_tests/README_DEVELOPERS for more
info about testing.

How to integrate support for a new architecture xxx?

Let's imagine a new architecture hal9000 has to be supported.

The main thing to do is to make a file valgrind-low-hal9000.c.
Start from an existing file (e.g. valgrind-low-x86.c).
The data structures 'struct reg regs' and 'const char *expedite_regs'
are built from files in the gdb sources, e.g. for a new arch hal9000:
   cd gdb/regformats
   sh ./regdat.sh reg-hal9000.dat hal9000
From the generated file hal9000, copy/paste the two needed data
structures into valgrind-low-hal9000.c and change their names to
'regs' and 'expedite_regs'.

Then adapt the set of functions needed to initialize the structure
'static struct valgrind_target_ops low_target'.
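
For illustration, the generated structures look roughly like this
(register names and sizes are invented for the imaginary hal9000; see
regdef.h for the actual struct reg definition, where offset and size
are expressed in bits):

```c
/* Shape of the structures produced by regdat.sh (illustrative). */
struct reg {
   const char *name;   /* register name, as known by gdb */
   int offset;         /* offset into the register buffer, in bits */
   int size;           /* size of the register, in bits */
};

static struct reg regs[] = {
   { "r0",  0, 32 },
   { "r1", 32, 32 },
   { "pc", 64, 32 },
   { "sp", 96, 32 },
};

/* Registers gdb wants "expedited" (sent in every stop reply). */
static const char *expedite_regs[] = { "sp", "pc", 0 };
```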

Optional but strongly recommended:
To have a proper wake up of a valgrind process with all threads
blocked in a system call, some architecture specific code has to be
written in vgdb-invoker-*.c. Typically, for a linux system supporting
ptrace, you have to modify vgdb-invoker-ptrace.c.

For Linux based platforms, all the ptrace calls in
vgdb-invoker-ptrace.c should be ok. The only thing needed is the code
to "push a dummy call" on the stack, i.e. assign the relevant
registers in the struct user_regs_struct, and push values on the
stack according to the ABI.
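
On amd64, "pushing a dummy call" can be sketched as below. This is
illustrative only: the registers and the target stack are plain
variables here, while the real vgdb-invoker-ptrace.c reads and writes
them in the stopped process via ptrace, and also handles 32-bit stack
layouts.

```c
#include <stdint.h>
#include <string.h>

/* Subset of the register state we touch (stands in for the rip/rsp/
   rdi fields of struct user_regs_struct). */
struct fake_regs { uint64_t rip, rsp, rdi; };

/* Make the (simulated) process call func: move the stack pointer
   down one word, store a return address there, put the first
   argument in rdi (SysV amd64 ABI), and point the pc at func. */
static void push_dummy_call(struct fake_regs *r, uint8_t *stack_mem,
                            uint64_t stack_base, uint64_t func,
                            uint64_t ret_addr, uint64_t arg1)
{
   r->rsp -= 8;                                  /* make room */
   memcpy(stack_mem + (r->rsp - stack_base),     /* push return addr */
          &ret_addr, 8);
   r->rdi = arg1;                                /* 1st argument */
   r->rip = func;                                /* "call" func */
}
```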

For other platforms (e.g. MacOS), more work is needed, as the ptrace
calls on MacOS are either different and/or incomplete (and so 'Mach'
specific things are needed, e.g. to attach to threads).
A courageous Mac aficionado is welcome on this aspect.

To let gdb see the valgrind shadow registers, xml description files
have to be provided, and valgrind-low-hal9000.c has to give the name
of the top xml file.
Start from the xml files found in the gdb distribution directory
gdb/features. You need to duplicate and modify these files to provide
shadow1 and shadow2 register set descriptions.
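
For illustration, such a file follows the gdb target description
format; the feature and register names below are invented for the
imaginary hal9000 (compare with the actual *-valgrind-s1.xml files in
this directory for the real conventions):

```xml
<?xml version="1.0"?>
<!-- Hypothetical shadow1 register set for the imaginary hal9000. -->
<!DOCTYPE feature SYSTEM "gdb-target.dtd">
<feature name="org.gnu.gdb.hal9000.core.valgrind.s1">
  <reg name="r0s1" bitsize="32" type="int32"/>
  <reg name="r1s1" bitsize="32" type="int32"/>
  <reg name="pcs1" bitsize="32" type="code_ptr"/>
  <reg name="sps1" bitsize="32" type="data_ptr"/>
</feature>
```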

Modify coregrind/Makefile.am:
    add valgrind-low-hal9000.c
    If you have target xml descriptions, also add them to
    GDBSERVER_XML_FILES

TODO and/or additional nice things to have

* Many options can be changed on-line without problems.
  => it would be nice to have a v.option command that would evaluate
  its arguments like the startup options of m_main.c and tool clo
  processing.

* Have a memcheck monitor command
  show_dangling_pointers [last_n_recently_released_blocks]
  showing which of the n most recently released blocks are still
  referenced. These references are (potential) dangling pointers.

* Some GDBTD in the code
  (GDBTD = GDB To Do = something still to look at and/or a question)

* All architectures and platforms are done, but there are still some
  "GDBTD" to convert between gdb registers and VEX registers:
  e.g. some registers in x86 or amd64 that I could not translate to
  VEX registers. Someone with a good knowledge of these architectures
  might complete this (see the GDBTD in valgrind-low-*.c).

* Currently, at least on recent linux kernels, vgdb can properly wake
  up a valgrind process which is blocked in system calls. Maybe we
  need to see up to which kernel version the ptrace + syscall restart
  is broken, and put the default value of --max-invoke-ms to 0 in
  this case.

* More client requests can be programmed in various tools. Currently,
  there are only a few standard valgrind or memcheck client requests
  implemented.
  v.suppression [generate|add|delete] might be an interesting command:
     generate would output a suppression, add/delete would add a
     suppression in memory for the last (or selected?) error.
  v.break on fn calls/entry/exit + commands associated to it
    (such as search leaks)?

* Currently, jump(s) and inferior call(s) are somewhat dangerous when
  called from a block not yet instrumented: instead of continuing
  till the next Imark, where there will be a debugger call that can
  properly jump at an instruction boundary, the jump/call will quit
  the "middle" of an instruction.
  We could detect if the current block is instrumented by a trick
  like this:
     /* Each time helperc_CallDebugger is called, we will store
        the address from which it is called and the nr of bbs_done
        when called. This allows detecting that gdbserver is called
        from a block which is instrumented. */
     static HWord CallDebugger_addr;
     static ULong CallDebugger_bbs_done;

     Bool VG_(gdbserver_current_IP_instrumented) (ThreadId tid)
     {
        if (VG_(get_IP) (tid) != CallDebugger_addr
            || CallDebugger_bbs_done != VG_(bbs_done)())
           return False;
        return True;
     }

  Alternatively, we ensure we can re-instrument the current block for
  gdbserver while executing it. Something like: keep the current
  block till the end of the current instruction, then go back to the
  scheduler. Unsure if and how this is do-able.

* Ensure that all non static symbols of gdbserver files are #define'd
  as xxxxx VG_(xxxxx) ???? Is this really needed? I have tried to put
  in a test program variables and functions with the same name as
  valgrind stuff, and everything seems to be ok.
  I see that all exported symbols in valgrind have a unique prefix
  created with VG_ or MC_ or ...
  This is not done for the "gdb gdbserver code", where I have kept
  the original names. Is this a problem? I could not create a
  "symbol" collision between a user symbol and a valgrind core
  gdbserver symbol.

* Currently, gdbserver can only stop/continue the whole process. It
  might be interesting to have fine-grained thread control (vCont
  packet), maybe for tools such as helgrind or drd. This would allow
  the user to stop/resume specific threads. Also, maybe this would
  solve the following problem: wait for a breakpoint to be
  encountered, switch thread, next. This sometimes causes an internal
  error in gdb, probably because gdb believes the current thread will
  be continued?

* It would be nice to have some more tests.

* Better valgrind target support in gdb (see comments of Tom Tromey).

-------- description of how gdb invokes a function in the inferior

To call a function in the inferior (below is for x86):
gdb writes ESP and EBP to have some more stack space,
pushes a return address equal to  0x8048390 <_start>,
puts a break                at    0x8048390,
puts the address of the function to call (e.g. hello_world,
0x8048444) in EIP.

break encountered at 0x8048391 (0x8048390 after decrementing)
  => report stop to gdb
  => gdb restores esp/ebp/eip to what they were (e.g. 0x804848C)
  => gdb "s" => causes the EIP to go to the new EIP (i.e. 0x804848C)
     gdbserver tells "resuming from 0x804848c"
                     "stop pc is 0x8048491" => informs gdb of this