/* -*- mode: C; c-basic-offset: 3; -*- */

/*--------------------------------------------------------------------*/
/*--- Implementation of POSIX signals.                 m_signals.c ---*/
/*--------------------------------------------------------------------*/

/*
   This file is part of Valgrind, a dynamic binary instrumentation
   framework.

   Copyright (C) 2000-2017 Julian Seward
      jseward@acm.org

   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, see <http://www.gnu.org/licenses/>.

   The GNU General Public License is contained in the file COPYING.
*/

/*
   Signal handling.

   There are 4 distinct classes of signal:

   1. Synchronous, instruction-generated (SIGILL, FPE, BUS, SEGV and
   TRAP): these are signals as a result of an instruction fault.  If
   we get one while running client code, then we just do the
   appropriate thing.  If it happens while running Valgrind code, then
   it indicates a Valgrind bug.  Note that we "manually" implement
   automatic stack growth, such that if a fault happens near the
   client process stack, it is extended in the same way the kernel
   would, and the fault is never reported to the client program.

   2. Asynchronous variants of the above signals: If the kernel tries
   to deliver a sync signal while it is blocked, it just kills the
   process.  Therefore, we can't block those signals if we want to be
   able to report on bugs in Valgrind.  This means that we're also
   open to receiving those signals from other processes, sent with
   kill.  We could get away with just dropping them, since they aren't
   really signals that processes send to each other.

   3. Synchronous, general signals.  If a thread/process sends itself
   a signal with kill, it's expected to be synchronous: ie, the signal
   will have been delivered by the time the syscall finishes.

   4. Asynchronous, general signals.  All other signals, sent by
   another process with kill.  These are generally blocked, except for
   two special cases: we poll for them each time we're about to run a
   thread for a time quantum, and while running blocking syscalls.


   In addition, we reserve one signal for internal use: SIGVGKILL.
   SIGVGKILL is used to terminate threads.  When one thread wants
   another to exit, it will set its exitreason and send it SIGVGKILL
   if it appears to be blocked in a syscall.


   We use a kernel thread for each application thread.  When the
   thread allows itself to be open to signals, it sets the thread
   signal mask to what the client application set it to.  This means
   that we get the kernel to do all signal routing: under Valgrind,
   signals get delivered in the same way as in the non-Valgrind case
   (the exception being for the sync signal set, since they're almost
   always unblocked).

   Some more details...

   First off, we take note of the client's requests (via sys_sigaction
   and sys_sigprocmask) to set the signal state (handlers for each
   signal, which are process-wide, + a mask for each signal, which is
   per-thread).  This info is duly recorded in the SCSS (static Client
   signal state) in m_signals.c, and if the client later queries what
   the state is, we merely fish the relevant info out of SCSS and give
   it back.

   However, we set the real signal state in the kernel to something
   entirely different.  This is recorded in SKSS, the static Kernel
   signal state.  What's nice (to the extent that anything is nice w.r.t
   signals) is that there's a pure function to calculate SKSS from SCSS,
   calculate_SKSS_from_SCSS.  So when the client changes SCSS then we
   recompute the associated SKSS and apply any changes from the previous
   SKSS through to the kernel.
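
   As a minimal sketch (illustrative only; the real entry points are
   VG_(do_sys_sigaction) and handle_SCSS_change further down in this
   file), the update path just described is:

      scss.scss_per_sig[signo].scss_handler = new_act->ksa_handler;
      ...
      handle_SCSS_change(False);  // recomputes SKSS from SCSS and
                                  // pushes any deltas to the kernel
                                  // via sigaction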

   Now, that said, the general scheme we now have is that, regardless
   of what the client puts into the SCSS (viz, asks for), what we
   would like to do is as follows:

   (1) run code on the virtual CPU with all signals blocked

   (2) at convenient moments for us (that is, when the VCPU stops, and
      control is back with the scheduler), ask the kernel "do you have
      any signals for me?"  and if it does, collect up the info, and
      deliver them to the client (by building sigframes).

   And that's almost what we do.  The signal polling is done by
   VG_(poll_signals), which calls through to VG_(sigtimedwait_zero) to
   do the dirty work.  (of which more later).

   By polling signals, rather than catching them, we get to deal with
   them only at convenient moments, rather than having to recover from
   taking a signal while generated code is running.
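
   Schematically (a sketch only, not the real code in m_scheduler):

      while (thread not finished) {
         run_thread_on_vcpu();      // all async signals blocked here
         VG_(poll_signals)(tid);    // collect any pending async signal
         // deliver a collected signal by building a sigframe
      }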

   Now unfortunately .. the above scheme only works for so-called async
   signals.  An async signal is one which isn't associated with any
   particular instruction, eg Control-C (SIGINT).  For those, it doesn't
   matter if we don't deliver the signal to the client immediately; it
   only matters that we deliver it eventually.  Hence polling is OK.

   But the other group -- sync signals -- are all related by the fact
   that they are various ways for the host CPU to fail to execute an
   instruction: SIGILL, SIGSEGV, SIGFPE.  And they can't be deferred,
   because obviously if a host instruction can't execute, well then we
   have to immediately do Plan B, whatever that is.

   So the next approximation of what happens is:

   (1) run code on vcpu with all async signals blocked

   (2) at convenient moments (when NOT running the vcpu), poll for async
      signals.

   (1) and (2) together imply that if the host does deliver a signal to
      async_signalhandler while the VCPU is running, something's
      seriously wrong.

   (3) when running code on vcpu, don't block sync signals.  Instead
      register sync_signalhandler and catch any such via that.  Of
      course, that means an ugly recovery path if we do -- the
      sync_signalhandler has to longjump, exiting out of the generated
      code, and the assembly-dispatcher thingy that runs it, and gets
      caught in m_scheduler, which then tells m_signals to deliver the
      signal.

   Now naturally (ha ha) even that might be tolerable, but there's
   something worse: dealing with signals delivered to threads in
   syscalls.

   Obviously from the above, SKSS's signal mask (viz, what we really run
   with) is way different from SCSS's signal mask (viz, what the client
   thread thought it asked for).  (eg) It may well be that the client
   did not block control-C, so that it just expects to drop dead if it
   receives ^C whilst blocked in a syscall, but by default we are
   running with all async signals blocked, and so that signal could be
   arbitrarily delayed, or perhaps even lost (not sure).

   So what we have to do, when doing any syscall which SfMayBlock, is to
   quickly switch in the SCSS-specified signal mask just before the
   syscall, and switch it back just afterwards, and hope that we don't
   get caught up in some weird race condition.  This is the primary
   purpose of the ultra-magical pieces of assembly code in
   coregrind/m_syswrap/syscall-<plat>.S
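
   In outline (a sketch of the effect; the real sequence lives in that
   assembly, where the danger window can be delimited precisely):

      VG_(sigprocmask)(VKI_SIG_SETMASK, &client_mask, &saved_mask);
      ... do the possibly-blocking syscall; a signal arriving here
          is delivered under the client's own mask ...
      VG_(sigprocmask)(VKI_SIG_SETMASK, &saved_mask, NULL);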

   -----------

   The ways in which V can come to hear of signals that need to be
   forwarded to the client are as follows:

    sync signals: can arrive at any time whatsoever.  These are caught
                  by sync_signalhandler

    async signals:

       if    running generated code
       then  these are blocked, so we don't expect to catch them in
             async_signalhandler

       else
       if    thread is blocked in a syscall marked SfMayBlock
       then  signals may be delivered to async_sighandler, since we
             temporarily unblocked them for the duration of the syscall,
             by using the real (SCSS) mask for this thread

       else  we're doing misc housekeeping activities (eg, making a translation,
             washing our hair, etc).  As in the normal case, these signals are
             blocked, but we can and do poll for them using VG_(poll_signals).

   Now, re VG_(poll_signals), it polls the kernel by doing
   VG_(sigtimedwait_zero).  This is trivial on Linux, since it's just a
   syscall.  But on Darwin and AIX, we have to cobble together the
   functionality in a tedious, longwinded and probably error-prone way.
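
   Conceptually (a sketch in POSIX terms; on Linux it is the
   corresponding raw syscall):

      struct timespec zero = { 0, 0 };
      // returns a pending signal from 'set' if there is one, else
      // fails immediately with EAGAIN -- a non-blocking poll
      sigtimedwait(&set, &info, &zero);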

   Finally, if a gdb is debugging the process under valgrind,
   the signal can be ignored if gdb tells us to.  So, before resuming the
   scheduler/delivering the signal, a call to VG_(gdbserver_report_signal)
   is done.  If this returns True, the signal is delivered.
*/

#include "pub_core_basics.h"
#include "pub_core_vki.h"
#include "pub_core_vkiscnums.h"
#include "pub_core_debuglog.h"
#include "pub_core_threadstate.h"
#include "pub_core_xarray.h"
#include "pub_core_clientstate.h"
#include "pub_core_aspacemgr.h"
#include "pub_core_errormgr.h"
#include "pub_core_gdbserver.h"
#include "pub_core_hashtable.h"
#include "pub_core_libcbase.h"
#include "pub_core_libcassert.h"
#include "pub_core_libcprint.h"
#include "pub_core_libcproc.h"
#include "pub_core_libcsignal.h"
#include "pub_core_machine.h"
#include "pub_core_mallocfree.h"
#include "pub_core_options.h"
#include "pub_core_scheduler.h"
#include "pub_core_signals.h"
#include "pub_core_sigframe.h"      // For VG_(sigframe_create)()
#include "pub_core_stacks.h"        // For VG_(change_stack)()
#include "pub_core_stacktrace.h"    // For VG_(get_and_pp_StackTrace)()
#include "pub_core_syscall.h"
#include "pub_core_syswrap.h"
#include "pub_core_tooliface.h"
#include "pub_core_coredump.h"


/* ---------------------------------------------------------------------
   Forwards decls.
   ------------------------------------------------------------------ */

static void sync_signalhandler  ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void async_signalhandler ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );
static void sigvgkill_handler   ( Int sigNo, vki_siginfo_t *info,
                                             struct vki_ucontext * );

/* Maximum usable signal. */
Int VG_(max_signal) = _VKI_NSIG;

#define N_QUEUED_SIGNALS        8

typedef struct SigQueue {
   Int next;
   vki_siginfo_t sigs[N_QUEUED_SIGNALS];
} SigQueue;

/* Hash table of PIDs from which SIGCHLD is ignored. */
VgHashTable *ht_sigchld_ignore = NULL;

/* ------ Macros for pulling stuff out of ucontexts ------ */

/* Q: what does VG_UCONTEXT_SYSCALL_SYSRES do?  A: let's suppose the
   machine context (uc) reflects the situation that a syscall had just
   completed, quite literally -- that is, that the program counter was
   now at the instruction following the syscall.  (or we're slightly
   downstream, but we're sure no relevant register has yet changed
   value.)  Then VG_UCONTEXT_SYSCALL_SYSRES returns a SysRes reflecting
   the result of the syscall; it does this by fishing relevant bits of
   the machine state out of the uc.  Of course if the program counter
   was somewhere else entirely then the result is likely to be
   meaningless, so the caller of VG_UCONTEXT_SYSCALL_SYSRES has to be
   very careful to pay attention to the results only when it is sure
   that the said constraint on the program counter is indeed valid. */
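
/* Illustrative use (a sketch, not code from this file): once a caller
   has established that the PC in 'uc' is just past a syscall insn,
   the result can be picked apart with the standard SysRes accessors:

      SysRes sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
      if (sr_isError(sres))
         ... syscall failed with error code sr_Err(sres) ...
      else
         ... syscall succeeded with value sr_Res(sres) ...
*/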

#if defined(VGP_x86_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.eip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.esp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.eax into a SysRes. */ \
      VG_(mk_SysRes_x86_linux)( (uc)->uc_mcontext.eax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.eip);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.esp);    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.ebp;   \
      }

#elif defined(VGP_amd64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.rip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.rsp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.rax into a SysRes. */ \
      VG_(mk_SysRes_amd64_linux)( (uc)->uc_mcontext.rax )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (uc)->uc_mcontext.rip;             \
        (srP)->r_sp = (uc)->uc_mcontext.rsp;             \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.rbp; \
      }

#elif defined(VGP_ppc32_linux)
   /* Comments from Paul Mackerras 25 Nov 05:

      > I'm tracking down a problem where V's signal handling doesn't
      > work properly on a ppc440gx running 2.4.20.  The problem is that
      > the ucontext being presented to V's sighandler seems completely
      > bogus.

      > V's kernel headers and hence ucontext layout are derived from
      > 2.6.9.  I compared include/asm-ppc/ucontext.h from 2.4.20 and
      > 2.6.13.

      > Can I just check my interpretation: the 2.4.20 one contains the
      > uc_mcontext field in line, whereas the 2.6.13 one has a pointer
      > to said struct?  And so if V is using the 2.6.13 struct then a
      > 2.4.20 one will make no sense to it.

      Not quite... what is inline in the 2.4.20 version is a
      sigcontext_struct, not an mcontext.  The sigcontext looks like
      this:

           struct sigcontext_struct {
                unsigned long   _unused[4];
                int             signal;
                unsigned long   handler;
                unsigned long   oldmask;
                struct pt_regs  *regs;
           };

      The regs pointer of that struct ends up at the same offset as the
      uc_regs of the 2.6 struct ucontext, and a struct pt_regs is the
      same as the mc_gregs field of the mcontext.  In fact the integer
      regs are followed in memory by the floating point regs on 2.4.20.

      Thus if you are using the 2.6 definitions, it should work on 2.4.20
      provided that you go via uc->uc_regs rather than looking in
      uc->uc_mcontext directly.

      There is another subtlety: 2.4.20 doesn't save the vector regs when
      delivering a signal, and 2.6.x only saves the vector regs if the
      process has ever used an altivec instruction.  If 2.6.x does save
      the vector regs, it sets the MSR_VEC bit in
      uc->uc_regs->mc_gregs[PT_MSR], otherwise it clears it.  That bit
      will always be clear under 2.4.20.  So you can use that bit to tell
      whether uc->uc_regs->mc_vregs is valid. */
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_regs->mc_gregs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_regs->mc_gregs[VKI_PT_R1])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the values in uc_mcontext r3,cr into a SysRes. */  \
      VG_(mk_SysRes_ppc32_linux)(                                   \
         (uc)->uc_regs->mc_gregs[VKI_PT_R3],                        \
         (((uc)->uc_regs->mc_gregs[VKI_PT_CCR] >> 28) & 1)          \
      )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                     \
      { (srP)->r_pc = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_NIP]);   \
        (srP)->r_sp = (ULong)((uc)->uc_regs->mc_gregs[VKI_PT_R1]);    \
        (srP)->misc.PPC32.r_lr = (uc)->uc_regs->mc_gregs[VKI_PT_LNK]; \
      }

#elif defined(VGP_ppc64be_linux) || defined(VGP_ppc64le_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_NIP])
#  define VG_UCONTEXT_STACK_PTR(uc)  ((uc)->uc_mcontext.gp_regs[VKI_PT_R1])
   /* Dubious hack: if there is an error, only consider the lowest 8
      bits of r3.  memcheck/tests/post-syscall shows a case where an
      interrupted syscall should have produced a ucontext with 0x4
      (VKI_EINTR) in r3 but is in fact producing 0x204. */
   /* Awaiting clarification from PaulM.  Evidently 0x204 is
      ERESTART_RESTARTBLOCK, which shouldn't have made it into user
      space. */
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( struct vki_ucontext* uc )
   {
      ULong err = (uc->uc_mcontext.gp_regs[VKI_PT_CCR] >> 28) & 1;
      ULong r3  = uc->uc_mcontext.gp_regs[VKI_PT_R3];
      ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
      ThreadState *tst = VG_(get_ThreadState)(tid);

      if (err) r3 &= 0xFF;
      return VG_(mk_SysRes_ppc64_linux)( r3, err,
                                         tst->arch.vex.guest_syscall_flag);
   }
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                       \
      { (srP)->r_pc = (uc)->uc_mcontext.gp_regs[VKI_PT_NIP];            \
        (srP)->r_sp = (uc)->uc_mcontext.gp_regs[VKI_PT_R1];             \
        (srP)->misc.PPC64.r_lr = (uc)->uc_mcontext.gp_regs[VKI_PT_LNK]; \
      }

#elif defined(VGP_arm_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.arm_pc)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.arm_sp)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                           \
      /* Convert the value in uc_mcontext.arm_r0 into a SysRes. */ \
      VG_(mk_SysRes_arm_linux)( (uc)->uc_mcontext.arm_r0 )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)       \
      { (srP)->r_pc = (uc)->uc_mcontext.arm_pc;         \
        (srP)->r_sp = (uc)->uc_mcontext.arm_sp;         \
        (srP)->misc.ARM.r14 = (uc)->uc_mcontext.arm_lr; \
        (srP)->misc.ARM.r12 = (uc)->uc_mcontext.arm_ip; \
        (srP)->misc.ARM.r11 = (uc)->uc_mcontext.arm_fp; \
        (srP)->misc.ARM.r7  = (uc)->uc_mcontext.arm_r7; \
      }

#elif defined(VGP_arm64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((UWord)((uc)->uc_mcontext.pc))
#  define VG_UCONTEXT_STACK_PTR(uc)       ((UWord)((uc)->uc_mcontext.sp))
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                            \
      /* Convert the value in uc_mcontext.regs[0] into a SysRes. */ \
      VG_(mk_SysRes_arm64_linux)( (uc)->uc_mcontext.regs[0] )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)           \
      { (srP)->r_pc = (uc)->uc_mcontext.pc;                 \
        (srP)->r_sp = (uc)->uc_mcontext.sp;                 \
        (srP)->misc.ARM64.x29 = (uc)->uc_mcontext.regs[29]; \
        (srP)->misc.ARM64.x30 = (uc)->uc_mcontext.regs[30]; \
      }

#elif defined(VGP_x86_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__eip;
   }
   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      return ss->__esp;
   }
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* this is complicated by the problem that there are 3 different
         kinds of syscalls, each with its own return convention.
         NB: scclass is a host word, hence UWord is good for both
         amd64-darwin and x86-darwin */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      UInt carry = 1 & ss->__eflags;
      UInt err = 0;
      UInt wLO = 0;
      UInt wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__eax;
            wHI = ss->__edx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__eax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__eax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_x86_darwin)( scclass, err ? True : False,
                                        wHI, wLO );
   }
   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)(ucV);
      struct __darwin_mcontext32* mc = uc->uc_mcontext;
      struct __darwin_i386_thread_state* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__eip);
      srP->r_sp = (ULong)(ss->__esp);
      srP->misc.X86.r_ebp = (UInt)(ss->__ebp);
   }

#elif defined(VGP_amd64_darwin)

   static inline Addr VG_UCONTEXT_INSTR_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rip;
   }
   static inline Addr VG_UCONTEXT_STACK_PTR( void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      return ss->__rsp;
   }
   static inline SysRes VG_UCONTEXT_SYSCALL_SYSRES( void* ucV,
                                                    UWord scclass ) {
      /* This is copied from the x86-darwin case.  I'm not sure if it
         is correct. */
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      /* duplicates logic in m_syswrap.getSyscallStatusFromGuestState */
      ULong carry = 1 & ss->__rflags;
      ULong err = 0;
      ULong wLO = 0;
      ULong wHI = 0;
      switch (scclass) {
         case VG_DARWIN_SYSCALL_CLASS_UNIX:
            err = carry;
            wLO = ss->__rax;
            wHI = ss->__rdx;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MACH:
            wLO = ss->__rax;
            break;
         case VG_DARWIN_SYSCALL_CLASS_MDEP:
            wLO = ss->__rax;
            break;
         default:
            vg_assert(0);
            break;
      }
      return VG_(mk_SysRes_amd64_darwin)( scclass, err ? True : False,
                                          wHI, wLO );
   }
   static inline
   void VG_UCONTEXT_TO_UnwindStartRegs( UnwindStartRegs* srP,
                                        void* ucV ) {
      ucontext_t* uc = (ucontext_t*)ucV;
      struct __darwin_mcontext64* mc = uc->uc_mcontext;
      struct __darwin_x86_thread_state64* ss = &mc->__ss;
      srP->r_pc = (ULong)(ss->__rip);
      srP->r_sp = (ULong)(ss->__rsp);
      srP->misc.AMD64.r_rbp = (ULong)(ss->__rbp);
   }

#elif defined(VGP_x86_freebsd)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((UWord)(uc)->uc_mcontext.eip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((UWord)(uc)->uc_mcontext.esp)
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((UWord)(uc)->uc_mcontext.ebp)
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((UWord)(uc)->uc_mcontext.eax)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.eax into a SysRes. */ \
      VG_(mk_SysRes_x86_freebsd)( (uc)->uc_mcontext.eax,        \
         (uc)->uc_mcontext.edx, ((uc)->uc_mcontext.eflags & 1) != 0 ? True : False)
#  define VG_UCONTEXT_LINK_REG(uc)        0 /* What is an LR for anyway? */
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.eip);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.esp);    \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.ebp;   \
      }

#elif defined(VGP_amd64_freebsd)
#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.rip)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.rsp)
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.rbp)
#  define VG_UCONTEXT_SYSCALL_NUM(uc)     ((uc)->uc_mcontext.rax)
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      /* Convert the value in uc_mcontext.rax into a SysRes. */ \
      VG_(mk_SysRes_amd64_freebsd)( (uc)->uc_mcontext.rax,      \
         (uc)->uc_mcontext.rdx, ((uc)->uc_mcontext.rflags & 1) != 0 ? True : False )
#  define VG_UCONTEXT_LINK_REG(uc)        0 /* No LR on amd64 either */
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)        \
      { (srP)->r_pc = (uc)->uc_mcontext.rip;             \
        (srP)->r_sp = (uc)->uc_mcontext.rsp;             \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.rbp; \
      }

#elif defined(VGP_arm64_freebsd)

#  define VG_UCONTEXT_INSTR_PTR(uc)  ((UWord)((uc)->uc_mcontext.mc_gpregs.gp_elr))
#  define VG_UCONTEXT_STACK_PTR(uc)  ((UWord)((uc)->uc_mcontext.mc_gpregs.gp_sp))
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                                 \
      /* Convert the value in mc_gpregs.gp_x[0] into a SysRes. */        \
      VG_(mk_SysRes_arm64_freebsd)( (uc)->uc_mcontext.mc_gpregs.gp_x[0], \
         (uc)->uc_mcontext.mc_gpregs.gp_x[1],                            \
         ((uc)->uc_mcontext.mc_gpregs.gp_spsr & VKI_PSR_C) != 0 ? True : False )
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                     \
      { (srP)->r_pc = (uc)->uc_mcontext.mc_gpregs.gp_elr;             \
        (srP)->r_sp = (uc)->uc_mcontext.mc_gpregs.gp_sp;              \
        (srP)->misc.ARM64.x29 = (uc)->uc_mcontext.mc_gpregs.gp_x[29]; \
        (srP)->misc.ARM64.x30 = (uc)->uc_mcontext.mc_gpregs.gp_lr;    \
      }

#elif defined(VGP_s390x_linux)

#  define VG_UCONTEXT_INSTR_PTR(uc)       ((uc)->uc_mcontext.regs.psw.addr)
#  define VG_UCONTEXT_STACK_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[15])
#  define VG_UCONTEXT_FRAME_PTR(uc)       ((uc)->uc_mcontext.regs.gprs[11])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                        \
      VG_(mk_SysRes_s390x_linux)((uc)->uc_mcontext.regs.gprs[2])
#  define VG_UCONTEXT_LINK_REG(uc) ((uc)->uc_mcontext.regs.gprs[14])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                  \
      { (srP)->r_pc = (ULong)((uc)->uc_mcontext.regs.psw.addr);    \
        (srP)->r_sp = (ULong)((uc)->uc_mcontext.regs.gprs[15]);    \
        (srP)->misc.S390X.r_fp = (uc)->uc_mcontext.regs.gprs[11];  \
        (srP)->misc.S390X.r_lr = (uc)->uc_mcontext.regs.gprs[14];  \
        (srP)->misc.S390X.r_f0 = (uc)->uc_mcontext.fpregs.fprs[0]; \
        (srP)->misc.S390X.r_f1 = (uc)->uc_mcontext.fpregs.fprs[1]; \
        (srP)->misc.S390X.r_f2 = (uc)->uc_mcontext.fpregs.fprs[2]; \
        (srP)->misc.S390X.r_f3 = (uc)->uc_mcontext.fpregs.fprs[3]; \
        (srP)->misc.S390X.r_f4 = (uc)->uc_mcontext.fpregs.fprs[4]; \
        (srP)->misc.S390X.r_f5 = (uc)->uc_mcontext.fpregs.fprs[5]; \
        (srP)->misc.S390X.r_f6 = (uc)->uc_mcontext.fpregs.fprs[6]; \
        (srP)->misc.S390X.r_f7 = (uc)->uc_mcontext.fpregs.fprs[7]; \
      }

#elif defined(VGP_mips32_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)   ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                              \
      /* Convert the values in uc_mcontext.sc_regs into a SysRes. */  \
      VG_(mk_SysRes_mips32_linux)( (uc)->uc_mcontext.sc_regs[2],      \
                                   (uc)->uc_mcontext.sc_regs[3],      \
                                   (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_mips64_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   (((uc)->uc_mcontext.sc_pc))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((uc)->uc_mcontext.sc_regs[29])
#  define VG_UCONTEXT_FRAME_PTR(uc)   ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                              \
      /* Convert the values in uc_mcontext.sc_regs into a SysRes. */  \
      VG_(mk_SysRes_mips64_linux)((uc)->uc_mcontext.sc_regs[2],       \
                                  (uc)->uc_mcontext.sc_regs[3],       \
                                  (uc)->uc_mcontext.sc_regs[7])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS64.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS64.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS64.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_nanomips_linux)
#  define VG_UCONTEXT_INSTR_PTR(uc)   ((UWord)(((uc)->uc_mcontext.sc_pc)))
#  define VG_UCONTEXT_STACK_PTR(uc)   ((UWord)((uc)->uc_mcontext.sc_regs[29]))
#  define VG_UCONTEXT_FRAME_PTR(uc)   ((uc)->uc_mcontext.sc_regs[30])
#  define VG_UCONTEXT_SYSCALL_NUM(uc) ((uc)->uc_mcontext.sc_regs[2])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                       \
      VG_(mk_SysRes_nanomips_linux)((uc)->uc_mcontext.sc_regs[4])

#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)               \
      { (srP)->r_pc = (uc)->uc_mcontext.sc_pc;                  \
        (srP)->r_sp = (uc)->uc_mcontext.sc_regs[29];            \
        (srP)->misc.MIPS32.r30 = (uc)->uc_mcontext.sc_regs[30]; \
        (srP)->misc.MIPS32.r31 = (uc)->uc_mcontext.sc_regs[31]; \
        (srP)->misc.MIPS32.r28 = (uc)->uc_mcontext.sc_regs[28]; \
      }

#elif defined(VGP_x86_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_EIP])
#  define VG_UCONTEXT_STACK_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_UESP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                               \
      VG_(mk_SysRes_x86_solaris)((uc)->uc_mcontext.gregs[VKI_EFL] & 1, \
                                 (uc)->uc_mcontext.gregs[VKI_EAX],     \
                                 (uc)->uc_mcontext.gregs[VKI_EFL] & 1  \
                                 ? 0 : (uc)->uc_mcontext.gregs[VKI_EDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                 \
      { (srP)->r_pc = (ULong)(uc)->uc_mcontext.gregs[VKI_EIP];    \
        (srP)->r_sp = (ULong)(uc)->uc_mcontext.gregs[VKI_UESP];   \
        (srP)->misc.X86.r_ebp = (uc)->uc_mcontext.gregs[VKI_EBP]; \
      }

#elif defined(VGP_amd64_solaris)
#  define VG_UCONTEXT_INSTR_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RIP])
#  define VG_UCONTEXT_STACK_PTR(uc) ((Addr)(uc)->uc_mcontext.gregs[VKI_REG_RSP])
#  define VG_UCONTEXT_SYSCALL_SYSRES(uc)                                     \
      VG_(mk_SysRes_amd64_solaris)((uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1, \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RAX],     \
                                   (uc)->uc_mcontext.gregs[VKI_REG_RFL] & 1  \
                                   ? 0 : (uc)->uc_mcontext.gregs[VKI_REG_RDX])
#  define VG_UCONTEXT_TO_UnwindStartRegs(srP, uc)                       \
      { (srP)->r_pc = (uc)->uc_mcontext.gregs[VKI_REG_RIP];             \
        (srP)->r_sp = (uc)->uc_mcontext.gregs[VKI_REG_RSP];             \
        (srP)->misc.AMD64.r_rbp = (uc)->uc_mcontext.gregs[VKI_REG_RBP]; \
      }
#else
#  error Unknown platform
#endif


/* ------ Macros for pulling stuff out of siginfos ------ */

/* These macros allow use of uniform names when working with
   both the Linux and Darwin vki definitions. */
#if defined(VGO_linux)
#  define VKI_SIGINFO_si_addr  _sifields._sigfault._addr
#  define VKI_SIGINFO_si_pid   _sifields._kill._pid
#elif defined(VGO_darwin) || defined(VGO_solaris) || defined(VGO_freebsd)
#  define VKI_SIGINFO_si_addr  si_addr
#  define VKI_SIGINFO_si_pid   si_pid
#else
#  error Unknown OS
#endif
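
/* Usage sketch (illustrative): with these in place, OS-independent
   code can say e.g.

      Addr fault_addr = (Addr)info->VKI_SIGINFO_si_addr;

   and the preprocessor picks the right field name for the OS. */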


/* ---------------------------------------------------------------------
   HIGH LEVEL STUFF TO DO WITH SIGNALS: POLICY (MOSTLY)
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Signal state for this process.
   ------------------------------------------------------------------ */


/* Base-ment of these arrays[_VKI_NSIG].

   Valid signal numbers are 1 .. _VKI_NSIG inclusive.
   Rather than subtracting 1 for indexing these arrays, which
   is tedious and error-prone, they are simply dimensioned 1 larger,
   and entry [0] is not used.
 */


/* -----------------------------------------------------
   Static client signal state (SCSS).  This is the state
   that the client thinks it has the kernel in.
   SCSS records verbatim the client's settings.  These
   are mashed around only when SKSS is calculated from it.
   -------------------------------------------------- */

typedef
   struct {
      void* scss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN or ptr to
                              client's handler */
      UInt  scss_flags;
      vki_sigset_t scss_mask;
      void* scss_restorer; /* where sigreturn goes */
      void* scss_sa_tramp; /* sa_tramp setting, Darwin only */
      /* re _restorer and _sa_tramp, we merely record the values
         supplied when the client does 'sigaction' and give them back
         when requested.  Otherwise they are simply ignored. */
   }
   SCSS_Per_Signal;

typedef
   struct {
      /* per-signal info */
      SCSS_Per_Signal scss_per_sig[1+_VKI_NSIG];

      /* Additional elements to SCSS not stored here:
         - for each thread, the thread's blocking mask
         - for each thread in WaitSIG, the set of waited-on sigs
      */
   }
   SCSS;

static SCSS scss;


/* -----------------------------------------------------
   Static kernel signal state (SKSS).  This is the state
   that we have the kernel in.  It is computed from SCSS.
   -------------------------------------------------- */

/* Let's do:
     sigprocmask assigns to all thread masks
     so that at least everything is always consistent
   Flags:
     SA_SIGINFO -- we always set it, and honour it for the client
     SA_NOCLDSTOP -- passed to kernel
     SA_ONESHOT or SA_RESETHAND -- pass through
     SA_RESTART -- we observe this but set our handlers to always restart
                   (this doesn't apply to the Solaris port)
     SA_NOMASK or SA_NODEFER -- we observe this, but our handlers block everything
     SA_ONSTACK -- pass through
     SA_NOCLDWAIT -- pass through
*/


typedef
   struct {
      void* skss_handler;  /* VKI_SIG_DFL or VKI_SIG_IGN
                              or ptr to our handler */
      UInt skss_flags;
      /* There is no skss_mask, since we know that we will always ask
         for all signals to be blocked in our sighandlers. */
      /* Also there is no skss_restorer. */
   }
   SKSS_Per_Signal;

typedef
   struct {
      SKSS_Per_Signal skss_per_sig[1+_VKI_NSIG];
   }
   SKSS;

static SKSS skss;

/* returns True if signal is to be ignored.
   To check this, possibly call gdbserver with tid. */
static Bool is_sig_ign(vki_siginfo_t *info, ThreadId tid)
{
   vg_assert(info->si_signo >= 1 && info->si_signo <= _VKI_NSIG);

   /* If VG_(gdbserver_report_signal) tells to report the signal,
      then verify if this signal is not to be ignored.  GDB might have
      modified si_signo, so we check after the call to gdbserver. */
   return !VG_(gdbserver_report_signal) (info, tid)
      || scss.scss_per_sig[info->si_signo].scss_handler == VKI_SIG_IGN;
}

/* ---------------------------------------------------------------------
   Compute the SKSS required by the current SCSS.
   ------------------------------------------------------------------ */

static
void pp_SKSS ( void )
{
   Int sig;
   VG_(printf)("\n\nSKSS:\n");
   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      VG_(printf)("sig %d:  handler %p,  flags 0x%x\n", sig,
                  skss.skss_per_sig[sig].skss_handler,
                  skss.skss_per_sig[sig].skss_flags );
   }
}

/* This is the core, clever bit.  Computation is as follows:

   For each signal
      handler = if client has a handler, then our handler
                else if client is DFL, then our handler as well
                else (client must be IGN)
                     then handler is IGN
*/

static
void calculate_SKSS_from_SCSS ( SKSS* dst )
{
   Int   sig;
   UInt  scss_flags;
   UInt  skss_flags;

   for (sig = 1; sig <= _VKI_NSIG; sig++) {
      void *skss_handler;
      void *scss_handler;

      scss_handler = scss.scss_per_sig[sig].scss_handler;
      scss_flags   = scss.scss_per_sig[sig].scss_flags;

      switch(sig) {
         case VKI_SIGSEGV:
         case VKI_SIGBUS:
         case VKI_SIGFPE:
         case VKI_SIGILL:
         case VKI_SIGTRAP:
#if defined(VGO_freebsd)
         case VKI_SIGSYS:
#endif
            /* For these, we always want to catch them and report, even
               if the client code doesn't. */
            skss_handler = sync_signalhandler;
            break;

         case VKI_SIGCONT:
            /* Let the kernel handle SIGCONT unless the client is actually
               catching it. */
         case VKI_SIGCHLD:
         case VKI_SIGWINCH:
         case VKI_SIGURG:
            /* For signals which have a default action of Ignore,
               only set a handler if the client has set a signal handler.
               Otherwise the kernel will interrupt a syscall which
               wouldn't have otherwise been interrupted. */
            if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_DFL)
               skss_handler = VKI_SIG_DFL;
            else if (scss.scss_per_sig[sig].scss_handler == VKI_SIG_IGN)
               skss_handler = VKI_SIG_IGN;
            else
               skss_handler = async_signalhandler;
            break;

         default:
            // VKI_SIGVG* are runtime variables, so we can't make them
            // cases in the switch, so we handle them in the 'default' case.
            if (sig == VG_SIGVGKILL)
               skss_handler = sigvgkill_handler;
            else {
               if (scss_handler == VKI_SIG_IGN)
                  skss_handler = VKI_SIG_IGN;
               else
                  skss_handler = async_signalhandler;
            }
            break;
      }

      /* Flags */

      skss_flags = 0;

      /* SA_NOCLDSTOP, SA_NOCLDWAIT: pass to kernel */
      skss_flags |= scss_flags & (VKI_SA_NOCLDSTOP | VKI_SA_NOCLDWAIT);

      /* SA_ONESHOT: ignore client setting */

#     if !defined(VGO_solaris)
      /* SA_RESTART: ignore client setting and always set it for us.
         Though we never rely on the kernel to restart a
         syscall, we observe whether it wanted to restart the syscall
         or not, which is needed by
         VG_(fixup_guest_state_after_syscall_interrupted) */
      skss_flags |= VKI_SA_RESTART;
#     else
      /* The above does not apply to the Solaris port, where the kernel does
         not directly restart syscalls, but instead it checks SA_RESTART flag
         and if it is set then it returns ERESTART to libc and the library
         actually restarts the syscall. */
      skss_flags |= scss_flags & VKI_SA_RESTART;
#     endif

      /* SA_NOMASK: ignore it */

      /* SA_ONSTACK: client setting is irrelevant here */
      /* We don't set a signal stack, so ignore */

      /* always ask for SA_SIGINFO */
      if (skss_handler != VKI_SIG_IGN && skss_handler != VKI_SIG_DFL)
         skss_flags |= VKI_SA_SIGINFO;

      /* use our own restorer */
      skss_flags |= VKI_SA_RESTORER;

      /* Create SKSS entry for this signal. */
      if (sig != VKI_SIGKILL && sig != VKI_SIGSTOP)
         dst->skss_per_sig[sig].skss_handler = skss_handler;
      else
         dst->skss_per_sig[sig].skss_handler = VKI_SIG_DFL;

      dst->skss_per_sig[sig].skss_flags = skss_flags;
   }

   /* Sanity checks. */
   vg_assert(dst->skss_per_sig[VKI_SIGKILL].skss_handler == VKI_SIG_DFL);
   vg_assert(dst->skss_per_sig[VKI_SIGSTOP].skss_handler == VKI_SIG_DFL);

   if (0)
      pp_SKSS();
}


/* ---------------------------------------------------------------------
   After a possible SCSS change, update SKSS and the kernel itself.
   ------------------------------------------------------------------ */

// We need two levels of macro-expansion here to convert __NR_rt_sigreturn
// to a number before converting it to a string... sigh.
extern void my_sigreturn(void);

#if defined(VGP_x86_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movl $" #name ", %eax\n" \
   "    int  $0x80\n" \
   ".previous\n"

#elif defined(VGP_amd64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movq $" #name ", %rax\n" \
   "    syscall\n" \
   ".previous\n"

#elif defined(VGP_ppc32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    li 0, " #name "\n" \
   "    sc\n" \
   ".previous\n"

#elif defined(VGP_ppc64be_linux)
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".section \".opd\",\"aw\"\n" \
   ".align   3\n" \
   "my_sigreturn:\n" \
   ".quad    .my_sigreturn,.TOC.@tocbase,0\n" \
   ".previous\n" \
   ".type    .my_sigreturn,@function\n" \
   ".globl   .my_sigreturn\n" \
   ".my_sigreturn:\n" \
   "    li 0, " #name "\n" \
   "    sc\n"

#elif defined(VGP_ppc64le_linux)
/* Little Endian supports ELF version 2.  In the future, it may
 * support other versions.
 */
#  define _MY_SIGRETURN(name) \
   ".align   2\n" \
   ".globl   my_sigreturn\n" \
   ".type    .my_sigreturn,@function\n" \
   "my_sigreturn:\n" \
   "#if _CALL_ELF == 2 \n" \
   "0: addis        2,12,.TOC.-0b@ha\n" \
   "   addi         2,2,.TOC.-0b@l\n" \
   "   .localentry  my_sigreturn,.-my_sigreturn\n" \
   "#endif \n" \
   "   sc\n" \
   "   .size my_sigreturn,.-my_sigreturn\n"

#elif defined(VGP_arm_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "    mov  r7, #" #name "\n\t" \
   "    svc  0x00000000\n" \
   ".previous\n"

#elif defined(VGP_arm64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n\t" \
   "    mov  x8, #" #name "\n\t" \
   "    svc  0x0\n" \
   ".previous\n"

#elif defined(VGP_x86_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movl $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%eax\n" \
   "    int $0x80\n"

#elif defined(VGP_amd64_darwin)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    movq $" VG_STRINGIFY(__NR_DARWIN_FAKE_SIGRETURN) ",%rax\n" \
   "    syscall\n"

#elif defined(VGP_s390x_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "    svc " #name "\n" \
   ".previous\n"

#elif defined(VGP_mips32_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "    li $2, " #name "\n" /* apparently $2 is v0 */ \
   "    syscall\n" \
   ".previous\n"

#elif defined(VGP_mips64_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "    li $2, " #name "\n" \
   "    syscall\n" \
   ".previous\n"

#elif defined(VGP_nanomips_linux)
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   "my_sigreturn:\n" \
   "    li $t4, " #name "\n" \
   "    syscall[32]\n" \
   ".previous\n"

#elif defined(VGP_x86_solaris) || defined(VGP_amd64_solaris)
/* Not used on Solaris. */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "ud2\n" \
   ".previous\n"

#elif defined(VGP_x86_freebsd) || defined(VGP_amd64_freebsd)
/* Not used on FreeBSD */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "ud2\n" \
   ".previous\n"

#elif defined(VGP_arm64_freebsd)
/* Not used on FreeBSD */
#  define _MY_SIGRETURN(name) \
   ".text\n" \
   ".globl my_sigreturn\n" \
   "my_sigreturn:\n" \
   "udf #0\n" \
   ".previous\n"

#else
#  error Unknown platform
#endif

#define MY_SIGRETURN(name)  _MY_SIGRETURN(name)
asm(
   MY_SIGRETURN(__NR_rt_sigreturn)
);
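
/* (Why two levels?  A sketch: with a single level, the "#name" in
   _MY_SIGRETURN would stringify the literal token __NR_rt_sigreturn.
   The extra MY_SIGRETURN -> _MY_SIGRETURN hop lets the argument be
   macro-expanded to its numeric value first, so the stringified
   result is the number the kernel expects.) */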


static void handle_SCSS_change ( Bool force_update )
{
   Int  res, sig;
   SKSS skss_old;
   vki_sigaction_toK_t   ksa;
   vki_sigaction_fromK_t ksa_old;

   /* Remember old SKSS and calculate new one. */
   skss_old = skss;
   calculate_SKSS_from_SCSS ( &skss );

   /* Compare the new SKSS entries vs the old ones, and update kernel
      where they differ. */
   for (sig = 1; sig <= VG_(max_signal); sig++) {

      /* Trying to do anything with SIGKILL is pointless; just ignore
         it. */
      if (sig == VKI_SIGKILL || sig == VKI_SIGSTOP)
         continue;

      if (!force_update) {
         if ((skss_old.skss_per_sig[sig].skss_handler
              == skss.skss_per_sig[sig].skss_handler)
             && (skss_old.skss_per_sig[sig].skss_flags
                 == skss.skss_per_sig[sig].skss_flags))
            /* no difference */
            continue;
      }

      ksa.ksa_handler = skss.skss_per_sig[sig].skss_handler;
      ksa.sa_flags    = skss.skss_per_sig[sig].skss_flags;
#     if !defined(VGP_ppc32_linux) && \
         !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
         !defined(VGP_mips32_linux) && !defined(VGO_solaris) && !defined(VGO_freebsd)
      ksa.sa_restorer = my_sigreturn;
#     endif
      /* Re above ifdef (also the assertion below), PaulM says:
         The sa_restorer field is not used at all on ppc.  Glibc
         converts the sigaction you give it into a kernel sigaction,
         but it doesn't put anything in the sa_restorer field.
      */

      /* block all signals in handler */
      VG_(sigfillset)( &ksa.sa_mask );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGKILL );
      VG_(sigdelset)( &ksa.sa_mask, VKI_SIGSTOP );

      if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
         VG_(dmsg)("setting ksig %d to: hdlr %p, flags 0x%lx, "
                   "mask(msb..lsb) 0x%llx 0x%llx\n",
                   sig, ksa.ksa_handler,
                   (UWord)ksa.sa_flags,
                   _VKI_NSIG_WORDS > 1 ? (ULong)ksa.sa_mask.sig[1] : 0,
                   (ULong)ksa.sa_mask.sig[0]);

      res = VG_(sigaction)( sig, &ksa, &ksa_old );
      vg_assert(res == 0);

      /* Since we got the old sigaction more or less for free, might
         as well extract the maximum sanity-check value from it. */
      if (!force_update) {
         vg_assert(ksa_old.ksa_handler
                   == skss_old.skss_per_sig[sig].skss_handler);
#        if defined(VGO_solaris)
         if (ksa_old.ksa_handler == VKI_SIG_DFL
             || ksa_old.ksa_handler == VKI_SIG_IGN) {
            /* The Solaris kernel ignores signal flags (except SA_NOCLDWAIT
               and SA_NOCLDSTOP) and a signal mask if a handler is set to
               SIG_DFL or SIG_IGN. */
            skss_old.skss_per_sig[sig].skss_flags
               &= (VKI_SA_NOCLDWAIT | VKI_SA_NOCLDSTOP);
            vg_assert(VG_(isemptysigset)( &ksa_old.sa_mask ));
            VG_(sigfillset)( &ksa_old.sa_mask );
         }
#        endif
         vg_assert(ksa_old.sa_flags
                   == skss_old.skss_per_sig[sig].skss_flags);
#        if !defined(VGP_ppc32_linux) && \
            !defined(VGP_x86_darwin) && !defined(VGP_amd64_darwin) && \
            !defined(VGP_mips32_linux) && !defined(VGP_mips64_linux) && \
            !defined(VGP_nanomips_linux) && !defined(VGO_solaris) && \
            !defined(VGO_freebsd)
         vg_assert(ksa_old.sa_restorer == my_sigreturn);
#        endif
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGKILL );
         VG_(sigaddset)( &ksa_old.sa_mask, VKI_SIGSTOP );
         vg_assert(VG_(isfullsigset)( &ksa_old.sa_mask ));
      }
   }
}


/* ---------------------------------------------------------------------
   Update/query SCSS in accordance with client requests.
   ------------------------------------------------------------------ */

/* Logic for this alt-stack stuff copied directly from do_sigaltstack
   in kernel/signal.[ch] */

/* True if we are on the alternate signal stack. */
static Bool on_sig_stack ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (m_SP - (Addr)tst->altstack.ss_sp < (Addr)tst->altstack.ss_size);
}

static Int sas_ss_flags ( ThreadId tid, Addr m_SP )
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   return (tst->altstack.ss_size == 0
           ? VKI_SS_DISABLE
           : on_sig_stack(tid, m_SP) ? VKI_SS_ONSTACK : 0);
}


SysRes VG_(do_sys_sigaltstack) ( ThreadId tid, vki_stack_t* ss, vki_stack_t* oss )
{
   Addr m_SP;

   vg_assert(VG_(is_valid_tid)(tid));
   m_SP = VG_(get_SP)(tid);

   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaltstack: tid %u, "
                "ss %p{%p,sz=%llu,flags=0x%llx}, oss %p (current SP %p)\n",
                tid, (void*)ss,
                ss ? ss->ss_sp : 0,
                (ULong)(ss ? ss->ss_size : 0),
                (ULong)(ss ? ss->ss_flags : 0),
                (void*)oss, (void*)m_SP);

   if (oss != NULL) {
      oss->ss_sp    = VG_(threads)[tid].altstack.ss_sp;
      oss->ss_size  = VG_(threads)[tid].altstack.ss_size;
      oss->ss_flags = VG_(threads)[tid].altstack.ss_flags
                      | sas_ss_flags(tid, m_SP);
   }

   if (ss != NULL) {
      if (on_sig_stack(tid, VG_(get_SP)(tid))) {
         return VG_(mk_SysRes_Error)( VKI_EPERM );
      }
      if (ss->ss_flags != VKI_SS_DISABLE
          && ss->ss_flags != VKI_SS_ONSTACK
          && ss->ss_flags != 0) {
         return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
      if (ss->ss_flags == VKI_SS_DISABLE) {
         VG_(threads)[tid].altstack.ss_flags = VKI_SS_DISABLE;
      } else {
         if (ss->ss_size < VKI_MINSIGSTKSZ) {
            return VG_(mk_SysRes_Error)( VKI_ENOMEM );
         }

         VG_(threads)[tid].altstack.ss_sp    = ss->ss_sp;
         VG_(threads)[tid].altstack.ss_size  = ss->ss_size;
         VG_(threads)[tid].altstack.ss_flags = 0;
      }
   }
   return VG_(mk_SysRes_Success)( 0 );
}


SysRes VG_(do_sys_sigaction) ( Int signo,
                               const vki_sigaction_toK_t* new_act,
                               vki_sigaction_fromK_t* old_act )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("sys_sigaction: sigNo %d, "
                "new %#lx, old %#lx, new flags 0x%llx\n",
                signo, (UWord)new_act, (UWord)old_act,
                (ULong)(new_act ? new_act->sa_flags : 0));

   /* Rule out various error conditions.  The aim is to ensure that if
      the call is passed to the kernel it will definitely succeed. */

   /* Reject out-of-range signal numbers. */
   if (signo < 1 || signo > VG_(max_signal)) goto bad_signo;

   /* don't let them use our signals */
   if ( (signo > VG_SIGVGRTUSERMAX)
        && new_act
        && !(new_act->ksa_handler == VKI_SIG_DFL
             || new_act->ksa_handler == VKI_SIG_IGN) )
      goto bad_signo_reserved;

   /* Reject attempts to set a handler (or set ignore) for SIGKILL. */
   if ( (signo == VKI_SIGKILL || signo == VKI_SIGSTOP)
        && new_act
        && new_act->ksa_handler != VKI_SIG_DFL)
      goto bad_sigkill_or_sigstop;

   /* If the client supplied non-NULL old_act, copy the relevant SCSS
      entry into it. */
   if (old_act) {
      old_act->ksa_handler = scss.scss_per_sig[signo].scss_handler;
      old_act->sa_flags    = scss.scss_per_sig[signo].scss_flags;
      old_act->sa_mask     = scss.scss_per_sig[signo].scss_mask;
#     if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
        !defined(VGO_solaris)
      old_act->sa_restorer = scss.scss_per_sig[signo].scss_restorer;
#     endif
   }

   /* And now copy new SCSS entry from new_act. */
   if (new_act) {
      scss.scss_per_sig[signo].scss_handler  = new_act->ksa_handler;
      scss.scss_per_sig[signo].scss_flags    = new_act->sa_flags;
      scss.scss_per_sig[signo].scss_mask     = new_act->sa_mask;

      scss.scss_per_sig[signo].scss_restorer = NULL;
#     if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
        !defined(VGO_solaris)
      scss.scss_per_sig[signo].scss_restorer = new_act->sa_restorer;
#     endif

      scss.scss_per_sig[signo].scss_sa_tramp = NULL;
#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      scss.scss_per_sig[signo].scss_sa_tramp = new_act->sa_tramp;
#     endif

      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGKILL);
      VG_(sigdelset)(&scss.scss_per_sig[signo].scss_mask, VKI_SIGSTOP);
   }

   /* All happy bunnies ... */
   if (new_act) {
      handle_SCSS_change( False /* lazy update */ );
   }
   return VG_(mk_SysRes_Success)( 0 );

  bad_signo:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: bad signal number %d in sigaction()\n", signo);
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_signo_reserved:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is used internally by Valgrind\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );

  bad_sigkill_or_sigstop:
   if (VG_(showing_core_errors)() && !VG_(clo_xml)) {
      VG_(umsg)("Warning: ignored attempt to set %s handler in sigaction();\n",
                VG_(signame)(signo));
      VG_(umsg)("         the %s signal is uncatchable\n",
                VG_(signame)(signo));
   }
   return VG_(mk_SysRes_Error)( VKI_EINVAL );
}


static
void do_sigprocmask_bitops ( Int vki_how,
                             vki_sigset_t* orig_set,
                             vki_sigset_t* modifier )
{
   switch (vki_how) {
      case VKI_SIG_BLOCK:
         VG_(sigaddset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_UNBLOCK:
         VG_(sigdelset_from_set)( orig_set, modifier );
         break;
      case VKI_SIG_SETMASK:
         *orig_set = *modifier;
         break;
      default:
         VG_(core_panic)("do_sigprocmask_bitops");
         break;
   }
}

static
HChar* format_sigset ( const vki_sigset_t* set )
{
   static HChar buf[_VKI_NSIG_WORDS * 16 + 1];
   int w;

   VG_(strcpy)(buf, "");

   for (w = _VKI_NSIG_WORDS - 1; w >= 0; w--)
   {
#     if _VKI_NSIG_BPW == 32
      VG_(sprintf)(buf + VG_(strlen)(buf), "%08llx",
                   set ? (ULong)set->sig[w] : 0);
#     elif _VKI_NSIG_BPW == 64
      VG_(sprintf)(buf + VG_(strlen)(buf), "%16llx",
                   set ? (ULong)set->sig[w] : 0);
#     else
#       error "Unsupported value for _VKI_NSIG_BPW"
#     endif
   }

   return buf;
}

/*
   This updates the thread's signal mask.  There's no such thing as a
   process-wide signal mask.

   Note that the thread signal masks are an implicit part of SCSS,
   which is why this routine is allowed to mess with them.
*/
static
void do_setmask ( ThreadId tid,
                  Int how,
                  vki_sigset_t* newset,
                  vki_sigset_t* oldset )
{
   if (VG_(clo_trace_signals))
      VG_(dmsg)("do_setmask: tid = %u how = %d (%s), newset = %p (%s)\n",
                tid, how,
                how==VKI_SIG_BLOCK ? "SIG_BLOCK" : (
                   how==VKI_SIG_UNBLOCK ? "SIG_UNBLOCK" : (
                      how==VKI_SIG_SETMASK ? "SIG_SETMASK" : "???")),
                newset, newset ? format_sigset(newset) : "NULL" );

   /* Just do this thread. */
   vg_assert(VG_(is_valid_tid)(tid));
   if (oldset) {
      *oldset = VG_(threads)[tid].sig_mask;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("\toldset=%p %s\n", oldset, format_sigset(oldset));
   }
   if (newset) {
      do_sigprocmask_bitops (how, &VG_(threads)[tid].sig_mask, newset );
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGKILL);
      VG_(sigdelset)(&VG_(threads)[tid].sig_mask, VKI_SIGSTOP);
      VG_(threads)[tid].tmp_sig_mask = VG_(threads)[tid].sig_mask;
   }
}


SysRes VG_(do_sys_sigprocmask) ( ThreadId tid,
                                 Int how,
                                 vki_sigset_t* set,
                                 vki_sigset_t* oldset )
{
   /* Fix for case when ,,set,, is NULL.
      In this case ,,how,, flag should be ignored
      because we are only requesting from kernel
      to put current mask into ,,oldset,,.
      Taken from linux man pages (sigprocmask).
      The same is specified for POSIX.
   */
   if (set != NULL) {
      switch(how) {
         case VKI_SIG_BLOCK:
         case VKI_SIG_UNBLOCK:
         case VKI_SIG_SETMASK:
            break;

         default:
            VG_(dmsg)("sigprocmask: unknown 'how' field %d\n", how);
            return VG_(mk_SysRes_Error)( VKI_EINVAL );
      }
   }

   vg_assert(VG_(is_valid_tid)(tid));
   do_setmask(tid, how, set, oldset);
   return VG_(mk_SysRes_Success)( 0 );
}


/* ---------------------------------------------------------------------
   LOW LEVEL STUFF TO DO WITH SIGNALS: IMPLEMENTATION
   ------------------------------------------------------------------ */

/* ---------------------------------------------------------------------
   Handy utilities to block/restore all host signals.
   ------------------------------------------------------------------ */

/* Block all host signals, dumping the old mask in *saved_mask. */
static void block_all_host_signals ( /* OUT */ vki_sigset_t* saved_mask )
{
   Int           ret;
   vki_sigset_t block_procmask;
   VG_(sigfillset)(&block_procmask);
   ret = VG_(sigprocmask)
            (VKI_SIG_SETMASK, &block_procmask, saved_mask);
   vg_assert(ret == 0);
}

/* Restore the blocking mask using the supplied saved one. */
static void restore_all_host_signals ( /* IN */ vki_sigset_t* saved_mask )
{
   Int ret;
   ret = VG_(sigprocmask)(VKI_SIG_SETMASK, saved_mask, NULL);
   vg_assert(ret == 0);
}

void VG_(clear_out_queued_signals)( ThreadId tid, vki_sigset_t* saved_mask )
{
   block_all_host_signals(saved_mask);
   if (VG_(threads)[tid].sig_queue != NULL) {
      VG_(free)(VG_(threads)[tid].sig_queue);
      VG_(threads)[tid].sig_queue = NULL;
   }
   restore_all_host_signals(saved_mask);
}


/* ---------------------------------------------------------------------
   The signal simulation proper.  A simplified version of what the
   Linux kernel does.
   ------------------------------------------------------------------ */

/* Set up a stack frame (VgSigContext) for the client's signal
   handler. */
static
void push_signal_frame ( ThreadId tid, const vki_siginfo_t *siginfo,
                                       const struct vki_ucontext *uc )
{
   Bool         on_altstack;
   Addr         esp_top_of_frame;
   ThreadState* tst;
   Int          sigNo = siginfo->si_signo;

   vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));
   vg_assert(VG_(is_valid_tid)(tid));
   tst = & VG_(threads)[tid];

   if (VG_(clo_trace_signals)) {
      VG_(dmsg)("push_signal_frame (thread %u): signal %d\n", tid, sigNo);
      VG_(get_and_pp_StackTrace)(tid, 10);
   }

   if (/* this signal asked to run on an alt stack */
       (scss.scss_per_sig[sigNo].scss_flags & VKI_SA_ONSTACK )
       && /* there is a defined and enabled alt stack, which we're not
             already using.  Logic from get_sigframe in
             arch/i386/kernel/signal.c. */
          sas_ss_flags(tid, VG_(get_SP)(tid)) == 0
      ) {
      on_altstack = True;
      esp_top_of_frame
         = (Addr)(tst->altstack.ss_sp) + tst->altstack.ss_size;
      if (VG_(clo_trace_signals))
         VG_(dmsg)("delivering signal %d (%s) to thread %u: "
                   "on ALT STACK (%p-%p; %ld bytes)\n",
                   sigNo, VG_(signame)(sigNo), tid, tst->altstack.ss_sp,
                   (UChar *)tst->altstack.ss_sp + tst->altstack.ss_size,
                   (Word)tst->altstack.ss_size );
   } else {
      on_altstack = False;
      esp_top_of_frame = VG_(get_SP)(tid) - VG_STACK_REDZONE_SZB;
   }

   /* Signal delivery to tools */
   VG_TRACK( pre_deliver_signal, tid, sigNo, on_altstack );

   vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_IGN);
   vg_assert(scss.scss_per_sig[sigNo].scss_handler != VKI_SIG_DFL);

   /* This may fail if the client stack is busted; if that happens,
      the whole process will exit rather than simply calling the
      signal handler. */
   VG_(sigframe_create) (tid, on_altstack, esp_top_of_frame, siginfo, uc,
                         scss.scss_per_sig[sigNo].scss_handler,
                         scss.scss_per_sig[sigNo].scss_flags,
                         &tst->sig_mask,
                         scss.scss_per_sig[sigNo].scss_restorer);
}
const HChar *VG_(signame)(Int sigNo)
{
   static HChar buf[20];   // large enough

   switch(sigNo) {
      case VKI_SIGHUP:    return "SIGHUP";
      case VKI_SIGINT:    return "SIGINT";
      case VKI_SIGQUIT:   return "SIGQUIT";
      case VKI_SIGILL:    return "SIGILL";
      case VKI_SIGTRAP:   return "SIGTRAP";
      case VKI_SIGABRT:   return "SIGABRT";
      case VKI_SIGBUS:    return "SIGBUS";
      case VKI_SIGFPE:    return "SIGFPE";
      case VKI_SIGKILL:   return "SIGKILL";
      case VKI_SIGUSR1:   return "SIGUSR1";
      case VKI_SIGUSR2:   return "SIGUSR2";
      case VKI_SIGSEGV:   return "SIGSEGV";
      case VKI_SIGSYS:    return "SIGSYS";
      case VKI_SIGPIPE:   return "SIGPIPE";
      case VKI_SIGALRM:   return "SIGALRM";
      case VKI_SIGTERM:   return "SIGTERM";
#     if defined(VKI_SIGSTKFLT)
      case VKI_SIGSTKFLT: return "SIGSTKFLT";
#     endif
      case VKI_SIGCHLD:   return "SIGCHLD";
      case VKI_SIGCONT:   return "SIGCONT";
      case VKI_SIGSTOP:   return "SIGSTOP";
      case VKI_SIGTSTP:   return "SIGTSTP";
      case VKI_SIGTTIN:   return "SIGTTIN";
      case VKI_SIGTTOU:   return "SIGTTOU";
      case VKI_SIGURG:    return "SIGURG";
      case VKI_SIGXCPU:   return "SIGXCPU";
      case VKI_SIGXFSZ:   return "SIGXFSZ";
      case VKI_SIGVTALRM: return "SIGVTALRM";
      case VKI_SIGPROF:   return "SIGPROF";
      case VKI_SIGWINCH:  return "SIGWINCH";
      case VKI_SIGIO:     return "SIGIO";
#     if defined(VKI_SIGPWR)
      case VKI_SIGPWR:    return "SIGPWR";
#     endif
#     if defined(VKI_SIGUNUSED) && (VKI_SIGUNUSED != VKI_SIGSYS)
      case VKI_SIGUNUSED: return "SIGUNUSED";
#     endif
#     if defined(VKI_SIGINFO)
      case VKI_SIGINFO:   return "SIGINFO";
#     endif

      /* Solaris-specific signals. */
#     if defined(VKI_SIGEMT)
      case VKI_SIGEMT:    return "SIGEMT";
#     endif
#     if defined(VKI_SIGWAITING)
      case VKI_SIGWAITING: return "SIGWAITING";
#     endif
#     if defined(VKI_SIGLWP)
      case VKI_SIGLWP:    return "SIGLWP";
#     endif
#     if defined(VKI_SIGFREEZE)
      case VKI_SIGFREEZE: return "SIGFREEZE";
#     endif
#     if defined(VKI_SIGTHAW)
      case VKI_SIGTHAW:   return "SIGTHAW";
#     endif
#     if defined(VKI_SIGCANCEL)
      case VKI_SIGCANCEL: return "SIGCANCEL";
#     endif
#     if defined(VKI_SIGLOST)
      case VKI_SIGLOST:   return "SIGLOST";
#     endif
#     if defined(VKI_SIGXRES)
      case VKI_SIGXRES:   return "SIGXRES";
#     endif
#     if defined(VKI_SIGJVM1)
      case VKI_SIGJVM1:   return "SIGJVM1";
#     endif
#     if defined(VKI_SIGJVM2)
      case VKI_SIGJVM2:   return "SIGJVM2";
#     endif

#     if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
      case VKI_SIGRTMIN ... VKI_SIGRTMAX:
         VG_(sprintf)(buf, "SIGRT%d", sigNo-VKI_SIGRTMIN);
         return buf;
#     endif

      default:
         VG_(sprintf)(buf, "SIG%d", sigNo);
         return buf;
   }
}
/* Hit ourselves with a signal using the default handler */
void VG_(kill_self)(Int sigNo)
{
   Int r;
   vki_sigset_t          mask, origmask;
   vki_sigaction_toK_t   sa, origsa2;
   vki_sigaction_fromK_t origsa;

   sa.ksa_handler = VKI_SIG_DFL;
   sa.sa_flags = 0;
#  if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
      !defined(VGO_solaris)
   sa.sa_restorer = 0;
#  endif
   VG_(sigemptyset)(&sa.sa_mask);

   VG_(sigaction)(sigNo, &sa, &origsa);

   VG_(sigemptyset)(&mask);
   VG_(sigaddset)(&mask, sigNo);
   VG_(sigprocmask)(VKI_SIG_UNBLOCK, &mask, &origmask);

   r = VG_(kill)(VG_(getpid)(), sigNo);
#  if !defined(VGO_darwin)
   /* This sometimes fails with EPERM on Darwin.  I don't know why. */
   vg_assert(r == 0);
#  endif

   VG_(convert_sigaction_fromK_to_toK)( &origsa, &origsa2 );
   VG_(sigaction)(sigNo, &origsa2, NULL);
   VG_(sigprocmask)(VKI_SIG_SETMASK, &origmask, NULL);
}
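/* For reference, the same save/arm/raise/restore dance in plain POSIX
   terms - a sketch only, using libc names rather than the VG_/VKI_
   wrappers, and compiled out: */
#if 0
#include <signal.h>
#include <unistd.h>
static void kill_self_posix(int signo)
{
   struct sigaction sa, origsa;
   sigset_t mask, origmask;
   sa.sa_handler = SIG_DFL;                     /* 1. force default action */
   sa.sa_flags = 0;
   sigemptyset(&sa.sa_mask);
   sigaction(signo, &sa, &origsa);
   sigemptyset(&mask);
   sigaddset(&mask, signo);
   sigprocmask(SIG_UNBLOCK, &mask, &origmask);  /* 2. make it deliverable */
   kill(getpid(), signo);                       /* 3. hit ourselves */
   sigaction(signo, &origsa, NULL);             /* 4. restore handler */
   sigprocmask(SIG_SETMASK, &origmask, NULL);   /*    ... and mask */
}
#endif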
// The si_code describes where the signal came from.  Some come from the
// kernel, eg.: seg faults, illegal opcodes.  Some come from the user, eg.:
// from kill() (SI_USER), or timer_settime() (SI_TIMER), or an async I/O
// request (SI_ASYNCIO).  There's lots of implementation-defined leeway in
// POSIX, but the user vs. kernel distinction is what we want here.  We also
// pass in some other details that can help when si_code is unreliable.
static Bool is_signal_from_kernel(ThreadId tid, int signum, int si_code)
{
#  if defined(VGO_linux) || defined(VGO_solaris)
   // On Linux, SI_USER is zero, negative values are from the user, positive
   // values are from the kernel.  There are SI_FROMUSER and SI_FROMKERNEL
   // macros but we don't use them here because other platforms don't have
   // them.
   return ( si_code > VKI_SI_USER ? True : False );

#  elif defined(VGO_freebsd)
   // The comment below seems a bit out of date.  From the siginfo manpage:
   //
   // Full support for POSIX signal information first appeared in FreeBSD 7.0.
   // The codes SI_USER and SI_KERNEL can be generated as of FreeBSD 8.1.  The
   // code SI_LWP can be generated as of FreeBSD 9.0.
   if (si_code == VKI_SI_USER || si_code == VKI_SI_LWP)
      return False;

   // It looks like there's no reliable way to say where the signal came from
   if (VG_(threads)[tid].status == VgTs_WaitSys) {
      return False;
   } else
      return True;

#  elif defined(VGO_darwin)
   // On Darwin 9.6.0, the si_code is completely unreliable.  It should be the
   // case that 0 means "user", and >0 means "kernel".  But:
   // - For SIGSEGV, it seems quite reliable.
   // - For SIGBUS, it's always 2.
   // - For SIGFPE, it's often 0, even for kernel ones (eg.
   //   div-by-integer-zero always gives zero).
   // - For SIGILL, it's unclear.
   // - For SIGTRAP, it's always 1.
   // You can see the "NOTIMP" (not implemented) status of a number of the
   // sub-cases in sys/signal.h.  Hopefully future versions of Darwin will
   // get this right.

   // If we're blocked waiting on a syscall, it must be a user signal, because
   // the kernel won't generate sync signals within syscalls.
   if (VG_(threads)[tid].status == VgTs_WaitSys) {
      return False;

   // If it's a SIGSEGV, use the proper condition, since it's fairly reliable.
   } else if (SIGSEGV == signum) {
      return ( si_code > 0 ? True : False );

   // If it's anything else, assume it's kernel-generated.  Reason being that
   // kernel-generated sync signals are more common, and it's probable that
   // misdiagnosing a user signal as a kernel signal is better than the
   // opposite.
   } else {
      return True;
   }
#  else
#    error Unknown OS
#  endif
}
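/* Worked example for the Linux/Solaris branch above: kill(2) reports
   SI_USER (0) and tkill(2) reports SI_TKILL (-6), both <= SI_USER, so
   they classify as user-sent; a genuine fault carries a positive code
   such as SEGV_MAPERR (1), which classifies as kernel-sent.  Sketch
   (compiled out): */
#if 0
   vg_assert(  is_signal_from_kernel(tid, VKI_SIGSEGV, VKI_SEGV_MAPERR) );
   vg_assert( !is_signal_from_kernel(tid, VKI_SIGSEGV, VKI_SI_USER) );
#endif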
/* 
   Perform the default action of a signal.  If the signal is fatal, it
   terminates all other threads, but it doesn't actually kill
   the process or the calling thread.

   If we're not being quiet, then print out some more detail about
   fatal signals (esp. core dumping signals).
 */
static void default_action(const vki_siginfo_t *info, ThreadId tid)
{
   Int  sigNo     = info->si_signo;
   Bool terminate = False;      /* kills process         */
   Bool core      = False;      /* kills process w/ core */
   struct vki_rlimit corelim;
   Bool could_core;
   ThreadState* tst = VG_(get_ThreadState)(tid);

   vg_assert(VG_(is_running_thread)(tid));
   switch(sigNo) {
      case VKI_SIGQUIT:  /* core */
      case VKI_SIGILL:   /* core */
      case VKI_SIGABRT:  /* core */
      case VKI_SIGFPE:   /* core */
      case VKI_SIGSEGV:  /* core */
      case VKI_SIGBUS:   /* core */
      case VKI_SIGTRAP:  /* core */
      case VKI_SIGSYS:   /* core */
      case VKI_SIGXCPU:  /* core */
      case VKI_SIGXFSZ:  /* core */

      /* Solaris-specific signals. */
#     if defined(VKI_SIGEMT)
      case VKI_SIGEMT:   /* core */
#     endif

         terminate = True;
         core = True;
         break;

      case VKI_SIGHUP:   /* term */
      case VKI_SIGINT:   /* term */
      case VKI_SIGKILL:  /* term - we won't see this */
      case VKI_SIGPIPE:  /* term */
      case VKI_SIGALRM:  /* term */
      case VKI_SIGTERM:  /* term */
      case VKI_SIGUSR1:  /* term */
      case VKI_SIGUSR2:  /* term */
      case VKI_SIGIO:    /* term */
#     if defined(VKI_SIGPWR)
      case VKI_SIGPWR:   /* term */
#     endif
      case VKI_SIGPROF:  /* term */
      case VKI_SIGVTALRM:   /* term */
#     if defined(VKI_SIGRTMIN) && defined(VKI_SIGRTMAX)
      case VKI_SIGRTMIN ... VKI_SIGRTMAX: /* term */
#     endif

      /* Solaris-specific signals. */
#     if defined(VKI_SIGLOST)
      case VKI_SIGLOST:  /* term */
#     endif

         terminate = True;
         break;
   }
   vg_assert(!core || (core && terminate));

   if (VG_(clo_trace_signals))
      VG_(dmsg)("delivering %d (code %d) to default handler; action: %s%s\n",
                sigNo, info->si_code, terminate ? "terminate" : "ignore",
                core ? "+core" : "");

   if (!terminate)
      return;                   /* nothing to do */

#if defined(VGO_linux)
   if (terminate && (tst->ptrace & VKI_PT_PTRACED)
       && (sigNo != VKI_SIGKILL)) {
      VG_(kill)(VG_(getpid)(), VKI_SIGSTOP);
      return;
   }
#endif

   could_core = core;

   if (core) {
      /* If they set the core-size limit to zero, don't generate a
         core file */

      VG_(getrlimit)(VKI_RLIMIT_CORE, &corelim);

      if (corelim.rlim_cur == 0)
         core = False;
   }
   if ( VG_(clo_verbosity) >= 1 
        || (could_core && is_signal_from_kernel(tid, sigNo, info->si_code))
        || VG_(clo_xml) ) {
      if (VG_(clo_xml)) {
         VG_(printf_xml)("<fatal_signal>\n");
         VG_(printf_xml)("  <tid>%u</tid>\n", tid);
         if (tst->thread_name) {
            VG_(printf_xml)("  <threadname>%s</threadname>\n",
                            tst->thread_name);
         }
         VG_(printf_xml)("  <signo>%d</signo>\n", sigNo);
         VG_(printf_xml)("  <signame>%s</signame>\n", VG_(signame)(sigNo));
         VG_(printf_xml)("  <sicode>%d</sicode>\n", info->si_code);
      } else {
         VG_(umsg)(
            "\n"
            "Process terminating with default action of signal %d (%s)%s\n",
            sigNo, VG_(signame)(sigNo), core ? ": dumping core" : "");
      }

      /* Be helpful - decode some more details about this fault */
      if (is_signal_from_kernel(tid, sigNo, info->si_code)) {
         const HChar *event = NULL;
         Bool haveaddr = True;

         switch(sigNo) {
            case VKI_SIGSEGV:
               switch(info->si_code) {
                  case VKI_SEGV_MAPERR: event = "Access not within mapped region";
                                        break;
                  case VKI_SEGV_ACCERR: event = "Bad permissions for mapped region";
                                        break;
                  case VKI_SEGV_MADE_UP_GPF:
                     /* General Protection Fault: The CPU/kernel
                        isn't telling us anything useful, but this
                        is commonly the result of exceeding a
                        segment limit. */
                     event = "General Protection Fault"; 
                     haveaddr = False;
                     break;
               }
#if 0
               {
                  HChar buf[50];  // large enough
                  VG_(am_show_nsegments)(0,"post segfault");
                  VG_(sprintf)(buf, "/bin/cat /proc/%d/maps", VG_(getpid)());
                  VG_(system)(buf);
               }
#endif
               break;

            case VKI_SIGILL:
               switch(info->si_code) {
                  case VKI_ILL_ILLOPC: event = "Illegal opcode"; break;
                  case VKI_ILL_ILLOPN: event = "Illegal operand"; break;
                  case VKI_ILL_ILLADR: event = "Illegal addressing mode"; break;
                  case VKI_ILL_ILLTRP: event = "Illegal trap"; break;
                  case VKI_ILL_PRVOPC: event = "Privileged opcode"; break;
                  case VKI_ILL_PRVREG: event = "Privileged register"; break;
                  case VKI_ILL_COPROC: event = "Coprocessor error"; break;
                  case VKI_ILL_BADSTK: event = "Internal stack error"; break;
               }
               break;

            case VKI_SIGFPE:
               switch (info->si_code) {
                  case VKI_FPE_INTDIV: event = "Integer divide by zero"; break;
                  case VKI_FPE_INTOVF: event = "Integer overflow"; break;
                  case VKI_FPE_FLTDIV: event = "FP divide by zero"; break;
                  case VKI_FPE_FLTOVF: event = "FP overflow"; break;
                  case VKI_FPE_FLTUND: event = "FP underflow"; break;
                  case VKI_FPE_FLTRES: event = "FP inexact"; break;
                  case VKI_FPE_FLTINV: event = "FP invalid operation"; break;
                  case VKI_FPE_FLTSUB: event = "FP subscript out of range"; break;

                  /* Solaris-specific codes. */
#                 if defined(VKI_FPE_FLTDEN)
                  case VKI_FPE_FLTDEN: event = "FP denormalize"; break;
#                 endif
               }
               break;

            case VKI_SIGBUS:
               switch (info->si_code) {
                  case VKI_BUS_ADRALN: event = "Invalid address alignment"; break;
                  case VKI_BUS_ADRERR: event = "Non-existent physical address"; break;
                  case VKI_BUS_OBJERR: event = "Hardware error"; break;
#if defined(VGO_freebsd)
                  // This si_code can be generated for both SIGBUS and SIGSEGV on FreeBSD.
                  // This is undocumented.
                  case VKI_SEGV_PAGE_FAULT:
                  // It should get replaced with this non-standard value, which is documented.
                  case VKI_BUS_OOMERR:
                     event = "Access not within mapped region";
#endif
               }
               break;
         } /* switch (sigNo) */
         if (VG_(clo_xml)) {
            if (event != NULL)
               VG_(printf_xml)("  <event>%s</event>\n", event);
            if (haveaddr)
               VG_(printf_xml)("  <siaddr>%p</siaddr>\n",
                               info->VKI_SIGINFO_si_addr);
         } else {
            if (event != NULL) {
               if (haveaddr)
                  VG_(umsg)(" %s at address %p\n",
                            event, info->VKI_SIGINFO_si_addr);
               else
                  VG_(umsg)(" %s\n", event);
            }
         }
      }

      /* Print a stack trace.  Be cautious if the thread's SP is in an
         obviously stupid place (not mapped readable) that would
         likely cause a segfault. */
      if (VG_(is_valid_tid)(tid)) {
         Word first_ip_delta = 0;
#if defined(VGO_linux) || defined(VGO_solaris)
         /* Make sure that the address stored in the stack pointer is 
            located in a mapped page. That is not necessarily so. E.g.
            consider the scenario where the stack pointer was decreased
            and now has a value that is just below the end of a page that has
            not been mapped yet. In that case VG_(am_is_valid_for_client)
            will consider the address of the stack pointer invalid and that 
            would cause a back-trace of depth 1 to be printed, instead of a
            full back-trace. */
         if (tid == 1) {           // main thread
            Addr esp = VG_(get_SP)(tid);
            Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
            if (VG_(am_addr_is_in_extensible_client_stack)(base)
                && VG_(extend_stack)(tid, base)) {
               if (VG_(clo_trace_signals))
                  VG_(dmsg)("       -> extended stack base to %#lx\n",
                            VG_PGROUNDDN(esp));
            }
         }
#endif
#if defined(VGA_s390x)
         if (sigNo == VKI_SIGILL) {
            /* The guest instruction address has been adjusted earlier to
               point to the insn following the one that could not be decoded.
               When printing the back-trace here we need to undo that
               adjustment so the first line in the back-trace reports the
               correct address. */
            Addr  addr = (Addr)info->VKI_SIGINFO_si_addr;
            UChar byte = ((UChar *)addr)[0];
            Int   insn_length = ((((byte >> 6) + 1) >> 1) + 1) << 1;

            first_ip_delta = -insn_length;
         }
#endif
         ExeContext* ec = VG_(am_is_valid_for_client)
                             (VG_(get_SP)(tid), sizeof(Addr), VKI_PROT_READ)
                          ? VG_(record_ExeContext)( tid, first_ip_delta )
                          : VG_(record_depth_1_ExeContext)( tid,
                                                            first_ip_delta );
         vg_assert(ec);
         VG_(pp_ExeContext)( ec );
      }
      if (sigNo == VKI_SIGSEGV 
          && is_signal_from_kernel(tid, sigNo, info->si_code)
          && info->si_code == VKI_SEGV_MAPERR) {
         VG_(umsg)("  If you believe this happened as a result of a stack\n" );
         VG_(umsg)("  overflow in your program's main thread (unlikely but\n");
         VG_(umsg)("  possible), you can try to increase the size of the\n"  );
         VG_(umsg)("  main thread stack using the --main-stacksize= flag.\n" );
         // FIXME: assumes main ThreadId == 1
         if (VG_(is_valid_tid)(1)) {
            VG_(umsg)(
               "  The main thread stack size used in this run was %lu.\n",
               VG_(threads)[1].client_stack_szB);
         }
      }

      if (VG_(clo_xml)) {
         /* postamble */
         VG_(printf_xml)("</fatal_signal>\n");
         VG_(printf_xml)("\n");
      }
   }

   if (VG_(clo_vgdb) != Vg_VgdbNo
       && VG_(clo_vgdb_error) <= VG_(get_n_errs_shown)() + 1) {
      /* Note: we add + 1 to n_errs_shown as the fatal signal was not
         reported through error msg, and so was not counted. */
      VG_(gdbserver_report_fatal_signal) (info, tid);
   }

   if (core) {
      static const struct vki_rlimit zero = { 0, 0 };

      VG_(make_coredump)(tid, info, corelim.rlim_cur);

      /* Make sure we don't get a confusing kernel-generated
         coredump when we finally exit */
      VG_(setrlimit)(VKI_RLIMIT_CORE, &zero);
   }

   // what's this for?
   //VG_(threads)[VG_(master_tid)].os_state.fatalsig = sigNo;

   /* everyone but tid dies */
   VG_(nuke_all_threads_except)(tid, VgSrc_FatalSig);
   VG_(reap_threads)(tid);
   /* stash fatal signal in this thread */
   VG_(threads)[tid].exitreason = VgSrc_FatalSig;
   VG_(threads)[tid].os_state.fatalsig = sigNo;
}
/* 
   This does the business of delivering a signal to a thread.  It may
   be called from either a real signal handler, or from normal code to
   cause the thread to enter the signal handler.

   This updates the thread state, but it does not set it to be
   Runnable.
*/
static void deliver_signal ( ThreadId tid, const vki_siginfo_t *info,
                             const struct vki_ucontext *uc )
{
   Int              sigNo    = info->si_signo;
   SCSS_Per_Signal  *handler = &scss.scss_per_sig[sigNo];
   void             *handler_fn;
   ThreadState      *tst = VG_(get_ThreadState)(tid);

#if defined(VGO_linux)
   /* If this signal is SIGCHLD and it came from a process which valgrind
      created for some internal use, then it should not be delivered to
      the client. */
   if (sigNo == VKI_SIGCHLD && ht_sigchld_ignore != NULL) {
      Int pid = info->_sifields._sigchld._pid;
      ht_ignore_node *n = VG_(HT_lookup)(ht_sigchld_ignore, pid);

      if (n != NULL) {
         /* If the child has terminated, remove its PID from the
            ignore list. */
         if (info->si_code == VKI_CLD_EXITED
             || info->si_code == VKI_CLD_KILLED
             || info->si_code == VKI_CLD_DUMPED) {
            VG_(HT_remove)(ht_sigchld_ignore, pid);
            VG_(free)(n);
         }
         return;
      }
   }
#endif

   if (VG_(clo_trace_signals))
      VG_(dmsg)("delivering signal %d (%s):%d to thread %u\n", 
                sigNo, VG_(signame)(sigNo), info->si_code, tid );

   if (sigNo == VG_SIGVGKILL) {
      /* If this is a SIGVGKILL, we're expecting it to interrupt any
         blocked syscall.  It doesn't matter whether the VCPU state is
         set to restart or not, because we don't expect it will
         execute any more client instructions. */
      vg_assert(VG_(is_exiting)(tid));
      return;
   }

   /* If the client specifies SIG_IGN, treat it as SIG_DFL.

      If deliver_signal() is being called on a thread, we want
      the signal to get through no matter what; if they're ignoring
      it, then we do this override (this is so we can send it SIGSEGV,
      etc). */
   handler_fn = handler->scss_handler;
   if (handler_fn == VKI_SIG_IGN) 
      handler_fn = VKI_SIG_DFL;

   vg_assert(handler_fn != VKI_SIG_IGN);

   if (handler_fn == VKI_SIG_DFL) {
      default_action(info, tid);
   } else {
      /* Create a signal delivery frame, and set the client's %ESP and
         %EIP so that when execution continues, we will enter the
         signal handler with the frame on top of the client's stack,
         as it expects.

         Signal delivery can fail if the client stack is too small or
         missing, and we can't push the frame.  If that happens,
         push_signal_frame will cause the whole process to exit when
         we next hit the scheduler. */
      vg_assert(VG_(is_valid_tid)(tid));

      push_signal_frame ( tid, info, uc );

      if (handler->scss_flags & VKI_SA_ONESHOT) {
         /* Do the ONESHOT thing. */
         handler->scss_handler = VKI_SIG_DFL;

         handle_SCSS_change( False /* lazy update */ );
      }

      /* At this point:
         tst->sig_mask is the current signal mask
         tst->tmp_sig_mask is the same as sig_mask, unless we're in sigsuspend
         handler->scss_mask is the mask set by the handler

         Handler gets a mask of tmp_sig_mask|handler_mask|signo
       */
      tst->sig_mask = tst->tmp_sig_mask;
      if (!(handler->scss_flags & VKI_SA_NOMASK)) {
         VG_(sigaddset_from_set)(&tst->sig_mask, &handler->scss_mask);
         VG_(sigaddset)(&tst->sig_mask, sigNo);
         tst->tmp_sig_mask = tst->sig_mask;
      }
   }

   /* Thread state is ready to go - just add Runnable */
}
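/* Worked example of the mask computation at the end of deliver_signal:
   if the thread currently blocks {SIGUSR1}, the handler for SIGSEGV was
   installed with sa_mask = {SIGUSR2} and without SA_NODEFER, then on
   handler entry the thread's mask becomes
   {SIGUSR1} | {SIGUSR2} | {SIGSEGV}.  Sketch only (hypothetical
   handler_mask, compiled out): */
#if 0
   vki_sigset_t m = tst->tmp_sig_mask;          /* {SIGUSR1}             */
   VG_(sigaddset_from_set)(&m, &handler_mask);  /* | handler's scss_mask */
   VG_(sigaddset)(&m, VKI_SIGSEGV);             /* | the signal itself   */
#endif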
static void resume_scheduler(ThreadId tid)
{
   ThreadState *tst = VG_(get_ThreadState)(tid);

   vg_assert(tst->os_state.lwpid == VG_(gettid)());

   if (tst->sched_jmpbuf_valid) {
      /* Can't continue; must longjmp back to the scheduler and thus
         enter the sighandler immediately. */
      VG_MINIMAL_LONGJMP(tst->sched_jmpbuf);
   }
}
static void synth_fault_common(ThreadId tid, Addr addr, Int si_code)
{
   vki_siginfo_t info;

   vg_assert(VG_(threads)[tid].status == VgTs_Runnable);

   VG_(memset)(&info, 0, sizeof(info));
   info.si_signo = VKI_SIGSEGV;
   info.si_code  = si_code;
   info.VKI_SIGINFO_si_addr = (void*)addr;

   /* Even if gdbserver indicates to ignore the signal, we must deliver it.
      So ignore the return value of VG_(gdbserver_report_signal). */
   (void) VG_(gdbserver_report_signal) (&info, tid);

   /* If they're trying to block the signal, force it to be delivered */
   if (VG_(sigismember)(&VG_(threads)[tid].sig_mask, VKI_SIGSEGV))
      VG_(set_default_handler)(VKI_SIGSEGV);

   deliver_signal(tid, &info, NULL);
}
// Synthesize a fault where the address is OK, but the page
// permissions are bad.
void VG_(synth_fault_perms)(ThreadId tid, Addr addr)
{
   synth_fault_common(tid, addr, VKI_SEGV_ACCERR);
}

// Synthesize a fault where there's nothing mapped at the address.
void VG_(synth_fault_mapping)(ThreadId tid, Addr addr)
{
   synth_fault_common(tid, addr, VKI_SEGV_MAPERR);
}

// Synthesize a misc memory fault.
void VG_(synth_fault)(ThreadId tid)
{
   synth_fault_common(tid, 0, VKI_SEGV_MADE_UP_GPF);
}
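/* Usage sketch: syscall wrappers use these synth_* entry points to turn
   a bad guest pointer into a guest-visible SIGSEGV rather than a
   host-side crash.  Hypothetical wrapper fragment (compiled out); the
   thread enters the client's SIGSEGV handler, or dies by default
   action, when it next runs: */
#if 0
   if (!VG_(am_is_valid_for_client)(arg_addr, sizeof(UWord), VKI_PROT_READ))
      VG_(synth_fault_mapping)(tid, arg_addr);  /* SEGV_MAPERR at arg_addr */
#endif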
// Synthesise a SIGILL.
void VG_(synth_sigill)(ThreadId tid, Addr addr)
{
   vki_siginfo_t info;

   vg_assert(VG_(threads)[tid].status == VgTs_Runnable);

   VG_(memset)(&info, 0, sizeof(info));
   info.si_signo = VKI_SIGILL;
   info.si_code  = VKI_ILL_ILLOPC; /* jrs: no idea what this should be */
   info.VKI_SIGINFO_si_addr = (void*)addr;

   if (VG_(gdbserver_report_signal) (&info, tid)) {
      resume_scheduler(tid);
      deliver_signal(tid, &info, NULL);
   }
   else
      resume_scheduler(tid);
}
// Synthesise a SIGBUS.
void VG_(synth_sigbus)(ThreadId tid)
{
   vki_siginfo_t info;

   vg_assert(VG_(threads)[tid].status == VgTs_Runnable);

   VG_(memset)(&info, 0, sizeof(info));
   info.si_signo = VKI_SIGBUS;
   /* There are several meanings to SIGBUS (as per POSIX, presumably),
      but the most widely understood is "invalid address alignment",
      so let's use that. */
   info.si_code = VKI_BUS_ADRALN;
   /* If we knew the invalid address in question, we could put it
      in .si_addr.  Oh well. */
   /* info.VKI_SIGINFO_si_addr = (void*)addr; */

   if (VG_(gdbserver_report_signal) (&info, tid)) {
      resume_scheduler(tid);
      deliver_signal(tid, &info, NULL);
   }
   else
      resume_scheduler(tid);
}
// Synthesise a SIGTRAP.
void VG_(synth_sigtrap)(ThreadId tid)
{
   vki_siginfo_t info;
   struct vki_ucontext uc;
#  if defined(VGP_x86_darwin)
   struct __darwin_mcontext32 mc;
#  elif defined(VGP_amd64_darwin)
   struct __darwin_mcontext64 mc;
#  endif

   vg_assert(VG_(threads)[tid].status == VgTs_Runnable);

   VG_(memset)(&info, 0, sizeof(info));
   VG_(memset)(&uc,   0, sizeof(uc));
   info.si_signo = VKI_SIGTRAP;
   info.si_code  = VKI_TRAP_BRKPT; /* tjh: only ever called for a brkpt ins */

#  if defined(VGP_x86_linux) || defined(VGP_amd64_linux)
   uc.uc_mcontext.trapno = 3;     /* tjh: this is the x86 trap number
                                          for a breakpoint trap... */
   uc.uc_mcontext.err = 0;        /* tjh: no error code for x86
                                          breakpoint trap... */
#  elif defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
   /* the same thing, but using Darwin field/struct names */
   VG_(memset)(&mc, 0, sizeof(mc));
   uc.uc_mcontext = &mc;
   uc.uc_mcontext->__es.__trapno = 3;
   uc.uc_mcontext->__es.__err = 0;
#  elif defined(VGP_x86_solaris)
   uc.uc_mcontext.gregs[VKI_ERR] = 0;
   uc.uc_mcontext.gregs[VKI_TRAPNO] = VKI_T_BPTFLT;
#  endif

   /* fixs390: do we need to do anything here for s390 ? */
   if (VG_(gdbserver_report_signal) (&info, tid)) {
      resume_scheduler(tid);
      deliver_signal(tid, &info, &uc);
   }
   else
      resume_scheduler(tid);
}
// Synthesise a SIGFPE.
void VG_(synth_sigfpe)(ThreadId tid, UInt code)
{
   // Only tested on mips32, mips64, s390x and nanomips.
#if !defined(VGA_mips32) && !defined(VGA_mips64) && !defined(VGA_s390x) && !defined(VGA_nanomips)
   vg_assert(0);
#else
   vki_siginfo_t info;
   struct vki_ucontext uc;

   vg_assert(VG_(threads)[tid].status == VgTs_Runnable);

   VG_(memset)(&info, 0, sizeof(info));
   VG_(memset)(&uc,   0, sizeof(uc));
   info.si_signo = VKI_SIGFPE;
   info.si_code  = code;

   if (VG_(gdbserver_report_signal) (&info, tid)) {
      resume_scheduler(tid);
      deliver_signal(tid, &info, &uc);
   }
   else
      resume_scheduler(tid);
#endif
}
/* Make a signal pending for a thread, for later delivery.
   VG_(poll_signals) will arrange for it to be delivered at the right
   time. 

   tid==0 means add it to the process-wide queue, and not send it to a
   specific thread.
*/
static 
void queue_signal(ThreadId tid, const vki_siginfo_t *si)
{
   ThreadState *tst;
   SigQueue *sq;
   vki_sigset_t savedmask;

   tst = VG_(get_ThreadState)(tid);

   /* Protect the signal queue against async deliveries */
   block_all_host_signals(&savedmask);

   if (tst->sig_queue == NULL) {
      tst->sig_queue = VG_(malloc)("signals.qs.1", sizeof(*tst->sig_queue));
      VG_(memset)(tst->sig_queue, 0, sizeof(*tst->sig_queue));
   }
   sq = tst->sig_queue;

   if (VG_(clo_trace_signals))
      VG_(dmsg)("Queueing signal %d (idx %d) to thread %u\n",
                si->si_signo, sq->next, tid);

   /* Add signal to the queue.  If the queue gets overrun, then old
      queued signals may get lost. 

      XXX We should also keep a sigset of pending signals, so that at
      least a non-siginfo signal gets delivered.
   */
   if (sq->sigs[sq->next].si_signo != 0)
      VG_(umsg)("Signal %d being dropped from thread %u's queue\n",
                sq->sigs[sq->next].si_signo, tid);

   sq->sigs[sq->next] = *si;
   sq->next = (sq->next+1) % N_QUEUED_SIGNALS;

   restore_all_host_signals(&savedmask);
}
/*
   Returns the next queued signal for thread tid which is in "set".
   tid==0 means process-wide signal.  Set si_signo to 0 when the
   signal has been delivered.

   Must be called with all signals blocked, to protect against async
   deliveries.
*/
static vki_siginfo_t *next_queued(ThreadId tid, const vki_sigset_t *set)
{
   ThreadState *tst = VG_(get_ThreadState)(tid);
   SigQueue *sq;
   Int idx;
   vki_siginfo_t *ret = NULL;

   sq = tst->sig_queue;
   if (sq == NULL)
      goto out;

   idx = sq->next;
   do {
      if (0)
         VG_(printf)("idx=%d si_signo=%d inset=%d\n", idx,
                     sq->sigs[idx].si_signo,
                     VG_(sigismember)(set, sq->sigs[idx].si_signo));

      if (sq->sigs[idx].si_signo != 0
          && VG_(sigismember)(set, sq->sigs[idx].si_signo)) {
         if (VG_(clo_trace_signals))
            VG_(dmsg)("Returning queued signal %d (idx %d) for thread %u\n",
                      sq->sigs[idx].si_signo, idx, tid);
         ret = &sq->sigs[idx];
         goto out;
      }

      idx = (idx + 1) % N_QUEUED_SIGNALS;
   } while(idx != sq->next);
  out:
   return ret;
}
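/* Round-trip sketch of the queue machinery above (illustrative only;
   next_queued additionally requires all host signals to be blocked, as
   VG_(poll_signals) arranges): */
#if 0
   vki_siginfo_t si, *p;
   vki_sigset_t interesting;
   VG_(memset)(&si, 0, sizeof(si));
   si.si_signo = VKI_SIGUSR1;
   queue_signal(tid, &si);              /* producer: store into the ring */
   VG_(sigfillset)(&interesting);
   p = next_queued(tid, &interesting);  /* consumer: oldest pending in set */
   if (p != NULL) {
      /* ... deliver *p ..., then mark the slot free: */
      p->si_signo = 0;
   }
#endif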
static int sanitize_si_code(int si_code)
{
#if defined(VGO_linux)
   /* The Linux kernel uses the top 16 bits of si_code for its own
      use and only exports the bottom 16 bits to user space - at least
      that is the theory, but it turns out that there are some kernels
      around that forget to mask out the top 16 bits so we do it here.

      The kernel treats the bottom 16 bits as signed and (when it does
      mask them off) sign extends them when exporting to user space so
      we do the same thing here. */
   return (Short)si_code;
#elif defined(VGO_darwin) || defined(VGO_solaris) || defined(VGO_freebsd)
   return si_code;
#else
#  error Unknown OS
#endif
}
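/* Worked example for the Linux branch: a kernel that forgets to mask
   the top 16 bits might report 0x00010006; truncating to a signed
   16-bit value still yields 6.  Likewise 0x0000fffa becomes -6
   (SI_TKILL), matching what a well-behaved kernel would export.
   Sketch (compiled out): */
#if 0
   vg_assert( sanitize_si_code(0x00010006) == 6 );
   vg_assert( sanitize_si_code(0x0000fffa) == -6 );  /* VKI_SI_TKILL */
#endif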
#if defined(VGO_solaris)
/* Following function is used to switch Valgrind from a client stack back onto
   a Valgrind stack.  It is used only when the door_return call was invoked by
   the client because this is the only syscall which is executed directly on
   the client stack (see syscall-{x86,amd64}-solaris.S).  The switch onto the
   Valgrind stack has to be made as soon as possible because there is no
   guarantee that there is enough space on the client stack to run the
   complete signal machinery.  Also, Valgrind has to be switched back onto its
   stack before a simulated signal frame is created because that will
   overwrite the real sigframe built by the kernel. */
static void async_signalhandler_solaris_preprocess(ThreadId tid, Int *signo,
                                                   vki_siginfo_t *info,
                                                   struct vki_ucontext *uc)
{
#  define RECURSION_BIT 0x1000
   Addr sp;
   vki_sigframe_t *frame;
   ThreadState *tst = VG_(get_ThreadState)(tid);
   Int rec_signo;

   /* If not doing door_return then return instantly. */
   if (!tst->os_state.in_door_return)
      return;

   /* Check for the recursion:
      v ...
      | async_signalhandler - executed on the client stack
      v async_signalhandler_solaris_preprocess - first call switches the
      |                                          stacks and sets the
      |                                          RECURSION_BIT flag
      v async_signalhandler - executed on the Valgrind stack
      | async_signalhandler_solaris_preprocess - the RECURSION_BIT flag is
      v                                          set, clear it and return
    */
   if (*signo & RECURSION_BIT) {
      *signo &= ~RECURSION_BIT;
      return;
   }

   rec_signo = *signo | RECURSION_BIT;

#  if defined(VGP_x86_solaris)
   /* Register %ebx/%rbx points to the top of the original V stack. */
   sp = uc->uc_mcontext.gregs[VKI_EBX];
#  elif defined(VGP_amd64_solaris)
   sp = uc->uc_mcontext.gregs[VKI_REG_RBX];
#  else
#    error "Unknown platform"
#  endif

   /* Build a fake signal frame, similarly as in sigframe-solaris.c. */
   /* Calculate a new stack pointer. */
   sp -= sizeof(vki_sigframe_t);
   sp = VG_ROUNDDN(sp, 16) - sizeof(UWord);

   /* Fill in the frame. */
   frame = (vki_sigframe_t*)sp;
   /* Set a bogus return address. */
   frame->return_addr = (void*)~0UL;
   frame->a1_signo = rec_signo;
   /* The first parameter has to be 16-byte aligned, resembling a function
      call. */
   {
      /* Using
         vg_assert(VG_IS_16_ALIGNED(&frame->a1_signo));
         seems to get miscompiled on amd64 with GCC 4.7.2. */
      Addr signo_addr = (Addr)&frame->a1_signo;
      vg_assert(VG_IS_16_ALIGNED(signo_addr));
   }
   frame->a2_siginfo = &frame->siginfo;
   frame->siginfo = *info;
   frame->ucontext = *uc;

#  if defined(VGP_x86_solaris)
   frame->a3_ucontext = &frame->ucontext;

   /* Switch onto the V stack and restart the signal processing. */
   __asm__ __volatile__(
      "xorl %%ebp, %%ebp\n"
      "movl %[sp], %%esp\n"
      "jmp async_signalhandler\n"
      :
      : [sp] "a" (sp)
      : /*"ebp"*/);

#  elif defined(VGP_amd64_solaris)
   __asm__ __volatile__(
      "xorq %%rbp, %%rbp\n"
      "movq %[sp], %%rsp\n"
      "jmp async_signalhandler\n"
      :
      : [sp] "a" (sp), "D" (rec_signo), "S" (&frame->siginfo),
        "d" (&frame->ucontext)
      : /*"rbp"*/);
#  else
#    error "Unknown platform"
#  endif

   /* We should never get here. */
   vg_assert(0);

#  undef RECURSION_BIT
}
#endif
/* 
   Receive an async signal from the kernel.

   This should only happen when the thread is blocked in a syscall,
   since that's the only time this set of signals is unblocked.
*/
static 
void async_signalhandler ( Int sigNo,
                           vki_siginfo_t *info, struct vki_ucontext *uc )
{
   ThreadId     tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
   ThreadState* tst = VG_(get_ThreadState)(tid);
   SysRes       sres;

   vg_assert(tst->status == VgTs_WaitSys);

#  if defined(VGO_solaris)
   async_signalhandler_solaris_preprocess(tid, &sigNo, info, uc);
#  endif

   /* The thread isn't currently running, make it so before going on */
   VG_(acquire_BigLock)(tid, "async_signalhandler");

   info->si_code = sanitize_si_code(info->si_code);

   if (VG_(clo_trace_signals))
      VG_(dmsg)("async signal handler: signal=%d, vgtid=%d, tid=%u, si_code=%d, "
                "exitreason %s\n",
                sigNo, VG_(gettid)(), tid, info->si_code,
                VG_(name_of_VgSchedReturnCode)(tst->exitreason));

   /* See similar logic in VG_(poll_signals). */
   if (tst->exitreason != VgSrc_None)
      resume_scheduler(tid);

   /* Update thread state properly.  The signal can only have been 
      delivered whilst we were in 
      coregrind/m_syswrap/syscall-<PLAT>.S, and only then in the
      window between the two sigprocmask calls, since at all other
      times, we run with async signals on the host blocked.  Hence
      make enquiries on the basis that we were in or very close to a
      syscall, and attempt to fix up the guest state accordingly.

      (normal async signals occurring during computation are blocked,
      but periodically polled for using VG_(sigtimedwait_zero), and
      delivered at a point convenient for us.  Hence this routine only
      deals with signals that are delivered to a thread during a
      syscall.) */

   /* First, extract a SysRes from the ucontext_t* given to this
      handler.  If it is subsequently established by
      VG_(fixup_guest_state_after_syscall_interrupted) that the
      syscall was complete but the results had not been committed yet
      to the guest state, then it'll have to commit the results itself
      "by hand", and so we need to extract the SysRes.  Of course if
      the thread was not in that particular window then the
      SysRes will be meaningless, but that's OK too because
      VG_(fixup_guest_state_after_syscall_interrupted) will detect
      that the thread was not in said window and ignore the SysRes. */

   /* To make matters more complex still, on Darwin we need to know
      the "class" of the syscall under consideration in order to be
      able to extract a correct SysRes.  The class will have been
      saved just before the syscall, by VG_(client_syscall), into this
      thread's tst->arch.vex.guest_SC_CLASS.  Hence: */
#  if defined(VGO_darwin)
   sres = VG_UCONTEXT_SYSCALL_SYSRES(uc, tst->arch.vex.guest_SC_CLASS);
#  else
   sres = VG_UCONTEXT_SYSCALL_SYSRES(uc);
#  endif

   /* (1) */
   VG_(fixup_guest_state_after_syscall_interrupted)(
      tid,
      VG_UCONTEXT_INSTR_PTR(uc),
      sres,
      !!(scss.scss_per_sig[sigNo].scss_flags & VKI_SA_RESTART)
         || VG_(is_in_kernel_restart_syscall)(tid),
      uc
   );

   /* (2) */
   /* Set up the thread's state to deliver a signal.
      However, if exitreason is VgSrc_FatalSig, then thread tid was
      taken out of a syscall by VG_(nuke_all_threads_except).
      But after the emission of VKI_SIGKILL, another (fatal) async
      signal might be sent.  In such a case, we must not handle this
      signal, as the thread is supposed to die first.
      => resume the scheduler for such a thread, so that the scheduler
      can let the thread die. */
   if (tst->exitreason != VgSrc_FatalSig 
       && !is_sig_ign(info, tid))
      deliver_signal(tid, info, uc);

   /* It's crucial that (1) and (2) happen in the order (1) then (2)
      and not the other way around.  (1) fixes up the guest thread
      state to reflect the fact that the syscall was interrupted --
      either to restart the syscall or to return EINTR.  (2) then sets
      up the thread state to deliver the signal.  Then we resume
      execution.  First, the signal handler is run, since that's the
      second adjustment we made to the thread state.  If that returns,
      then we resume at the guest state created by (1), viz, either
      the syscall returns EINTR or is restarted.

      If (2) was done before (1) the outcome would be completely
      different, and wrong. */

   /* longjmp back to the thread's main loop to start executing the
      handler. */
   resume_scheduler(tid);

   VG_(core_panic)("async_signalhandler: got unexpected signal "
                   "while outside of scheduler");
}
/* Extend the stack of thread #tid to cover addr.  It is expected that
   addr either points into an already mapped anonymous segment or into a
   reservation segment abutting the stack segment.  Everything else is a bug.

   Returns True on success, False on failure.

   Succeeds without doing anything if addr is already within a segment.

   Failure could be caused by:
   - addr not below a growable segment
   - new stack size would exceed the stack limit for the given thread
   - mmap failed for some other reason
*/
Bool VG_(extend_stack)(ThreadId tid, Addr addr)
{
   SizeT udelta;
   Addr new_stack_base;

   /* Get the segment containing addr. */
   const NSegment* seg = VG_(am_find_nsegment)(addr);
   vg_assert(seg != NULL);

   /* TODO: the test "seg->kind == SkAnonC" is really inadequate,
      because although it tests whether the segment is mapped
      _somehow_, it doesn't check that it has the right permissions
      (r,w, maybe x) ?  */
   if (seg->kind == SkAnonC)
      /* addr is already mapped.  Nothing to do. */
      return True;

   const NSegment* seg_next = VG_(am_next_nsegment)( seg, True/*fwds*/ );
   vg_assert(seg_next != NULL);

   udelta = VG_PGROUNDUP(seg_next->start - addr);
   new_stack_base = seg_next->start - udelta;

   VG_(debugLog)(1, "signals", 
                 "extending a stack base 0x%lx down by %lu"
                 " new base 0x%lx to cover 0x%lx\n",
                 seg_next->start, udelta, new_stack_base, addr);
   Bool overflow;
   if (! VG_(am_extend_into_adjacent_reservation_client)
         ( seg_next->start, -(SSizeT)udelta, &overflow )) {
      if (overflow)
         VG_(umsg)("Stack overflow in thread #%u: can't grow stack to %#lx\n",
                   tid, new_stack_base);
      else
         VG_(umsg)("Cannot map memory to grow the stack for thread #%u "
                   "to %#lx\n", tid, new_stack_base);
      return False;
   }

   /* When we change the main stack, we have to let the stack handling
      code know about it. */
   VG_(change_stack)(VG_(clstk_id), new_stack_base, VG_(clstk_end));

   if (VG_(clo_sanity_level) >= 3)
      VG_(sanity_check_general)(False);

   return True;
}
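/* Worked example of the arithmetic above, assuming 4KB pages: with
   seg_next->start == 0x8000000 and addr == 0x7ffe123,
   udelta = VG_PGROUNDUP(0x1edd) = 0x2000, so new_stack_base ==
   0x7ffe000: the reservation shrinks by two pages and the stack grows
   by the same amount, just covering addr.  (Compiled out:) */
#if 0
   vg_assert( VG_PGROUNDUP(0x8000000UL - 0x7ffe123UL) == 0x2000 );
#endif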
static fault_catcher_t fault_catcher = NULL;

fault_catcher_t VG_(set_fault_catcher)(fault_catcher_t catcher)
{
   fault_catcher_t prev_catcher = fault_catcher;
   fault_catcher = catcher;
   return prev_catcher;
}
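/* Usage sketch for the catcher hook, modelled on what a leak scanner
   needs: probe memory that may be unmapped without killing the process.
   Hypothetical names; assumes the VG_MINIMAL_SETJMP machinery from
   pub_core_libcsetjmp.h (compiled out): */
#if 0
static VG_MINIMAL_JMP_BUF(probe_env);

static void probe_catcher(Int sigNo, Addr addr)
{
   /* called from sync_signalhandler_from_kernel; escape the fault */
   VG_MINIMAL_LONGJMP(probe_env);
}

static Bool can_read_word(Addr a)
{
   Bool ok = False;
   fault_catcher_t prev = VG_(set_fault_catcher)(probe_catcher);
   if (VG_MINIMAL_SETJMP(probe_env) == 0) {
      (void) *(volatile UWord*)a;      /* may fault */
      ok = True;
   }
   VG_(set_fault_catcher)(prev);       /* always restore the old catcher */
   return ok;
}
#endif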
static
void sync_signalhandler_from_user ( ThreadId tid,
         Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
{
   ThreadId qtid;

   /* If some user-process sent us a sync signal (ie. it's not the result
      of a faulting instruction), then how we treat it depends on when it
      arrives... */

   if (VG_(threads)[tid].status == VgTs_WaitSys
#     if defined(VGO_solaris)
      /* Check if the signal was really received while doing a blocking
         syscall.  Only then the async_signalhandler() path can be used. */
       && VG_(is_ip_in_blocking_syscall)(tid, VG_UCONTEXT_INSTR_PTR(uc))
#     endif
         ) {
      /* Signal arrived while we're blocked in a syscall.  This means that
         the client's signal mask was applied; in other words, we can't
         get here unless the client wants this signal right now.  This means
         we can simply use the async_signalhandler. */
      if (VG_(clo_trace_signals))
         VG_(dmsg)("Delivering user-sent sync signal %d as async signal\n",
                   sigNo);

      async_signalhandler(sigNo, info, uc);
      VG_(core_panic)("async_signalhandler returned!?\n");

   } else {
      /* Signal arrived while in generated client code, or while running
         Valgrind core code.  That means that every thread has these signals
         unblocked, so we can't rely on the kernel to route them properly, so
         we need to queue them manually. */
      if (VG_(clo_trace_signals))
         VG_(dmsg)("Routing user-sent sync signal %d via queue\n", sigNo);

#     if defined(VGO_linux)
      /* On Linux, first we have to do a sanity check of the siginfo. */
      if (info->VKI_SIGINFO_si_pid == 0) {
         /* There's a per-user limit of pending siginfo signals.  If
            you exceed this, by having more than that number of
            pending signals with siginfo, then new signals are
            delivered without siginfo.  This condition can be caused
            by any unrelated program you're running at the same time
            as Valgrind, if it has a large number of pending siginfo
            signals which it isn't taking delivery of.

            Since we depend on siginfo to work out why we were sent a
            signal and what we should do about it, we really can't
            continue unless we get it. */
         VG_(umsg)("Signal %d (%s) appears to have lost its siginfo; "
                   "I can't go on.\n", sigNo, VG_(signame)(sigNo));
         VG_(printf)(
            "  This may be because one of your programs has consumed your ration of\n"
            "  siginfo structures.  For more information, see:\n"
            "    http://kerneltrap.org/mailarchive/1/message/25599/thread\n"
            "  Basically, some program on your system is building up a large queue of\n"
            "  pending signals, and this causes the siginfo data for other signals to\n"
            "  be dropped because it's exceeding a system limit.  However, Valgrind\n"
            "  absolutely needs siginfo for SIGSEGV.  A workaround is to track down the\n"
            "  offending program and avoid running it while using Valgrind, but there\n"
            "  is no easy way to do this.  Apparently the problem was fixed in kernel\n"
            "  2.6.12.\n");

         /* It's a fatal signal, so we force the default handler. */
         VG_(set_default_handler)(sigNo);
         deliver_signal(tid, info, uc);
         resume_scheduler(tid);
         VG_(exit)(99);       /* If we can't resume, then just exit */
      }
#     endif

      qtid = 0;         /* shared pending by default */
#     if defined(VGO_linux)
      if (info->si_code == VKI_SI_TKILL)
         qtid = tid;    /* directed to us specifically */
#     endif
      queue_signal(qtid, info);
   }
}
/* Returns the reported fault address for an exact address */
static Addr fault_mask(Addr in)
{
   /* We have to use VG_PGROUNDDN because faults on s390x only deliver
      the page address but not the address within a page.
    */
#  if defined(VGA_s390x)
   return VG_PGROUNDDN(in);
#  else
   return in;
#  endif
}
/* Returns True if the sync signal was due to the stack requiring extension
   and the extension was successful.
*/
static Bool extend_stack_if_appropriate(ThreadId tid, vki_siginfo_t* info)
{
   Addr fault;
   Addr esp;
   NSegment const *seg, *seg_next;

   if (info->si_signo != VKI_SIGSEGV)
      return False;

   fault    = (Addr)info->VKI_SIGINFO_si_addr;
   esp      = VG_(get_SP)(tid);
   seg      = VG_(am_find_nsegment)(fault);
   seg_next = seg ? VG_(am_next_nsegment)( seg, True/*fwds*/ )
                  : NULL;

   if (VG_(clo_trace_signals)) {
      if (seg == NULL)
         VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
                   "seg=NULL\n",
                   info->si_code, fault, tid, esp);
      else
         VG_(dmsg)("SIGSEGV: si_code=%d faultaddr=%#lx tid=%u ESP=%#lx "
                   "seg=%#lx-%#lx\n",
                   info->si_code, fault, tid, esp, seg->start, seg->end);
   }

   if (info->si_code == VKI_SEGV_MAPERR
       && seg
       && seg->kind == SkResvn
       && seg->smode == SmUpper
       && seg_next
       && seg_next->kind == SkAnonC
       && fault >= fault_mask(esp - VG_STACK_REDZONE_SZB)) {
      /* If the fault address is above esp but below the current known
         stack segment base, and it was a fault because there was
         nothing mapped there (as opposed to a permissions fault),
         then extend the stack segment. 
       */
      Addr base = VG_PGROUNDDN(esp - VG_STACK_REDZONE_SZB);
      if (VG_(am_addr_is_in_extensible_client_stack)(base)
          && VG_(extend_stack)(tid, base)) {
         if (VG_(clo_trace_signals))
            VG_(dmsg)("       -> extended stack base to %#lx\n",
                      VG_PGROUNDDN(fault));
         return True;
      } else {
         return False;
      }
   } else {
      return False;
   }
}
static
void sync_signalhandler_from_kernel ( ThreadId tid,
         Int sigNo, vki_siginfo_t *info, struct vki_ucontext *uc )
{
   /* Check to see if some part of Valgrind itself is interested in faults.
      The fault catcher should never be set whilst we're in generated code, so
      check for that.  AFAIK the only use of the catcher right now is
      memcheck's leak detector. */
   if (fault_catcher) {
      vg_assert(VG_(in_generated_code) == False);

      (*fault_catcher)(sigNo, (Addr)info->VKI_SIGINFO_si_addr);
      /* If the catcher returns, then it didn't handle the fault,
         so carry on panicking. */
   }

   if (extend_stack_if_appropriate(tid, info)) {
      /* Stack extension occurred, so we don't need to do anything else; upon
         returning from this function, we'll restart the host (hence guest)
         instruction. */
   } else {
      /* OK, this is a signal we really have to deal with.  If it came
         from the client's code, then we can jump back into the scheduler
         and have it delivered.  Otherwise it's a Valgrind bug. */
      ThreadState *tst = VG_(get_ThreadState)(tid);

      if (VG_(sigismember)(&tst->sig_mask, sigNo)) {
         /* signal is blocked, but they're not allowed to block faults */
         VG_(set_default_handler)(sigNo);
      }

      if (VG_(in_generated_code)) {
         if (VG_(gdbserver_report_signal) (info, tid)
             || VG_(sigismember)(&tst->sig_mask, sigNo)) {
            /* Can't continue; must longjmp back to the scheduler and thus
               enter the sighandler immediately. */
            deliver_signal(tid, info, uc);
            resume_scheduler(tid);
         }
         else
            resume_scheduler(tid);
      }

      /* If resume_scheduler returns or it's our fault, it means we
         don't have longjmp set up, implying that we weren't running
         client code, and therefore it was actually generated by
         Valgrind internally.
       */
      VG_(dmsg)("VALGRIND INTERNAL ERROR: Valgrind received "
                "a signal %d (%s) - exiting\n",
                sigNo, VG_(signame)(sigNo));

      VG_(dmsg)("si_code=%d;  Faulting address: %p;  sp: %#lx\n",
                info->si_code, info->VKI_SIGINFO_si_addr,
                (Addr)VG_UCONTEXT_STACK_PTR(uc));

      if (0)
         VG_(kill_self)(sigNo);  /* generate a core dump */

      /* tid == 0 could happen after everyone has exited, which indicates
         a bug in the core (cleanup) code.  Don't assert tid must be valid,
         that will mess up the valgrind core backtrace if it fails, coming
         from the signal handler. */
      // vg_assert(tid != 0);

      UnwindStartRegs startRegs;
      VG_(memset)(&startRegs, 0, sizeof(startRegs));

      VG_UCONTEXT_TO_UnwindStartRegs(&startRegs, uc);
      VG_(core_panic_at)("Killed by fatal signal", &startRegs);
   }
}
/* 
   Receive a sync signal from the host. 
*/
static
void sync_signalhandler ( Int sigNo,
                          vki_siginfo_t *info, struct vki_ucontext *uc )
{
   ThreadId tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
   Bool from_user;

   if (0) 
      VG_(printf)("sync_sighandler(%d, %p, %p)\n", sigNo, info, uc);

   vg_assert(info != NULL);
   vg_assert(info->si_signo == sigNo);
   vg_assert(sigNo == VKI_SIGSEGV
             || sigNo == VKI_SIGBUS
             || sigNo == VKI_SIGFPE
             || sigNo == VKI_SIGILL
             || sigNo == VKI_SIGTRAP);

   info->si_code = sanitize_si_code(info->si_code);

   from_user = !is_signal_from_kernel(tid, sigNo, info->si_code);

   if (VG_(clo_trace_signals)) {
      VG_(dmsg)("sync signal handler: "
                "signal=%d, si_code=%d, EIP=%#lx, eip=%#lx, from %s\n",
                sigNo, info->si_code, VG_(get_IP)(tid), 
                (Addr)VG_UCONTEXT_INSTR_PTR(uc),
                ( from_user ? "user" : "kernel" ));
   }
   vg_assert(sigNo >= 1 && sigNo <= VG_(max_signal));

   /* // debug code:
   if (0) {
      VG_(printf)("info->si_signo  %d\n", info->si_signo);
      VG_(printf)("info->si_errno  %d\n", info->si_errno);
      VG_(printf)("info->si_code   %d\n", info->si_code);
      VG_(printf)("info->si_pid    %d\n", info->si_pid);
      VG_(printf)("info->si_uid    %d\n", info->si_uid);
      VG_(printf)("info->si_status %d\n", info->si_status);
      VG_(printf)("info->si_addr   %p\n", info->si_addr);
   }
   */

   /* Figure out if the signal is being sent from outside the process.
      (Why do we care?)  If the signal is from the user rather than the
      kernel, then treat it more like an async signal than a sync signal --
      that is, merely queue it for later delivery. */
   if (from_user) {
      sync_signalhandler_from_user(  tid, sigNo, info, uc);
   } else {
      sync_signalhandler_from_kernel(tid, sigNo, info, uc);
   }

#  if defined(VGO_solaris)
   /* On Solaris we have to return from signal handler manually. */
   VG_(do_syscall2)(__NR_context, VKI_SETCONTEXT, (UWord)uc);
#  endif
}
/* 
   Kill this thread.  Makes it leave any syscall it might be currently
   blocked in, and return to the scheduler.  This doesn't mark the thread
   as exiting; that's the caller's job.
 */
static void sigvgkill_handler(int signo, vki_siginfo_t *si,
                              struct vki_ucontext *uc)
{
   ThreadId     tid = VG_(lwpid_to_vgtid)(VG_(gettid)());
   ThreadStatus at_signal = VG_(threads)[tid].status;

   if (VG_(clo_trace_signals))
      VG_(dmsg)("sigvgkill for lwp %d tid %u\n", VG_(gettid)(), tid);

   VG_(acquire_BigLock)(tid, "sigvgkill_handler");

   vg_assert(signo == VG_SIGVGKILL);
   vg_assert(si->si_signo == signo);

   /* jrs 2006 August 3: the following assertion seems incorrect to
      me, and fails on AIX.  sigvgkill could be sent to a thread which
      is runnable - see VG_(nuke_all_threads_except) in the scheduler.
      Hence comment these out ..  

         vg_assert(VG_(threads)[tid].status == VgTs_WaitSys);
         VG_(post_syscall)(tid);

      and instead do:
   */
   if (at_signal == VgTs_WaitSys)
      VG_(post_syscall)(tid);
   /* jrs 2006 August 3 ends */

   resume_scheduler(tid);

   VG_(core_panic)("sigvgkill_handler couldn't return to the scheduler\n");
}
static __attribute((unused))
void pp_ksigaction ( vki_sigaction_toK_t* sa )
{
   Int i;
   VG_(printf)("pp_ksigaction: handler %p, flags 0x%x, restorer %p\n", 
               sa->ksa_handler, 
               (UInt)sa->sa_flags, 
#              if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
                  !defined(VGO_solaris)
               sa->sa_restorer
#              else
               (void*)0
#              endif
              );
   VG_(printf)("pp_ksigaction: { ");
   for (i = 1; i <= VG_(max_signal); i++)
      if (VG_(sigismember)(&(sa->sa_mask),i))
         VG_(printf)("%d ", i);
   VG_(printf)("}\n");
}
/* 
   Force signal handler to default
 */
void VG_(set_default_handler)(Int signo)
{
   vki_sigaction_toK_t sa;   

   sa.ksa_handler = VKI_SIG_DFL;
   sa.sa_flags = 0;
#  if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
      !defined(VGO_solaris)
   sa.sa_restorer = 0;
#  endif
   VG_(sigemptyset)(&sa.sa_mask);

   VG_(do_sys_sigaction)(signo, &sa, NULL);
}
/* 
   Poll for pending signals, and set the next one up for delivery.
 */
void VG_(poll_signals)(ThreadId tid)
{
   vki_siginfo_t si, *sip;
   vki_sigset_t pollset;
   ThreadState *tst = VG_(get_ThreadState)(tid);
   vki_sigset_t saved_mask;

   if (tst->exitreason != VgSrc_None ) {
      /* This task has been requested to die (e.g. due to a fatal signal
         received by the process, or because of a call to exit syscall).
         So, we cannot poll new signals, as we are supposed to die asap.
         If we polled and delivered a new (maybe fatal) signal, it could
         cause a deadlock, as this thread would believe it has to terminate
         the other threads and wait for them to die, while we already have
         a thread doing that. */
      if (VG_(clo_trace_signals))
         VG_(dmsg)("poll_signals: not polling as thread %u is exitreason %s\n",
                   tid, VG_(name_of_VgSchedReturnCode)(tst->exitreason));
      return;
   }

   /* look for all the signals this thread isn't blocking */
   /* pollset = ~tst->sig_mask */
   VG_(sigcomplementset)( &pollset, &tst->sig_mask );

   block_all_host_signals(&saved_mask); // protect signal queue

   /* First look for any queued pending signals */
   sip = next_queued(tid, &pollset); /* this thread */

   if (sip == NULL)
      sip = next_queued(0, &pollset); /* process-wide */

   /* If there was nothing queued, ask the kernel for a pending signal */
   if (sip == NULL && VG_(sigtimedwait_zero)(&pollset, &si) > 0) {
      if (VG_(clo_trace_signals))
         VG_(dmsg)("poll_signals: got signal %d for thread %u exitreason %s\n",
                   si.si_signo, tid,
                   VG_(name_of_VgSchedReturnCode)(tst->exitreason));
      sip = &si;
   }

   if (sip != NULL) {
      /* OK, something to do; deliver it */
      if (VG_(clo_trace_signals))
         VG_(dmsg)("Polling found signal %d for tid %u exitreason %s\n",
                   sip->si_signo, tid,
                   VG_(name_of_VgSchedReturnCode)(tst->exitreason));
      if (!is_sig_ign(sip, tid))
         deliver_signal(tid, sip, NULL);
      else if (VG_(clo_trace_signals))
         VG_(dmsg)("   signal %d ignored\n", sip->si_signo);

      sip->si_signo = 0;  /* remove from signal queue, if that's
                             where it came from */
   }

   restore_all_host_signals(&saved_mask);
}
/* At startup, copy the process' real signal state to the SCSS.
   Whilst doing this, block all real signals.  Then calculate SKSS and
   set the kernel to that.  Also initialise DCSS. 
*/
void VG_(sigstartup_actions) ( void )
{
   Int i, ret, vKI_SIGRTMIN;
   vki_sigset_t saved_procmask;
   vki_sigaction_fromK_t sa;

   VG_(memset)(&scss, 0, sizeof(scss));
   VG_(memset)(&skss, 0, sizeof(skss));

#  if defined(VKI_SIGRTMIN)
   vKI_SIGRTMIN = VKI_SIGRTMIN;
#  else
   vKI_SIGRTMIN = 0; /* eg Darwin */
#  endif

   /* VG_(printf)("SIGSTARTUP\n"); */
   /* Block all signals.  saved_procmask remembers the previous mask,
      which the first thread inherits.
   */
   block_all_host_signals( &saved_procmask );

   /* Copy per-signal settings to SCSS. */
   for (i = 1; i <= _VKI_NSIG; i++) {
      /* Get the old host action */
      ret = VG_(sigaction)(i, NULL, &sa);

#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin) \
         || defined(VGP_nanomips_linux)
      /* apparently we may not even ask about the disposition of these
         signals, let alone change them */
      if (ret != 0 && (i == VKI_SIGKILL || i == VKI_SIGSTOP))
         continue;
#     endif

      if (ret != 0)
         break;

      /* Try setting it back to see if this signal is really
         available */
      if (vKI_SIGRTMIN > 0 /* it actually exists on this platform */
          && i >= vKI_SIGRTMIN) {
         vki_sigaction_toK_t tsa, sa2;

         tsa.ksa_handler = (void *)sync_signalhandler;
         tsa.sa_flags = VKI_SA_SIGINFO;
#        if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
            !defined(VGO_solaris)
         tsa.sa_restorer = 0;
#        endif
         VG_(sigfillset)(&tsa.sa_mask);

         /* try setting it to some arbitrary handler */
         if (VG_(sigaction)(i, &tsa, NULL) != 0) {
            /* failed - not really usable */
            break;
         }

         VG_(convert_sigaction_fromK_to_toK)( &sa, &sa2 );
         ret = VG_(sigaction)(i, &sa2, NULL);
         vg_assert(ret == 0);
      }

      VG_(max_signal) = i;

      if (VG_(clo_trace_signals) && VG_(clo_verbosity) > 2)
         VG_(printf)("snaffling handler 0x%lx for signal %d\n", 
                     (Addr)(sa.ksa_handler), i );

      scss.scss_per_sig[i].scss_handler  = sa.ksa_handler;
      scss.scss_per_sig[i].scss_flags    = sa.sa_flags;
      scss.scss_per_sig[i].scss_mask     = sa.sa_mask;

      scss.scss_per_sig[i].scss_restorer = NULL;
#     if !defined(VGO_darwin) && !defined(VGO_freebsd) && \
         !defined(VGO_solaris)
      scss.scss_per_sig[i].scss_restorer = sa.sa_restorer;
#     endif

      scss.scss_per_sig[i].scss_sa_tramp = NULL;
#     if defined(VGP_x86_darwin) || defined(VGP_amd64_darwin)
      scss.scss_per_sig[i].scss_sa_tramp = NULL;
      /*sa.sa_tramp;*/
      /* We can't know what it was, because Darwin's sys_sigaction
         doesn't tell us. */
#     endif
   }

   if (VG_(clo_trace_signals))
      VG_(dmsg)("Max kernel-supported signal is %d, VG_SIGVGKILL is %d\n",
                VG_(max_signal), VG_SIGVGKILL);

   /* Our private internal signals are treated as ignored */
   scss.scss_per_sig[VG_SIGVGKILL].scss_handler = VKI_SIG_IGN;
   scss.scss_per_sig[VG_SIGVGKILL].scss_flags   = VKI_SA_SIGINFO;
   VG_(sigfillset)(&scss.scss_per_sig[VG_SIGVGKILL].scss_mask);

   /* Copy the process' signal mask into the root thread. */
   vg_assert(VG_(threads)[1].status == VgTs_Init);
   for (i = 2; i < VG_N_THREADS; i++)
      vg_assert(VG_(threads)[i].status == VgTs_Empty);

   VG_(threads)[1].sig_mask = saved_procmask;
   VG_(threads)[1].tmp_sig_mask = saved_procmask;

   /* Calculate SKSS and apply it.  This also sets the initial kernel
      mask we need to run with. */
   handle_SCSS_change( True /* forced update */ );

   /* Leave with all signals still blocked; the thread scheduler loop
      will set the appropriate mask at the appropriate time. */
}
/*--------------------------------------------------------------------*/
/*--- end                                                          ---*/
/*--------------------------------------------------------------------*/