//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// \file
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
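///
/// For example (an illustrative sketch only; the exact IR the pass emits may
/// differ): for an addition "c = a + b" the shadow is propagated
/// conservatively as
///   shadow(c) = shadow(a) | shadow(b)
/// and before a conditional branch on "c" the pass inserts a check along the
/// lines of
///   if (shadow(c) != 0) call __msan_warning_noreturn()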
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
///
///                           Origin tracking.
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
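///
/// For instance (an illustrative sketch, not the exact IR): for "c = a + b"
/// the origin is propagated roughly as
///   origin(c) = shadow(b) != 0 ? origin(b) : origin(a)
/// i.e. a "select" picks the origin of an operand that is actually poisoned.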
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needless overwriting of the origin of the 4-byte region
/// on a short (i.e. 1-byte) clean store, and it is also good for performance.
///
///                            Atomic handling.
///
/// Ideally, every atomic store of application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, atomic store
/// of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
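///
/// A minimal sketch of the resulting ordering (illustrative only, not the
/// exact code the pass emits):
///   atomic store:  shadow[&x] = 0;          // clean shadow, stored first
///                  atomic_store(&x, v);     // application store
///   atomic load:   v = atomic_load(&x);     // application load
///                  s = shadow[&x];          // shadow loaded afterwards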
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
///
///                      Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It can be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics can only be visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size)
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
///
///                  KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. These functions check that the X-byte accesses are possible
///    and return the pointers to shadow and origin memory (see the
///    illustrative sketch after this list).
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
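///
/// For instance, a 4-byte KMSAN load from %p is instrumented roughly as
/// follows (an illustrative sketch; the exact IR may differ):
///   %pair   = call { i8*, i32* } @__msan_metadata_ptr_for_load_4(i8* %p)
///   %shadow = extractvalue { i8*, i32* } %pair, 0
///   %origin = extractvalue { i8*, i32* } %pair, 1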
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side wrt. possible false positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//===----------------------------------------------------------------------===//
#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/ADT/APInt.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DepthFirstIterator.h"
#include "llvm/ADT/SmallSet.h"
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringExtras.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/Argument.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/CallingConv.h"
#include "llvm/IR/Constant.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/InlineAsm.h"
#include "llvm/IR/InstVisitor.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/Value.h"
#include "llvm/IR/ValueMap.h"
#include "llvm/Pass.h"
#include "llvm/Support/AtomicOrdering.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/MathExtras.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Instrumentation.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Local.h"
#include "llvm/Transforms/Utils/ModuleUtils.h"
using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;
/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins("msan-track-origins",
       cl::desc("Track origins (allocation sites) of poisoned memory"),
       cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
       cl::desc("keep going after reporting a UMR"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClPoisonStack("msan-poison-stack",
       cl::desc("poison uninitialized stack variables"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall("msan-poison-stack-with-call",
       cl::desc("poison uninitialized stack variables with a call"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClPoisonStackPattern("msan-poison-stack-pattern",
       cl::desc("poison uninitialized stack variables with the given pattern"),
       cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
       cl::desc("poison undef temps"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmp("msan-handle-icmp",
       cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmpExact("msan-handle-icmp-exact",
       cl::desc("exact handling of relational integer ICmp"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));
// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// arbitrary memory.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
    cl::init(false));
// This flag controls whether we check the shadow of the address
// operand of load or store. Such bugs are very rare, since load from
// a garbage address typically results in SEGV, but they still happen
// (e.g. only lower bits of address are garbage, or the access happens
// early at program startup where malloc-ed memory is more likely to
// be zeroed). As of 2012-08-28 this flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress("msan-check-access-address",
       cl::desc("report accesses through a pointer which has poisoned shadow"),
       cl::Hidden, cl::init(true));
static cl::opt<bool> ClDumpStrictInstructions("msan-dump-strict-instructions",
       cl::desc("print out instructions with default strict semantics"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));
// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplained reason such cases were silently
// ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow("msan-check-constant-shadow",
       cl::desc("Insert checks for constant shadow values"),
       cl::Hidden, cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool> ClWithComdat("msan-with-comdat",
       cl::desc("Place MSan constructors in comdat sections"),
       cl::Hidden, cl::init(false));
// These options allow overriding the memory map parameters with custom values.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
       cl::desc("Define custom MSan AndMask"),
       cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
       cl::desc("Define custom MSan XorMask"),
       cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
       cl::desc("Define custom MSan ShadowBase"),
       cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
       cl::desc("Define custom MSan OriginBase"),
       cl::Hidden, cl::init(0));
static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";
namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
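//
// Worked example (illustrative only, using the default Linux/x86_64
// parameters defined below: AndMask = 0, XorMask = 0x500000000000,
// ShadowBase = 0, OriginBase = 0x100000000000):
//   Addr   = 0x700000001000
//   Offset = Addr ^ 0x500000000000 = 0x200000001000
//   Shadow = 0x200000001000
//   Origin = 0x100000000000 + 0x200000001000 = 0x300000001000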
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace
static const MemoryMapParams Linux_I386_MemoryMapParams = {
  0x000080000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x000040000000,  // OriginBase
};

static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
  0x400000000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x200000000000,  // OriginBase
#else
  0,               // AndMask (not used)
  0x500000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x100000000000,  // OriginBase
#endif
};

static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x008000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x002000000000,  // OriginBase
};

static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
  0xE00000000000,  // AndMask
  0x100000000000,  // XorMask
  0x080000000000,  // ShadowBase
  0x1C0000000000,  // OriginBase
};

static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x06000000000,   // XorMask
  0,               // ShadowBase (not used)
  0x01000000000,   // OriginBase
};

static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
  0x000180000000,  // AndMask
  0x000040000000,  // XorMask
  0x000020000000,  // ShadowBase
  0x000700000000,  // OriginBase
};

static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
  0xc00000000000,  // AndMask
  0x200000000000,  // XorMask
  0x100000000000,  // ShadowBase
  0x380000000000,  // OriginBase
};

static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
  0,               // AndMask (not used)
  0x500000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x100000000000,  // OriginBase
};
static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
  &Linux_I386_MemoryMapParams,
  &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
  nullptr,
  &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
  nullptr,
  &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
  nullptr,
  &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
  &FreeBSD_I386_MemoryMapParams,
  &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
  nullptr,
  &NetBSD_X86_64_MemoryMapParams,
};
namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options) {
    this->CompileKernel =
        ClEnableKmsan.getNumOccurrences() > 0 ? ClEnableKmsan : Options.Kernel;
    if (ClTrackOrigins.getNumOccurrences() > 0)
      this->TrackOrigins = ClTrackOrigins;
    else
      this->TrackOrigins = this->CompileKernel ? 2 : Options.TrackOrigins;
    this->Recover = ClKeepGoing.getNumOccurrences() > 0
                        ? ClKeepGoing
                        : (this->CompileKernel | Options.Recover);
    initializeModule(M);
  }
  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;
  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;

  // XxxTLS variables represent the per-thread state in MSan and per-task state
  // in KMSAN.
  // For the userspace these point to thread-local globals. In the kernel land
  // they point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local shadow storage for va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  Value *OriginTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;
  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;

  Function *MsanCtorFunction;
};
/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {}
  StringRef getPassName() const override { return "MemorySanitizerLegacyPass"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI());
  }

  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

} // end anonymous namespace
PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

char MemorySanitizerLegacyPass::ID = 0;
INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}
/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. Runtime uses first 4 bytes of the string to store the
/// frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}
/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;
  // OriginTLS is unused in the kernel.
  OriginTLS = nullptr;

  // __msan_warning() in the kernel takes an origin.
  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // kernel.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}
static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}
/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning"
                                    : "__msan_warning_noreturn";
  WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());
  OriginTLS = getOrInsertGlobal(M, "__msan_origin_tls", IRB.getInt32Ty());
  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt8PtrTy(), IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn =
      M.getOrInsertFunction("__msan_poison_stack", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy);
}
/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn = M.getOrInsertFunction(
      "__msan_memmove", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn = M.getOrInsertFunction(
      "__msan_memcpy", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IntptrTy);
  MemsetFn = M.getOrInsertFunction(
      "__msan_memset", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt32Ty(),
      IntptrTy);
  // We insert an empty inline asm after __msan_report* to avoid callback merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel) {
    createKernelApi(M);
  } else {
    createUserspaceApi(M);
  }
  CallbacksInitialized = true;
}
FunctionCallee MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore,
                                                             int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}
/// Module-level initialization.
///
/// Inserts a call to __msan_init to the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    std::tie(MsanCtorFunction, std::ignore) =
        getOrCreateSanitizerCtorAndInitFunctions(
            M, kMsanModuleCtorName, kMsanInitName,
            /*InitArgTypes=*/{},
            /*InitArgs=*/{},
            // This callback is invoked when the functions are created the first
            // time. Hook them into the global ctors list in that case:
            [&](Function *Ctor, FunctionCallee) {
              if (!ClWithComdat) {
                appendToGlobalCtors(M, Ctor, 0);
                return;
              }
              Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
              Ctor->setComdat(MsanCtorComdat);
              appendToGlobalCtors(M, Ctor, 0, Ctor);
            });

    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}
bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  MSan.emplace(M, Options);
  return true;
}
namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace
static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8) return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
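// For example (sizes in bits): TypeSizeToSizeIndex(1) == 0,
// TypeSizeToSizeIndex(8) == 0, TypeSizeToSizeIndex(16) == 1,
// TypeSizeToSizeIndex(32) == 2, TypeSizeToSizeIndex(64) == 3,
// matching the log2(AccessSize) indexing of MaybeWarningFn[] above.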
namespace {

/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
struct MemorySanitizerVisitor : public InstVisitor<MemorySanitizerVisitor> {
  Function &F;
  MemorySanitizer &MS;
  SmallVector<PHINode *, 16> ShadowPHINodes, OriginPHINodes;
  ValueMap<Value *, Value *> ShadowMap, OriginMap;
  std::unique_ptr<VarArgHelper> VAHelper;
  const TargetLibraryInfo *TLI;
  BasicBlock *ActualFnStart;

  // The following flags disable parts of MSan instrumentation based on
  // blacklist contents and command-line options.
  bool InsertChecks;
  bool PropagateShadow;
  bool PoisonStack;
  bool PoisonUndef;
  bool CheckReturnValue;

  struct ShadowOriginAndInsertPoint {
    Value *Shadow;
    Value *Origin;
    Instruction *OrigIns;

    ShadowOriginAndInsertPoint(Value *S, Value *O, Instruction *I)
        : Shadow(S), Origin(O), OrigIns(I) {}
  };
  SmallVector<ShadowOriginAndInsertPoint, 16> InstrumentationList;
  bool InstrumentLifetimeStart = ClHandleLifetimeIntrinsics;
  SmallSet<AllocaInst *, 16> AllocaSet;
  SmallVector<std::pair<IntrinsicInst *, AllocaInst *>, 16> LifetimeStartList;
  SmallVector<StoreInst *, 16> StoreList;

  MemorySanitizerVisitor(Function &F, MemorySanitizer &MS,
                         const TargetLibraryInfo &TLI)
      : F(F), MS(MS), VAHelper(CreateVarArgHelper(F, MS, *this)), TLI(&TLI) {
    bool SanitizeFunction = F.hasFnAttribute(Attribute::SanitizeMemory);
    InsertChecks = SanitizeFunction;
    PropagateShadow = SanitizeFunction;
    PoisonStack = SanitizeFunction && ClPoisonStack;
    PoisonUndef = SanitizeFunction && ClPoisonUndef;
    // FIXME: Consider using SpecialCaseList to specify a list of functions that
    // must always return fully initialized values. For now, we hardcode "main".
    CheckReturnValue = SanitizeFunction && (F.getName() == "main");

    MS.initializeCallbacks(*F.getParent());
    if (MS.CompileKernel)
      ActualFnStart = insertKmsanPrologue(F);
    else
      ActualFnStart = &F.getEntryBlock();

    LLVM_DEBUG(if (!InsertChecks) dbgs()
               << "MemorySanitizer is not inserting checks into '"
               << F.getName() << "'\n");
  }
  Value *updateOrigin(Value *V, IRBuilder<> &IRB) {
    if (MS.TrackOrigins <= 1) return V;
    return IRB.CreateCall(MS.MsanChainOriginFn, V);
  }

  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize) return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
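  // originToIntptr(): with a 64-bit IntptrTy and a 32-bit origin 0xAABBCCDD
  // the result is 0xAABBCCDDAABBCCDD, i.e. two copies of the origin packed
  // into one intptr-sized value (illustrative example), which paintOrigin()
  // below can write with a single wide store.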
  /// Fill memory range with the given origin value.
  void paintOrigin(IRBuilder<> &IRB, Value *Origin, Value *OriginPtr,
                   unsigned Size, unsigned Alignment) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrAlignment = DL.getABITypeAlignment(MS.IntptrTy);
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    assert(IntptrAlignment >= kMinOriginAlignment);
    assert(IntptrSize >= kOriginSize);

    unsigned Ofs = 0;
    unsigned CurrentAlignment = Alignment;
    if (Alignment >= IntptrAlignment && IntptrSize > kOriginSize) {
      Value *IntptrOrigin = originToIntptr(IRB, Origin);
      Value *IntptrOriginPtr =
          IRB.CreatePointerCast(OriginPtr, PointerType::get(MS.IntptrTy, 0));
      for (unsigned i = 0; i < Size / IntptrSize; ++i) {
        Value *Ptr = i ? IRB.CreateConstGEP1_32(MS.IntptrTy, IntptrOriginPtr, i)
                       : IntptrOriginPtr;
        IRB.CreateAlignedStore(IntptrOrigin, Ptr, CurrentAlignment);
        Ofs += IntptrSize / kOriginSize;
        CurrentAlignment = IntptrAlignment;
      }
    }

    for (unsigned i = Ofs; i < (Size + kOriginSize - 1) / kOriginSize; ++i) {
      Value *GEP =
          i ? IRB.CreateConstGEP1_32(MS.OriginTy, OriginPtr, i) : OriginPtr;
      IRB.CreateAlignedStore(Origin, GEP, CurrentAlignment);
      CurrentAlignment = kMinOriginAlignment;
    }
  }
  void storeOrigin(IRBuilder<> &IRB, Value *Addr, Value *Shadow, Value *Origin,
                   Value *OriginPtr, unsigned Alignment, bool AsCall) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
    unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
    if (Shadow->getType()->isAggregateType()) {
      paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                  OriginAlignment);
    } else {
      Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
      Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
      if (ConstantShadow) {
        if (ClCheckConstantShadow && !ConstantShadow->isZeroValue())
          paintOrigin(IRB, updateOrigin(Origin, IRB), OriginPtr, StoreSize,
                      OriginAlignment);
        return;
      }

      unsigned TypeSizeInBits =
          DL.getTypeSizeInBits(ConvertedShadow->getType());
      unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
      if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
        FunctionCallee Fn = MS.MaybeStoreOriginFn[SizeIndex];
        Value *ConvertedShadow2 = IRB.CreateZExt(
            ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
        IRB.CreateCall(Fn, {ConvertedShadow2,
                            IRB.CreatePointerCast(Addr, IRB.getInt8PtrTy()),
                            Origin});
      } else {
        Value *Cmp = IRB.CreateICmpNE(
            ConvertedShadow, getCleanShadow(ConvertedShadow), "_mscmp");
        Instruction *CheckTerm = SplitBlockAndInsertIfThen(
            Cmp, &*IRB.GetInsertPoint(), false, MS.OriginStoreWeights);
        IRBuilder<> IRBNew(CheckTerm);
        paintOrigin(IRBNew, updateOrigin(Origin, IRBNew), OriginPtr, StoreSize,
                    OriginAlignment);
      }
    }
  }

  void materializeStores(bool InstrumentWithCalls) {
    for (StoreInst *SI : StoreList) {
      IRBuilder<> IRB(SI);
      Value *Val = SI->getValueOperand();
      Value *Addr = SI->getPointerOperand();
      Value *Shadow = SI->isAtomic() ? getCleanShadow(Val) : getShadow(Val);
      Value *ShadowPtr, *OriginPtr;
      Type *ShadowTy = Shadow->getType();
      unsigned Alignment = SI->getAlignment();
      unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ true);

      StoreInst *NewSI = IRB.CreateAlignedStore(Shadow, ShadowPtr, Alignment);
      LLVM_DEBUG(dbgs() << "  STORE: " << *NewSI << "\n");
      (void)NewSI;

      if (SI->isAtomic())
        SI->setOrdering(addReleaseOrdering(SI->getOrdering()));

      if (MS.TrackOrigins && !SI->isAtomic())
        storeOrigin(IRB, Addr, Shadow, getOrigin(Val), OriginPtr,
                    OriginAlignment, InstrumentWithCalls);
    }
  }
  /// Helper function to insert a warning at IRB's current insert point.
  void insertWarningFn(IRBuilder<> &IRB, Value *Origin) {
    if (!Origin)
      Origin = (Value *)IRB.getInt32(0);
    if (MS.CompileKernel) {
      IRB.CreateCall(MS.WarningFn, Origin);
    } else {
      if (MS.TrackOrigins) {
        IRB.CreateStore(Origin, MS.OriginTLS);
      }
      IRB.CreateCall(MS.WarningFn, {});
    }
    IRB.CreateCall(MS.EmptyAsm, {});
    // FIXME: Insert UnreachableInst if !MS.Recover?
    // This may invalidate some of the following checks and needs to be done
    // at the very end.
  }

  void materializeOneCheck(Instruction *OrigIns, Value *Shadow, Value *Origin,
                           bool AsCall) {
    IRBuilder<> IRB(OrigIns);
    LLVM_DEBUG(dbgs() << "  SHAD0 : " << *Shadow << "\n");
    Value *ConvertedShadow = convertToShadowTyNoVec(Shadow, IRB);
    LLVM_DEBUG(dbgs() << "  SHAD1 : " << *ConvertedShadow << "\n");

    Constant *ConstantShadow = dyn_cast_or_null<Constant>(ConvertedShadow);
    if (ConstantShadow) {
      if (ClCheckConstantShadow && !ConstantShadow->isZeroValue()) {
        insertWarningFn(IRB, Origin);
      }
      return;
    }

    const DataLayout &DL = OrigIns->getModule()->getDataLayout();

    unsigned TypeSizeInBits = DL.getTypeSizeInBits(ConvertedShadow->getType());
    unsigned SizeIndex = TypeSizeToSizeIndex(TypeSizeInBits);
    if (AsCall && SizeIndex < kNumberOfAccessSizes && !MS.CompileKernel) {
      FunctionCallee Fn = MS.MaybeWarningFn[SizeIndex];
      Value *ConvertedShadow2 =
          IRB.CreateZExt(ConvertedShadow, IRB.getIntNTy(8 * (1 << SizeIndex)));
      IRB.CreateCall(Fn, {ConvertedShadow2, MS.TrackOrigins && Origin
                                                ? Origin
                                                : (Value *)IRB.getInt32(0)});
    } else {
      Value *Cmp = IRB.CreateICmpNE(ConvertedShadow,
                                    getCleanShadow(ConvertedShadow), "_mscmp");
      Instruction *CheckTerm = SplitBlockAndInsertIfThen(
          Cmp, OrigIns,
          /* Unreachable */ !MS.Recover, MS.ColdCallWeights);

      IRB.SetInsertPoint(CheckTerm);
      insertWarningFn(IRB, Origin);
      LLVM_DEBUG(dbgs() << "  CHECK: " << *Cmp << "\n");
    }
  }

  void materializeChecks(bool InstrumentWithCalls) {
    for (const auto &ShadowData : InstrumentationList) {
      Instruction *OrigIns = ShadowData.OrigIns;
      Value *Shadow = ShadowData.Shadow;
      Value *Origin = ShadowData.Origin;
      materializeOneCheck(OrigIns, Shadow, Origin, InstrumentWithCalls);
    }
    LLVM_DEBUG(dbgs() << "DONE:\n" << F);
  }
  BasicBlock *insertKmsanPrologue(Function &F) {
    BasicBlock *ret =
        SplitBlock(&F.getEntryBlock(), F.getEntryBlock().getFirstNonPHI());
    IRBuilder<> IRB(F.getEntryBlock().getFirstNonPHI());
    Value *ContextState = IRB.CreateCall(MS.MsanGetContextStateFn, {});
    Constant *Zero = IRB.getInt32(0);
    MS.ParamTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(0)}, "param_shadow");
    MS.RetvalTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                 {Zero, IRB.getInt32(1)}, "retval_shadow");
    MS.VAArgTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                {Zero, IRB.getInt32(2)}, "va_arg_shadow");
    MS.VAArgOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(3)}, "va_arg_origin");
    MS.VAArgOverflowSizeTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(4)}, "va_arg_overflow_size");
    MS.ParamOriginTLS = IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                                      {Zero, IRB.getInt32(5)}, "param_origin");
    MS.RetvalOriginTLS =
        IRB.CreateGEP(MS.MsanContextStateTy, ContextState,
                      {Zero, IRB.getInt32(6)}, "retval_origin");
    return ret;
  }

  /// Add MemorySanitizer instrumentation to a function.
  bool runOnFunction() {
    // In the presence of unreachable blocks, we may see Phi nodes with
    // incoming nodes from such blocks. Since InstVisitor skips unreachable
    // blocks, such nodes will not have any shadow value associated with them.
    // It's easier to remove unreachable blocks than deal with missing shadow.
    removeUnreachableBlocks(F);

    // Iterate all BBs in depth-first order and create shadow instructions
    // for all instructions (where applicable).
    // For PHI nodes we create dummy shadow PHIs which will be finalized later.
    for (BasicBlock *BB : depth_first(ActualFnStart))
      visit(*BB);

    // Finalize PHI nodes.
    for (PHINode *PN : ShadowPHINodes) {
      PHINode *PNS = cast<PHINode>(getShadow(PN));
      PHINode *PNO = MS.TrackOrigins ? cast<PHINode>(getOrigin(PN)) : nullptr;
      size_t NumValues = PN->getNumIncomingValues();
      for (size_t v = 0; v < NumValues; v++) {
        PNS->addIncoming(getShadow(PN, v), PN->getIncomingBlock(v));
        if (PNO) PNO->addIncoming(getOrigin(PN, v), PN->getIncomingBlock(v));
      }
    }

    VAHelper->finalizeInstrumentation();

    // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
    // instrumenting only allocas.
    if (InstrumentLifetimeStart) {
      for (auto Item : LifetimeStartList) {
        instrumentAlloca(*Item.second, Item.first);
        AllocaSet.erase(Item.second);
      }
    }
    // Poison the allocas for which we didn't instrument the corresponding
    // lifetime intrinsics.
    for (AllocaInst *AI : AllocaSet)
      instrumentAlloca(*AI);

    bool InstrumentWithCalls = ClInstrumentationWithCallThreshold >= 0 &&
                               InstrumentationList.size() + StoreList.size() >
                                   (unsigned)ClInstrumentationWithCallThreshold;

    // Insert shadow value checks.
    materializeChecks(InstrumentWithCalls);

    // Delayed instrumentation of StoreInst.
    // This may not add new address checks.
    materializeStores(InstrumentWithCalls);

    return true;
  }
  /// Compute the shadow type that corresponds to a given Value.
  Type *getShadowTy(Value *V) {
    return getShadowTy(V->getType());
  }

  /// Compute the shadow type that corresponds to a given Type.
  Type *getShadowTy(Type *OrigTy) {
    if (!OrigTy->isSized()) {
      return nullptr;
    }
    // For integer type, shadow is the same as the original type.
    // This may return weird-sized types like i1.
    if (IntegerType *IT = dyn_cast<IntegerType>(OrigTy))
      return IT;
    const DataLayout &DL = F.getParent()->getDataLayout();
    if (VectorType *VT = dyn_cast<VectorType>(OrigTy)) {
      uint32_t EltSize = DL.getTypeSizeInBits(VT->getElementType());
      return VectorType::get(IntegerType::get(*MS.C, EltSize),
                             VT->getNumElements());
    }
    if (ArrayType *AT = dyn_cast<ArrayType>(OrigTy)) {
      return ArrayType::get(getShadowTy(AT->getElementType()),
                            AT->getNumElements());
    }
    if (StructType *ST = dyn_cast<StructType>(OrigTy)) {
      SmallVector<Type *, 4> Elements;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Elements.push_back(getShadowTy(ST->getElementType(i)));
      StructType *Res = StructType::get(*MS.C, Elements, ST->isPacked());
      LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST << " ===> " << *Res << "\n");
      return Res;
    }
    uint32_t TypeSize = DL.getTypeSizeInBits(OrigTy);
    return IntegerType::get(*MS.C, TypeSize);
  }
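  // Examples of the mapping performed by getShadowTy() (illustrative):
  //   i32            -> i32
  //   <4 x float>    -> <4 x i32>
  //   [8 x i8]       -> [8 x i8]
  //   { i64, float } -> { i64, i32 }
  //   double         -> i64   (falls through to the generic integer case)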
  /// Flatten a vector type.
  Type *getShadowTyNoVec(Type *ty) {
    if (VectorType *vt = dyn_cast<VectorType>(ty))
      return IntegerType::get(*MS.C, vt->getBitWidth());
    return ty;
  }

  /// Convert a shadow value to its flattened variant.
  Value *convertToShadowTyNoVec(Value *V, IRBuilder<> &IRB) {
    Type *Ty = V->getType();
    Type *NoVecTy = getShadowTyNoVec(Ty);
    if (Ty == NoVecTy) return V;
    return IRB.CreateBitCast(V, NoVecTy);
  }

  /// Compute the integer shadow offset that corresponds to a given
  /// application address.
  ///
  /// Offset = (Addr & ~AndMask) ^ XorMask
  Value *getShadowPtrOffset(Value *Addr, IRBuilder<> &IRB) {
    Value *OffsetLong = IRB.CreatePointerCast(Addr, MS.IntptrTy);

    uint64_t AndMask = MS.MapParams->AndMask;
    if (AndMask)
      OffsetLong =
          IRB.CreateAnd(OffsetLong, ConstantInt::get(MS.IntptrTy, ~AndMask));

    uint64_t XorMask = MS.MapParams->XorMask;
    if (XorMask)
      OffsetLong =
          IRB.CreateXor(OffsetLong, ConstantInt::get(MS.IntptrTy, XorMask));
    return OffsetLong;
  }
  /// Compute the shadow and origin addresses corresponding to a given
  /// application address.
  ///
  /// Shadow = ShadowBase + Offset
  /// Origin = (OriginBase + Offset) & ~3ULL
  std::pair<Value *, Value *> getShadowOriginPtrUserspace(Value *Addr,
                                                          IRBuilder<> &IRB,
                                                          Type *ShadowTy,
                                                          unsigned Alignment) {
    Value *ShadowOffset = getShadowPtrOffset(Addr, IRB);
    Value *ShadowLong = ShadowOffset;
    uint64_t ShadowBase = MS.MapParams->ShadowBase;
    if (ShadowBase != 0) {
      ShadowLong =
          IRB.CreateAdd(ShadowLong,
                        ConstantInt::get(MS.IntptrTy, ShadowBase));
    }
    Value *ShadowPtr =
        IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = nullptr;
    if (MS.TrackOrigins) {
      Value *OriginLong = ShadowOffset;
      uint64_t OriginBase = MS.MapParams->OriginBase;
      if (OriginBase != 0)
        OriginLong = IRB.CreateAdd(OriginLong,
                                   ConstantInt::get(MS.IntptrTy, OriginBase));
      if (Alignment < kMinOriginAlignment) {
        uint64_t Mask = kMinOriginAlignment - 1;
        OriginLong =
            IRB.CreateAnd(OriginLong, ConstantInt::get(MS.IntptrTy, ~Mask));
      }
      OriginPtr =
          IRB.CreateIntToPtr(OriginLong, PointerType::get(MS.OriginTy, 0));
    }
    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *>
  getShadowOriginPtrKernel(Value *Addr, IRBuilder<> &IRB, Type *ShadowTy,
                           unsigned Alignment, bool isStore) {
    Value *ShadowOriginPtrs;
    const DataLayout &DL = F.getParent()->getDataLayout();
    int Size = DL.getTypeStoreSize(ShadowTy);

    FunctionCallee Getter = MS.getKmsanShadowOriginAccessFn(isStore, Size);
    Value *AddrCast =
        IRB.CreatePointerCast(Addr, PointerType::get(IRB.getInt8Ty(), 0));
    if (Getter) {
      ShadowOriginPtrs = IRB.CreateCall(Getter, AddrCast);
    } else {
      Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
      ShadowOriginPtrs = IRB.CreateCall(isStore ? MS.MsanMetadataPtrForStoreN
                                                : MS.MsanMetadataPtrForLoadN,
                                        {AddrCast, SizeVal});
    }
    Value *ShadowPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 0);
    ShadowPtr = IRB.CreatePointerCast(ShadowPtr, PointerType::get(ShadowTy, 0));
    Value *OriginPtr = IRB.CreateExtractValue(ShadowOriginPtrs, 1);

    return std::make_pair(ShadowPtr, OriginPtr);
  }

  std::pair<Value *, Value *> getShadowOriginPtr(Value *Addr, IRBuilder<> &IRB,
                                                 Type *ShadowTy,
                                                 unsigned Alignment,
                                                 bool isStore) {
    std::pair<Value *, Value *> ret;
    if (MS.CompileKernel)
      ret = getShadowOriginPtrKernel(Addr, IRB, ShadowTy, Alignment, isStore);
    else
      ret = getShadowOriginPtrUserspace(Addr, IRB, ShadowTy, Alignment);
    return ret;
  }
  /// Compute the shadow address for a given function argument.
  ///
  /// Shadow = ParamTLS+ArgOffset.
  Value *getShadowPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.ParamTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(getShadowTy(A), 0),
                              "_msarg");
  }

  /// Compute the origin address for a given function argument.
  Value *getOriginPtrForArgument(Value *A, IRBuilder<> &IRB,
                                 int ArgOffset) {
    if (!MS.TrackOrigins)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.ParamOriginTLS, MS.IntptrTy);
    if (ArgOffset)
      Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_o");
  }

  /// Compute the shadow address for a retval.
  Value *getShadowPtrForRetval(Value *A, IRBuilder<> &IRB) {
    return IRB.CreatePointerCast(MS.RetvalTLS,
                                 PointerType::get(getShadowTy(A), 0),
                                 "_msret");
  }

  /// Compute the origin address for a retval.
  Value *getOriginPtrForRetval(IRBuilder<> &IRB) {
    // We keep a single origin for the entire retval. Might be too optimistic.
    return MS.RetvalOriginTLS;
  }
  /// Set SV to be the shadow value for V.
  void setShadow(Value *V, Value *SV) {
    assert(!ShadowMap.count(V) && "Values may only have one shadow");
    ShadowMap[V] = PropagateShadow ? SV : getCleanShadow(V);
  }

  /// Set Origin to be the origin value for V.
  void setOrigin(Value *V, Value *Origin) {
    if (!MS.TrackOrigins) return;
    assert(!OriginMap.count(V) && "Values may only have one origin");
    LLVM_DEBUG(dbgs() << "ORIGIN: " << *V << "  ==> " << *Origin << "\n");
    OriginMap[V] = Origin;
  }

  Constant *getCleanShadow(Type *OrigTy) {
    Type *ShadowTy = getShadowTy(OrigTy);
    if (!ShadowTy)
      return nullptr;
    return Constant::getNullValue(ShadowTy);
  }

  /// Create a clean shadow value for a given value.
  ///
  /// Clean shadow (all zeroes) means all bits of the value are defined
  /// (initialized).
  Constant *getCleanShadow(Value *V) {
    return getCleanShadow(V->getType());
  }

  /// Create a dirty shadow of a given shadow type.
  Constant *getPoisonedShadow(Type *ShadowTy) {
    assert(ShadowTy);
    if (isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy))
      return Constant::getAllOnesValue(ShadowTy);
    if (ArrayType *AT = dyn_cast<ArrayType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals(AT->getNumElements(),
                                      getPoisonedShadow(AT->getElementType()));
      return ConstantArray::get(AT, Vals);
    }
    if (StructType *ST = dyn_cast<StructType>(ShadowTy)) {
      SmallVector<Constant *, 4> Vals;
      for (unsigned i = 0, n = ST->getNumElements(); i < n; i++)
        Vals.push_back(getPoisonedShadow(ST->getElementType(i)));
      return ConstantStruct::get(ST, Vals);
    }
    llvm_unreachable("Unexpected shadow type");
  }

  /// Create a dirty shadow for a given value.
  Constant *getPoisonedShadow(Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (!ShadowTy)
      return nullptr;
    return getPoisonedShadow(ShadowTy);
  }

  /// Create a clean (zero) origin.
  Value *getCleanOrigin() {
    return Constant::getNullValue(MS.OriginTy);
  }
  /// Get the shadow value for a given Value.
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
  Value *getShadow(Value *V) {
    if (!PropagateShadow) return getCleanShadow(V);
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanShadow(V);
      // For instructions the shadow is already stored in the map.
      Value *Shadow = ShadowMap[V];
      if (!Shadow) {
        LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
        (void)I;
        assert(Shadow && "No shadow for a value");
      }
      return Shadow;
    }
    if (UndefValue *U = dyn_cast<UndefValue>(V)) {
      Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
      LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
      (void)U;
      return AllOnes;
    }
    if (Argument *A = dyn_cast<Argument>(V)) {
      // For arguments we compute the shadow on demand and store it in the map.
      Value **ShadowPtr = &ShadowMap[V];
      if (*ShadowPtr)
        return *ShadowPtr;
      Function *F = A->getParent();
      IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
      unsigned ArgOffset = 0;
      const DataLayout &DL = F->getParent()->getDataLayout();
      for (auto &FArg : F->args()) {
        if (!FArg.getType()->isSized()) {
          LLVM_DEBUG(dbgs() << "Arg is not sized\n");
          continue;
        }
        unsigned Size =
            FArg.hasByValAttr()
                ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
                : DL.getTypeAllocSize(FArg.getType());
        if (A == &FArg) {
          bool Overflow = ArgOffset + Size > kParamTLSSize;
          Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
          if (FArg.hasByValAttr()) {
            // ByVal pointer itself has clean shadow. We copy the actual
            // argument shadow to the underlying memory.
            // Figure out maximal valid memcpy alignment.
            unsigned ArgAlign = FArg.getParamAlignment();
            if (ArgAlign == 0) {
              Type *EltType = A->getType()->getPointerElementType();
              ArgAlign = DL.getABITypeAlignment(EltType);
            }
            Value *CpShadowPtr =
                getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                   /*isStore*/ true)
                    .first;
            // TODO(glider): need to copy origins.
            if (Overflow) {
              // ParamTLS overflow.
              EntryIRB.CreateMemSet(
                  CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                  Size, ArgAlign);
            } else {
              unsigned CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
              Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                                 CopyAlign, Size);
              LLVM_DEBUG(dbgs() << "  ByValCpy: " << *Cpy << "\n");
              (void)Cpy;
            }
            *ShadowPtr = getCleanShadow(V);
          } else {
            if (Overflow) {
              // ParamTLS overflow.
              *ShadowPtr = getCleanShadow(V);
            } else {
              *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base,
                                                      kShadowTLSAlignment);
            }
          }
          LLVM_DEBUG(dbgs()
                     << "  ARG:    " << FArg << " ==> " << **ShadowPtr << "\n");
          if (MS.TrackOrigins && !Overflow) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr));
          } else {
            setOrigin(A, getCleanOrigin());
          }
        }
        ArgOffset += alignTo(Size, kShadowTLSAlignment);
      }
      assert(*ShadowPtr && "Could not find shadow for an argument");
      return *ShadowPtr;
    }
    // For everything else the shadow is zero.
    return getCleanShadow(V);
  }
  /// Get the shadow for i-th argument of the instruction I.
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }
  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }

  /// Get the origin for i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }
  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning in runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) &&
           "Can only insert checks for integer and vector shadow types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning in runtime if the value is not fully defined.
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }
  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
    case AtomicOrdering::NotAtomic:
      return AtomicOrdering::NotAtomic;
    case AtomicOrdering::Unordered:
    case AtomicOrdering::Monotonic:
    case AtomicOrdering::Release:
      return AtomicOrdering::Release;
    case AtomicOrdering::Acquire:
    case AtomicOrdering::AcquireRelease:
      return AtomicOrdering::AcquireRelease;
    case AtomicOrdering::SequentiallyConsistent:
      return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
    case AtomicOrdering::NotAtomic:
      return AtomicOrdering::NotAtomic;
    case AtomicOrdering::Unordered:
    case AtomicOrdering::Monotonic:
    case AtomicOrdering::Acquire:
      return AtomicOrdering::Acquire;
    case AtomicOrdering::Release:
    case AtomicOrdering::AcquireRelease:
      return AtomicOrdering::AcquireRelease;
    case AtomicOrdering::SequentiallyConsistent:
      return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }
  // ------------------- Visitors.
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (!I.getMetadata("nosanitize"))
      InstVisitor<MemorySanitizerVisitor>::visit(I);
  }
  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = I.getAlignment();
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(
            &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }
  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);
  }
  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(),
                                          /*Alignment*/ 1, /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we can not
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(I.getOperand(1), &I);

    IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }
  // Vector manipulation.
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed) between
    // a musttail call and a ret, don't instrument. New instructions are not
    // allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst &I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst &I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst &I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst &I) { handleShadowOr(I); }
  /// Propagate shadow for bitwise AND.
  ///
  /// This code is exact, i.e. if, for example, a bit in the left argument
  /// is defined and 0, then neither the value nor the definedness of the
  /// corresponding bit in B affects the resulting shadow.
  void visitAnd(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    //  "And" of 0 and a poisoned value results in unpoisoned value.
    //  1&1 => 1;  0&1 => 0;  p&1 => p;
    //  1&0 => 0;  0&0 => 0;  p&0 => 0;
    //  1&p => p;  0&p => 0;  p&p => p;
    //  S = (S1 & S2) | (V1 & S2) | (S1 & V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }
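  // Worked example of the AND rule above, on a single 4-bit lane: if
  // V1 = 0b1100 is fully initialized (S1 = 0b0000) and V2 is fully
  // uninitialized (S2 = 0b1111), then S1S2 = 0, V1S2 = 0b1100, S1V2 = 0,
  // so the result shadow is 0b1100: the two low bits are ANDed with known
  // zeroes and are therefore defined regardless of V2.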
  void visitOr(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    //  "Or" of 1 and a poisoned value results in unpoisoned value.
    //  1|1 => 1;  0|1 => 1;  p|1 => 1;
    //  1|0 => 1;  0|0 => 0;  p|0 => p;
    //  1|p => 1;  0|p => p;  p|p => p;
    //  S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = IRB.CreateNot(I.getOperand(0));
    Value *V2 = IRB.CreateNot(I.getOperand(1));
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }
  /// Default propagation of shadow and/or origin.
  ///
  /// This class implements the general case of shadow propagation, used in all
  /// cases where we don't know and/or don't care about what the operation
  /// actually does. It converts all input shadow values to a common type
  /// (extending or truncating as necessary), and bitwise OR's them.
  ///
  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
  /// fully initialized), and less prone to false positives.
  ///
  /// This class also implements the general case of origin propagation. For a
  /// Nary operation, result origin is set to the origin of an argument that is
  /// not entirely initialized. If there is more than one such argument, the
  /// rightmost of them is picked. It does not matter which one is picked if all
  /// arguments are initialized.
  template <bool CombineShadow>
  class Combiner {
    Value *Shadow = nullptr;
    Value *Origin = nullptr;
    IRBuilder<> &IRB;
    MemorySanitizerVisitor *MSV;

  public:
    Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
        : IRB(IRB), MSV(MSV) {}

    /// Add a pair of shadow and origin values to the mix.
    Combiner &Add(Value *OpShadow, Value *OpOrigin) {
      if (CombineShadow) {
        assert(OpShadow);
        if (!Shadow)
          Shadow = OpShadow;
        else {
          OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
          Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
        }
      }

      if (MSV->MS.TrackOrigins) {
        assert(OpOrigin);
        if (!Origin) {
          Origin = OpOrigin;
        } else {
          Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
          // No point in adding something that might result in 0 origin value.
          if (!ConstOrigin || !ConstOrigin->isNullValue()) {
            Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB);
            Value *Cond =
                IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
            Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
          }
        }
      }
      return *this;
    }

    /// Add an application value to the mix.
    Combiner &Add(Value *V) {
      Value *OpShadow = MSV->getShadow(V);
      Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
      return Add(OpShadow, OpOrigin);
    }

    /// Set the current combined values as the given instruction's shadow
    /// and origin.
    void Done(Instruction *I) {
      if (CombineShadow) {
        assert(Shadow);
        Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
        MSV->setShadow(I, Shadow);
      }
      if (MSV->MS.TrackOrigins) {
        assert(Origin);
        MSV->setOrigin(I, Origin);
      }
    }
  };

  using ShadowAndOriginCombiner = Combiner<true>;
  using OriginCombiner = Combiner<false>;
  /// Propagate origin for arbitrary operation.
  void setOriginForNaryOp(Instruction &I) {
    if (!MS.TrackOrigins) return;
    IRBuilder<> IRB(&I);
    OriginCombiner OC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      OC.Add(OI->get());
    OC.Done(&I);
  }
  size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
    assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
           "Vector of pointers is not a valid shadow type");
    return Ty->isVectorTy() ?
           Ty->getVectorNumElements() * Ty->getScalarSizeInBits() :
           Ty->getPrimitiveSizeInBits();
  }
  /// Cast between two shadow types, extending or truncating as
  /// necessary.
  Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                          bool Signed = false) {
    Type *srcTy = V->getType();
    size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
    size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
    if (srcSizeInBits > 1 && dstSizeInBits == 1)
      return IRB.CreateICmpNE(V, getCleanShadow(V));

    if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
      return IRB.CreateIntCast(V, dstTy, Signed);
    if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
        dstTy->getVectorNumElements() == srcTy->getVectorNumElements())
      return IRB.CreateIntCast(V, dstTy, Signed);
    Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
    Value *V2 =
        IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
    return IRB.CreateBitCast(V2, dstTy);
    // TODO: handle struct types.
  }
  /// Cast an application value to the type of its own shadow.
  Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (V->getType() == ShadowTy)
      return V;
    if (V->getType()->isPtrOrPtrVectorTy())
      return IRB.CreatePtrToInt(V, ShadowTy);
    else
      return IRB.CreateBitCast(V, ShadowTy);
  }
  /// Propagate shadow for arbitrary operation.
  void handleShadowOr(Instruction &I) {
    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      SC.Add(OI->get());
    SC.Done(&I);
  }

  void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }
  // Handle multiplication by constant.
  //
  // Handle a special case of multiplication by constant that may have one or
  // more zeros in the lower bits. This makes corresponding number of lower bits
  // of the result zero as well. We model it by shifting the other operand
  // shadow left by the required number of bits. Effectively, we transform
  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
  // We use multiplication by 2**N instead of shift to cover the case of
  // multiplication by 0, which may occur in some elements of a vector operand.
  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                           Value *OtherArg) {
    Constant *ShadowMul;
    Type *Ty = ConstArg->getType();
    if (Ty->isVectorTy()) {
      unsigned NumElements = Ty->getVectorNumElements();
      Type *EltTy = Ty->getSequentialElementType();
      SmallVector<Constant *, 16> Elements;
      for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
        if (ConstantInt *Elt =
                dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
          const APInt &V = Elt->getValue();
          APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
          Elements.push_back(ConstantInt::get(EltTy, V2));
        } else {
          Elements.push_back(ConstantInt::get(EltTy, 1));
        }
      }
      ShadowMul = ConstantVector::get(Elements);
    } else {
      if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        ShadowMul = ConstantInt::get(Ty, V2);
      } else {
        ShadowMul = ConstantInt::get(Ty, 1);
      }
    }

    IRBuilder<> IRB(&I);
    setShadow(&I,
              IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
    setOrigin(&I, getOrigin(OtherArg));
  }
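  // Worked example of the transformation above: for %r = mul i32 %x, 24
  // (24 = 3 * 2**3) the constant has 3 trailing zero bits, so ShadowMul is
  // 2**3 = 8 and the result shadow is Sx * 8, i.e. Sx shifted left by 3.
  // The three low bits of the product are always zero and hence defined,
  // no matter which bits of %x are uninitialized.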
  void visitMul(BinaryOperator &I) {
    Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
    Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
    if (constOp0 && !constOp1)
      handleMulByConstant(I, constOp0, I.getOperand(1));
    else if (constOp1 && !constOp0)
      handleMulByConstant(I, constOp1, I.getOperand(0));
    else
      handleShadowOr(I);
  }

  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitXor(BinaryOperator &I) { handleShadowOr(I); }
  void handleIntegerDiv(Instruction &I) {
    IRBuilder<> IRB(&I);
    // Strict on the second argument.
    insertShadowCheck(I.getOperand(1), &I);
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

  // Floating point division is side-effect free. We can not require that the
  // divisor is fully initialized and must propagate shadow. See PR37523.
  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }
  /// Instrument == and != comparisons.
  ///
  /// Sometimes the comparison result is known even if some of the bits of the
  /// arguments are not.
  void handleEqualityComparison(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // A == B  <==>  (C = A^B) == 0
    // A != B  <==>  (C = A^B) != 0
    // Sc = Sa | Sb
    Value *C = IRB.CreateXor(A, B);
    Value *Sc = IRB.CreateOr(Sa, Sb);
    // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
    // Result is defined if one of the following is true
    // * there is a defined 1 bit in C
    // * C is fully defined
    // Si = !(C & ~Sc) && Sc
    Value *Zero = Constant::getNullValue(Sc->getType());
    Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
    Value *Si =
        IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
                      IRB.CreateICmpEQ(
                          IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
    Si->setName("_msprop_icmp");
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }
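  // Worked example of the Si formula above: comparing A = 0b10?? (two low
  // bits of A uninitialized) with B = 0b0000 gives C = A ^ B with a defined 1
  // in the top bit and Sc = 0b0011. Since C & ~Sc = 0b1000 != 0, Si is 0: the
  // comparison result is known (not equal) despite the poisoned bits. If
  // instead A = 0b00??, then C & ~Sc == 0 while Sc != 0, so Si is 1 and the
  // result is reported as depending on the uninitialized bits.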
  /// Build the lowest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Maximise the undefined shadow bit, minimize other undefined bits.
      return IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)),
                          SaSignBit);
    } else {
      // Minimize undefined bits.
      return IRB.CreateAnd(A, IRB.CreateNot(Sa));
    }
  }
  /// Build the highest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                 bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Minimise the undefined shadow bit, maximise other undefined bits.
      return IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)),
                          SaOtherBits);
    } else {
      // Maximize undefined bits.
      return IRB.CreateOr(A, Sa);
    }
  }
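  // For example, for an unsigned 4-bit A = 0b1?0? (Sa = 0b0101) the two
  // helpers above return a0 = 0b1000 (undefined bits cleared) and
  // a1 = 0b1101 (undefined bits set), i.e. the bounds of the interval of
  // values A could take at runtime. In the signed case the sign bit is
  // handled separately, because setting it makes the value smaller, not
  // larger.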
  /// Instrument relational comparisons.
  ///
  /// This function does exact shadow propagation for all relational
  /// comparisons of integers, pointers and vectors of those.
  /// FIXME: output seems suboptimal when one of the operands is a constant
  void handleRelationalComparisonExact(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // Let [a0, a1] be the interval of possible values of A, taking into account
    // its undefined bits. Let [b0, b1] be the interval of possible values of B.
    // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
    bool IsSigned = I.isSigned();
    Value *S1 = IRB.CreateICmp(I.getPredicate(),
                               getLowestPossibleValue(IRB, A, Sa, IsSigned),
                               getHighestPossibleValue(IRB, B, Sb, IsSigned));
    Value *S2 = IRB.CreateICmp(I.getPredicate(),
                               getHighestPossibleValue(IRB, A, Sa, IsSigned),
                               getLowestPossibleValue(IRB, B, Sb, IsSigned));
    Value *Si = IRB.CreateXor(S1, S2);
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }
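  // Worked example of the interval logic above: for an unsigned comparison
  // A < B with A = 0b01?? (interval [0b0100, 0b0111]) and a fully defined
  // B = 0b1000, both probes (a0 < b1) and (a1 < b0) are true, so S1 == S2 and
  // the xor yields a clean shadow: the result cannot depend on the
  // uninitialized bits. Had B been 0b0110, the two probes would differ and
  // the result shadow would be poisoned.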
  /// Instrument signed relational comparisons.
  ///
  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
  /// bit of the shadow. Everything else is delegated to handleShadowOr().
  void handleSignedRelationalComparison(ICmpInst &I) {
    Constant *constOp;
    Value *op = nullptr;
    CmpInst::Predicate pre;
    if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
      op = I.getOperand(0);
      pre = I.getPredicate();
    } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
      op = I.getOperand(1);
      pre = I.getSwappedPredicate();
    } else {
      handleShadowOr(I);
      return;
    }

    if ((constOp->isNullValue() &&
         (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
        (constOp->isAllOnesValue() &&
         (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
      IRBuilder<> IRB(&I);
      Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                        "_msprop_icmp_s");
      setShadow(&I, Shadow);
      setOrigin(&I, getOrigin(op));
    } else {
      handleShadowOr(I);
    }
  }
  void visitICmpInst(ICmpInst &I) {
    if (!ClHandleICmp) {
      handleShadowOr(I);
      return;
    }
    if (I.isEquality()) {
      handleEqualityComparison(I);
      return;
    }

    assert(I.isRelational());
    if (ClHandleICmpExact) {
      handleRelationalComparisonExact(I);
      return;
    }
    if (I.isSigned()) {
      handleSignedRelationalComparison(I);
      return;
    }

    assert(I.isUnsigned());
    if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
      handleRelationalComparisonExact(I);
      return;
    }

    handleShadowOr(I);
  }

  void visitFCmpInst(FCmpInst &I) {
    handleShadowOr(I);
  }
  void handleShift(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                   S2->getType());
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }
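  // For example, for %r = shl i32 %x, %n the shadow becomes
  // (Sx << %n) | sext(Sn != 0): with a fully defined shift amount the defined
  // bits of %x simply move by the same amount, while any poisoned bit of %n
  // makes the whole result poisoned, since the positions of the result bits
  // are then unknown.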
  void visitShl(BinaryOperator &I) { handleShift(I); }
  void visitAShr(BinaryOperator &I) { handleShift(I); }
  void visitLShr(BinaryOperator &I) { handleShift(I); }
  /// Instrument llvm.memmove
  ///
  /// At this point we don't know if llvm.memmove will be inlined or not.
  /// If we don't instrument it and it gets inlined,
  /// our interceptor will not kick in and we will lose the memmove.
  /// If we instrument the call here, but it does not get inlined,
  /// we will memmove the shadow twice: which is bad in case
  /// of overlapping regions. So, we simply lower the intrinsic to a call.
  ///
  /// Similar situation exists for memcpy and memset.
  void visitMemMoveInst(MemMoveInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemmoveFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }
  // Similar to memmove: avoid copying shadow twice.
  // This is somewhat unfortunate as it may slowdown small constant memcpys.
  // FIXME: consider doing manual inline for small constant sizes and proper
  // alignment.
  void visitMemCpyInst(MemCpyInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemcpyFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }
  void visitMemSetInst(MemSetInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemsetFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }
  void visitVAStartInst(VAStartInst &I) {
    VAHelper->visitVAStartInst(I);
  }

  void visitVACopyInst(VACopyInst &I) {
    VAHelper->visitVACopyInst(I);
  }
  /// Handle vector store-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD store: writes memory,
  /// has 1 pointer argument and 1 vector argument, returns void.
  bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Value *Shadow = getShadow(&I, 1);
    Value *ShadowPtr, *OriginPtr;

    // We don't know the pointer alignment (could be unaligned SSE store!).
    // Have to assume the worst case.
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), /*Alignment*/ 1, /*isStore*/ true);
    IRB.CreateAlignedStore(Shadow, ShadowPtr, 1);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // FIXME: factor out common code from materializeStores
    if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
    return true;
  }
  /// Handle vector load-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
  /// has 1 pointer argument, returns a vector.
  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      // We don't know the pointer alignment (could be unaligned SSE load!).
      // Have to assume the worst case.
      unsigned Alignment = 1;
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    if (MS.TrackOrigins) {
      if (PropagateShadow)
        setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
      else
        setOrigin(&I, getCleanOrigin());
    }
    return true;
  }
  /// Handle (SIMD arithmetic)-like intrinsics.
  ///
  /// Instrument intrinsics with any number of arguments of the same type,
  /// equal to the return type. The type should be simple (no aggregates or
  /// pointers; vectors are fine).
  /// Caller guarantees that this intrinsic does not access memory.
  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
    Type *RetTy = I.getType();
    if (!(RetTy->isIntOrIntVectorTy() ||
          RetTy->isFPOrFPVectorTy() ||
          RetTy->isX86_MMXTy()))
      return false;

    unsigned NumArgOperands = I.getNumArgOperands();

    for (unsigned i = 0; i < NumArgOperands; ++i) {
      Type *Ty = I.getArgOperand(i)->getType();
      if (Ty != RetTy)
        return false;
    }

    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (unsigned i = 0; i < NumArgOperands; ++i)
      SC.Add(I.getArgOperand(i));
    SC.Done(&I);

    return true;
  }
  /// Heuristically instrument unknown intrinsics.
  ///
  /// The main purpose of this code is to do something reasonable with all
  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
  /// We recognize several classes of intrinsics by their argument types and
  /// ModRefBehaviour and apply special instrumentation when we are reasonably
  /// sure that we know what the intrinsic does.
  ///
  /// We special-case intrinsics where this approach fails. See llvm.bswap
  /// handling as an example of that.
  bool handleUnknownIntrinsic(IntrinsicInst &I) {
    unsigned NumArgOperands = I.getNumArgOperands();
    if (NumArgOperands == 0)
      return false;

    if (NumArgOperands == 2 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getArgOperand(1)->getType()->isVectorTy() &&
        I.getType()->isVoidTy() &&
        !I.onlyReadsMemory()) {
      // This looks like a vector store.
      return handleVectorStoreIntrinsic(I);
    }

    if (NumArgOperands == 1 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getType()->isVectorTy() &&
        I.onlyReadsMemory()) {
      // This looks like a vector load.
      return handleVectorLoadIntrinsic(I);
    }

    if (I.doesNotAccessMemory())
      if (maybeHandleSimpleNomemIntrinsic(I))
        return true;

    // FIXME: detect and handle SSE maskstore/maskload
    return false;
  }
  void handleLifetimeStart(IntrinsicInst &I) {
    if (!PoisonStack)
      return;
    DenseMap<Value *, AllocaInst *> AllocaForValue;
    AllocaInst *AI =
        llvm::findAllocaForValue(I.getArgOperand(1), AllocaForValue);
    if (!AI)
      InstrumentLifetimeStart = false;
    LifetimeStartList.push_back(std::make_pair(&I, AI));
  }
  void handleBswap(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Op = I.getArgOperand(0);
    Type *OpType = Op->getType();
    Function *BswapFunc = Intrinsic::getDeclaration(
        F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
    setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
    setOrigin(&I, getOrigin(Op));
  }
  // Instrument vector convert intrinsic.
  //
  // This function instruments intrinsics like cvtsi2ss:
  // %Out = int_xxx_cvtyyy(%ConvertOp)
  // or
  // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
  // number \p Out elements, and (if has 2 arguments) copies the rest of the
  // elements from \p CopyOp.
  // In most cases conversion involves floating-point value which may trigger a
  // hardware exception when not fully initialized. For this reason we require
  // \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
  // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
  // return a fully initialized value.
  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
    IRBuilder<> IRB(&I);
    Value *CopyOp, *ConvertOp;

    switch (I.getNumArgOperands()) {
    case 3:
      assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
      LLVM_FALLTHROUGH;
    case 2:
      CopyOp = I.getArgOperand(0);
      ConvertOp = I.getArgOperand(1);
      break;
    case 1:
      ConvertOp = I.getArgOperand(0);
      CopyOp = nullptr;
      break;
    default:
      llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
    }

    // The first *NumUsedElements* elements of ConvertOp are converted to the
    // same number of output elements. The rest of the output is copied from
    // CopyOp, or (if not available) filled with zeroes.
    // Combine shadow for elements of ConvertOp that are used in this operation,
    // and insert a check.
    // FIXME: consider propagating shadow of ConvertOp, at least in the case of
    // int->any conversion.
    Value *ConvertShadow = getShadow(ConvertOp);
    Value *AggShadow = nullptr;
    if (ConvertOp->getType()->isVectorTy()) {
      AggShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
      for (int i = 1; i < NumUsedElements; ++i) {
        Value *MoreShadow = IRB.CreateExtractElement(
            ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
        AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
      }
    } else {
      AggShadow = ConvertShadow;
    }
    assert(AggShadow->getType()->isIntegerTy());
    insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

    // Build result shadow by zero-filling parts of CopyOp shadow that come from
    // ConvertOp.
    if (CopyOp) {
      assert(CopyOp->getType() == I.getType());
      assert(CopyOp->getType()->isVectorTy());
      Value *ResultShadow = getShadow(CopyOp);
      Type *EltTy = ResultShadow->getType()->getVectorElementType();
      for (int i = 0; i < NumUsedElements; ++i) {
        ResultShadow = IRB.CreateInsertElement(
            ResultShadow, ConstantInt::getNullValue(EltTy),
            ConstantInt::get(IRB.getInt32Ty(), i));
      }
      setShadow(&I, ResultShadow);
      setOrigin(&I, getOrigin(CopyOp));
    } else {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
    }
  }
  // Given a scalar or vector, extract lower 64 bits (or less), and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    if (S->getType()->isVectorTy())
      S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
    assert(S->getType()->getPrimitiveSizeInBits() <= 64);
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }
  // Given a vector, extract its first element, and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
    Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
    Type *T = S->getType();
    assert(T->isVectorTy());
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return IRB.CreateSExt(S2, T);
  }
  // Instrument vector shift intrinsic.
  //
  // This function instruments intrinsics like int_x86_avx2_psll_w.
  // Intrinsic shifts %In by %ShiftSize bits.
  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
  // size, and the rest is ignored. Behavior is defined even if shift size is
  // greater than register (or field) width.
  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
    assert(I.getNumArgOperands() == 2);
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
                             : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledValue(),
                                  {IRB.CreateBitCast(S1, V1->getType()), V2});
    Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }
  // Get an X86_MMX-sized vector type.
  Type *getMMXVectorTy(unsigned EltSizeInBits) {
    const unsigned X86_MMXSizeInBits = 64;
    assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 &&
           "Illegal MMX vector element size");
    return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                           X86_MMXSizeInBits / EltSizeInBits);
  }
  // Returns a signed counterpart for an (un)signed-saturate-and-pack
  // intrinsic.
  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
    switch (id) {
    case Intrinsic::x86_sse2_packsswb_128:
    case Intrinsic::x86_sse2_packuswb_128:
      return Intrinsic::x86_sse2_packsswb_128;

    case Intrinsic::x86_sse2_packssdw_128:
    case Intrinsic::x86_sse41_packusdw:
      return Intrinsic::x86_sse2_packssdw_128;

    case Intrinsic::x86_avx2_packsswb:
    case Intrinsic::x86_avx2_packuswb:
      return Intrinsic::x86_avx2_packsswb;

    case Intrinsic::x86_avx2_packssdw:
    case Intrinsic::x86_avx2_packusdw:
      return Intrinsic::x86_avx2_packssdw;

    case Intrinsic::x86_mmx_packsswb:
    case Intrinsic::x86_mmx_packuswb:
      return Intrinsic::x86_mmx_packsswb;

    case Intrinsic::x86_mmx_packssdw:
      return Intrinsic::x86_mmx_packssdw;
    default:
      llvm_unreachable("unexpected intrinsic id");
    }
  }
  // Instrument vector pack intrinsic.
  //
  // This function instruments intrinsics like x86_mmx_packsswb, that
  // packs elements of 2 input vectors into half as many bits with saturation.
  // Shadow is propagated with the signed variant of the same intrinsic applied
  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
  // EltSizeInBits is used only for x86mmx arguments.
  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
    assert(I.getNumArgOperands() == 2);
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    IRBuilder<> IRB(&I);
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    assert(isX86_MMX || S1->getType()->isVectorTy());

    // SExt and ICmpNE below must apply to individual elements of input vectors.
    // In case of x86mmx arguments, cast them to appropriate vector types and
    // back.
    Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
    if (isX86_MMX) {
      S1 = IRB.CreateBitCast(S1, T);
      S2 = IRB.CreateBitCast(S2, T);
    }
    Value *S1_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
    Value *S2_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
    if (isX86_MMX) {
      Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
      S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
      S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
    }

    Function *ShadowFn = Intrinsic::getDeclaration(
        F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

    Value *S =
        IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
    if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
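  // The signed variant is required here: shadow elements are either 0 or -1
  // (after the sext of the != 0 comparison), and a signed saturating pack
  // keeps -1 as all-ones in the narrower element, whereas an unsigned pack
  // would clamp -1 to 0 and silently drop the poison.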
  // Instrument sum-of-absolute-differences intrinsic.
  void handleVectorSadIntrinsic(IntrinsicInst &I) {
    const unsigned SignificantBitsPerResultElement = 16;
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
    unsigned ZeroBitsPerResultElement =
        ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
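  // Rationale for the shift above: psadbw sums eight absolute byte
  // differences, so each result element fits in 16 significant bits; the
  // remaining high bits of every element are always zero and therefore
  // defined, which the LShr reflects by clearing the corresponding shadow
  // bits.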
  // Instrument multiply-add intrinsic.
  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                  unsigned EltSizeInBits = 0) {
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  // Instrument compare-packed intrinsic.
  // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
  // all-ones shadow.
  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ResTy = getShadowTy(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = IRB.CreateSExt(
        IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  // Instrument compare-scalar intrinsic.
  // This handles both cmp* intrinsics which return the result in the first
  // element of a vector, and comi* which return the result as i32.
  void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  void handleStmxcsr(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    Value *ShadowPtr =
        getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true)
            .first;

    IRB.CreateStore(getCleanShadow(Ty),
                    IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);
  }
  void handleLdmxcsr(IntrinsicInst &I) {
    if (!InsertChecks) return;

    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    unsigned Alignment = 1;
    Value *ShadowPtr, *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
    Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
                                    : getCleanOrigin();
    insertShadowCheck(Shadow, Origin, &I);
  }
  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Align, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // Uninitialized mask is kind of like uninitialized address, but not as
      // scary.
      insertShadowCheck(Mask, &I);
    }

    IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask);

    if (MS.TrackOrigins) {
      auto &DL = F.getParent()->getDataLayout();
      paintOrigin(IRB, getOrigin(V), OriginPtr,
                  DL.getTypeStoreSize(Shadow->getType()),
                  std::max(Align, kMinOriginAlignment));
    }
  }
  bool handleMaskedLoad(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue();
    Value *Mask = I.getArgOperand(2);
    Value *PassThru = I.getArgOperand(3);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false);
      setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask,
                                         getShadow(PassThru), "_msmaskedld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      insertShadowCheck(Mask, &I);
    }

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        // Choose between PassThru's and the loaded value's origins.
        Value *MaskedPassThruShadow = IRB.CreateAnd(
            getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy));

        Value *Acc = IRB.CreateExtractElement(
            MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
        for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N;
             ++i) {
          Value *More = IRB.CreateExtractElement(
              MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i));
          Acc = IRB.CreateOr(Acc, More);
        }

        Value *Origin = IRB.CreateSelect(
            IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())),
            getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr));

        setOrigin(&I, Origin);
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
    return true;
  }
  // Instrument BMI / BMI2 intrinsics.
  // All of these intrinsics are Z = I(X, Y)
  // where the types of all operands and the result match, and are either i32
  // or i64. The following instrumentation happens to work for all of them:
  //   Sz = I(Sx, Y) | (sext (Sy != 0))
  void handleBmiIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ShadowTy = getShadowTy(&I);

    // If any bit of the mask operand is poisoned, then the whole thing is.
    Value *SMask = getShadow(&I, 1);
    SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)),
                           ShadowTy);
    // Apply the same intrinsic to the shadow of the first operand.
    Value *S = IRB.CreateCall(I.getCalledFunction(),
                              {getShadow(&I, 0), I.getOperand(1)});
    S = IRB.CreateOr(SMask, S);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
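  // For example, for pext (parallel bit extract) this computes
  // Sz = pext(Sx, Y) | sext(Sy != 0): with a fully defined mask Y, pext
  // gathers exactly the shadow bits of the extracted source bits; if any
  // mask bit is poisoned, the whole result is marked uninitialized.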
  void visitIntrinsicInst(IntrinsicInst &I) {
    switch (I.getIntrinsicID()) {
    case Intrinsic::lifetime_start:
      handleLifetimeStart(I);
      break;
    case Intrinsic::bswap:
      handleBswap(I);
      break;
    case Intrinsic::masked_store:
      handleMaskedStore(I);
      break;
    case Intrinsic::masked_load:
      handleMaskedLoad(I);
      break;
    case Intrinsic::x86_sse_stmxcsr:
      handleStmxcsr(I);
      break;
    case Intrinsic::x86_sse_ldmxcsr:
      handleLdmxcsr(I);
      break;
    case Intrinsic::x86_avx512_vcvtsd2usi64:
    case Intrinsic::x86_avx512_vcvtsd2usi32:
    case Intrinsic::x86_avx512_vcvtss2usi64:
    case Intrinsic::x86_avx512_vcvtss2usi32:
    case Intrinsic::x86_avx512_cvttss2usi64:
    case Intrinsic::x86_avx512_cvttss2usi:
    case Intrinsic::x86_avx512_cvttsd2usi64:
    case Intrinsic::x86_avx512_cvttsd2usi:
    case Intrinsic::x86_avx512_cvtusi2ss:
    case Intrinsic::x86_avx512_cvtusi642sd:
    case Intrinsic::x86_avx512_cvtusi642ss:
    case Intrinsic::x86_sse2_cvtsd2si64:
    case Intrinsic::x86_sse2_cvtsd2si:
    case Intrinsic::x86_sse2_cvtsd2ss:
    case Intrinsic::x86_sse2_cvttsd2si64:
    case Intrinsic::x86_sse2_cvttsd2si:
    case Intrinsic::x86_sse_cvtss2si64:
    case Intrinsic::x86_sse_cvtss2si:
    case Intrinsic::x86_sse_cvttss2si64:
    case Intrinsic::x86_sse_cvttss2si:
      handleVectorConvertIntrinsic(I, 1);
      break;
    case Intrinsic::x86_sse_cvtps2pi:
    case Intrinsic::x86_sse_cvttps2pi:
      handleVectorConvertIntrinsic(I, 2);
      break;

    case Intrinsic::x86_avx512_psll_w_512:
    case Intrinsic::x86_avx512_psll_d_512:
    case Intrinsic::x86_avx512_psll_q_512:
    case Intrinsic::x86_avx512_pslli_w_512:
    case Intrinsic::x86_avx512_pslli_d_512:
    case Intrinsic::x86_avx512_pslli_q_512:
    case Intrinsic::x86_avx512_psrl_w_512:
    case Intrinsic::x86_avx512_psrl_d_512:
    case Intrinsic::x86_avx512_psrl_q_512:
    case Intrinsic::x86_avx512_psra_w_512:
    case Intrinsic::x86_avx512_psra_d_512:
    case Intrinsic::x86_avx512_psra_q_512:
    case Intrinsic::x86_avx512_psrli_w_512:
    case Intrinsic::x86_avx512_psrli_d_512:
    case Intrinsic::x86_avx512_psrli_q_512:
    case Intrinsic::x86_avx512_psrai_w_512:
    case Intrinsic::x86_avx512_psrai_d_512:
    case Intrinsic::x86_avx512_psrai_q_512:
    case Intrinsic::x86_avx512_psra_q_256:
    case Intrinsic::x86_avx512_psra_q_128:
    case Intrinsic::x86_avx512_psrai_q_256:
    case Intrinsic::x86_avx512_psrai_q_128:
    case Intrinsic::x86_avx2_psll_w:
    case Intrinsic::x86_avx2_psll_d:
    case Intrinsic::x86_avx2_psll_q:
    case Intrinsic::x86_avx2_pslli_w:
    case Intrinsic::x86_avx2_pslli_d:
    case Intrinsic::x86_avx2_pslli_q:
    case Intrinsic::x86_avx2_psrl_w:
    case Intrinsic::x86_avx2_psrl_d:
    case Intrinsic::x86_avx2_psrl_q:
    case Intrinsic::x86_avx2_psra_w:
    case Intrinsic::x86_avx2_psra_d:
    case Intrinsic::x86_avx2_psrli_w:
    case Intrinsic::x86_avx2_psrli_d:
    case Intrinsic::x86_avx2_psrli_q:
    case Intrinsic::x86_avx2_psrai_w:
    case Intrinsic::x86_avx2_psrai_d:
    case Intrinsic::x86_sse2_psll_w:
    case Intrinsic::x86_sse2_psll_d:
    case Intrinsic::x86_sse2_psll_q:
    case Intrinsic::x86_sse2_pslli_w:
    case Intrinsic::x86_sse2_pslli_d:
    case Intrinsic::x86_sse2_pslli_q:
    case Intrinsic::x86_sse2_psrl_w:
    case Intrinsic::x86_sse2_psrl_d:
    case Intrinsic::x86_sse2_psrl_q:
    case Intrinsic::x86_sse2_psra_w:
    case Intrinsic::x86_sse2_psra_d:
    case Intrinsic::x86_sse2_psrli_w:
    case Intrinsic::x86_sse2_psrli_d:
    case Intrinsic::x86_sse2_psrli_q:
    case Intrinsic::x86_sse2_psrai_w:
    case Intrinsic::x86_sse2_psrai_d:
    case Intrinsic::x86_mmx_psll_w:
    case Intrinsic::x86_mmx_psll_d:
    case Intrinsic::x86_mmx_psll_q:
    case Intrinsic::x86_mmx_pslli_w:
    case Intrinsic::x86_mmx_pslli_d:
    case Intrinsic::x86_mmx_pslli_q:
    case Intrinsic::x86_mmx_psrl_w:
    case Intrinsic::x86_mmx_psrl_d:
    case Intrinsic::x86_mmx_psrl_q:
    case Intrinsic::x86_mmx_psra_w:
    case Intrinsic::x86_mmx_psra_d:
    case Intrinsic::x86_mmx_psrli_w:
    case Intrinsic::x86_mmx_psrli_d:
    case Intrinsic::x86_mmx_psrli_q:
    case Intrinsic::x86_mmx_psrai_w:
    case Intrinsic::x86_mmx_psrai_d:
      handleVectorShiftIntrinsic(I, /* Variable */ false);
      break;
    case Intrinsic::x86_avx2_psllv_d:
    case Intrinsic::x86_avx2_psllv_d_256:
    case Intrinsic::x86_avx512_psllv_d_512:
    case Intrinsic::x86_avx2_psllv_q:
    case Intrinsic::x86_avx2_psllv_q_256:
    case Intrinsic::x86_avx512_psllv_q_512:
    case Intrinsic::x86_avx2_psrlv_d:
    case Intrinsic::x86_avx2_psrlv_d_256:
    case Intrinsic::x86_avx512_psrlv_d_512:
    case Intrinsic::x86_avx2_psrlv_q:
    case Intrinsic::x86_avx2_psrlv_q_256:
    case Intrinsic::x86_avx512_psrlv_q_512:
    case Intrinsic::x86_avx2_psrav_d:
    case Intrinsic::x86_avx2_psrav_d_256:
    case Intrinsic::x86_avx512_psrav_d_512:
    case Intrinsic::x86_avx512_psrav_q_128:
    case Intrinsic::x86_avx512_psrav_q_256:
    case Intrinsic::x86_avx512_psrav_q_512:
      handleVectorShiftIntrinsic(I, /* Variable */ true);
      break;

    case Intrinsic::x86_sse2_packsswb_128:
    case Intrinsic::x86_sse2_packssdw_128:
    case Intrinsic::x86_sse2_packuswb_128:
    case Intrinsic::x86_sse41_packusdw:
    case Intrinsic::x86_avx2_packsswb:
    case Intrinsic::x86_avx2_packssdw:
    case Intrinsic::x86_avx2_packuswb:
    case Intrinsic::x86_avx2_packusdw:
      handleVectorPackIntrinsic(I);
      break;

    case Intrinsic::x86_mmx_packsswb:
    case Intrinsic::x86_mmx_packuswb:
      handleVectorPackIntrinsic(I, 16);
      break;

    case Intrinsic::x86_mmx_packssdw:
      handleVectorPackIntrinsic(I, 32);
      break;

    case Intrinsic::x86_mmx_psad_bw:
    case Intrinsic::x86_sse2_psad_bw:
    case Intrinsic::x86_avx2_psad_bw:
      handleVectorSadIntrinsic(I);
      break;

    case Intrinsic::x86_sse2_pmadd_wd:
    case Intrinsic::x86_avx2_pmadd_wd:
    case Intrinsic::x86_ssse3_pmadd_ub_sw_128:
    case Intrinsic::x86_avx2_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I);
      break;

    case Intrinsic::x86_ssse3_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I, 8);
      break;

    case Intrinsic::x86_mmx_pmadd_wd:
      handleVectorPmaddIntrinsic(I, 16);
      break;

    case Intrinsic::x86_sse_cmp_ss:
    case Intrinsic::x86_sse2_cmp_sd:
    case Intrinsic::x86_sse_comieq_ss:
    case Intrinsic::x86_sse_comilt_ss:
    case Intrinsic::x86_sse_comile_ss:
    case Intrinsic::x86_sse_comigt_ss:
    case Intrinsic::x86_sse_comige_ss:
    case Intrinsic::x86_sse_comineq_ss:
    case Intrinsic::x86_sse_ucomieq_ss:
    case Intrinsic::x86_sse_ucomilt_ss:
    case Intrinsic::x86_sse_ucomile_ss:
    case Intrinsic::x86_sse_ucomigt_ss:
    case Intrinsic::x86_sse_ucomige_ss:
    case Intrinsic::x86_sse_ucomineq_ss:
    case Intrinsic::x86_sse2_comieq_sd:
    case Intrinsic::x86_sse2_comilt_sd:
    case Intrinsic::x86_sse2_comile_sd:
    case Intrinsic::x86_sse2_comigt_sd:
    case Intrinsic::x86_sse2_comige_sd:
    case Intrinsic::x86_sse2_comineq_sd:
    case Intrinsic::x86_sse2_ucomieq_sd:
    case Intrinsic::x86_sse2_ucomilt_sd:
    case Intrinsic::x86_sse2_ucomile_sd:
    case Intrinsic::x86_sse2_ucomigt_sd:
    case Intrinsic::x86_sse2_ucomige_sd:
    case Intrinsic::x86_sse2_ucomineq_sd:
      handleVectorCompareScalarIntrinsic(I);
      break;

    case Intrinsic::x86_sse_cmp_ps:
    case Intrinsic::x86_sse2_cmp_pd:
      // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
      // generates reasonably looking IR that fails in the backend with "Do not
      // know how to split the result of this operator!".
      handleVectorComparePackedIntrinsic(I);
      break;

    case Intrinsic::x86_bmi_bextr_32:
    case Intrinsic::x86_bmi_bextr_64:
    case Intrinsic::x86_bmi_bzhi_32:
    case Intrinsic::x86_bmi_bzhi_64:
    case Intrinsic::x86_bmi_pdep_32:
    case Intrinsic::x86_bmi_pdep_64:
    case Intrinsic::x86_bmi_pext_32:
    case Intrinsic::x86_bmi_pext_64:
      handleBmiIntrinsic(I);
      break;

    case Intrinsic::is_constant:
      // The result of llvm.is.constant() is always defined.
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      break;

    default:
      if (!handleUnknownIntrinsic(I))
        visitInstruction(I);
      break;
    }
  }
  void visitCallSite(CallSite CS) {
    Instruction &I = *CS.getInstruction();
    assert(!I.getMetadata("nosanitize"));
    assert((CS.isCall() || CS.isInvoke() || CS.isCallBr()) &&
           "Unknown type of CallSite");
    if (CS.isCallBr() || (CS.isCall() && cast<CallInst>(&I)->isInlineAsm())) {
      // For inline asm (either a call to asm function, or callbr instruction),
      // do the usual thing: check argument shadow and mark all outputs as
      // clean. Note that any side effects of the inline asm that are not
      // immediately visible in its constraints are not handled.
      if (ClHandleAsmConservative && MS.CompileKernel)
        visitAsmInstruction(I);
      else
        visitInstruction(I);
      return;
    }
    if (CS.isCall()) {
      CallInst *Call = cast<CallInst>(&I);
      assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us. To
      // prevent this code from being optimized out, mark that function
      // non-readonly in advance.
      if (Function *Func = Call->getCalledFunction()) {
        // Clear out readonly/readnone attributes.
        AttrBuilder B;
        B.addAttribute(Attribute::ReadOnly)
          .addAttribute(Attribute::ReadNone);
        Func->removeAttributes(AttributeList::FunctionIndex, B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&I);

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << " CallSite: " << I << "\n");
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CS.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the Shadow for arg even if it is ByVal, because
      // in that case getShadow() will copy the actual arg shadow to
      // __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << " Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();
      if (CS.paramHasAttr(i, Attribute::ByVal)) {
        assert(A->getType()->isPointerTy() &&
               "ByVal argument is not a pointer!");
        Size = DL.getTypeAllocSize(A->getType()->getPointerElementType());
        if (ArgOffset + Size > kParamTLSSize) break;
        unsigned ParamAlignment = CS.getParamAlignment(i);
        unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment);
        Value *AShadowPtr =
            getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ false)
                .first;

        Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                 Alignment, Size);
        // TODO(glider): need to copy origins.
      } else {
        Size = DL.getTypeAllocSize(A->getType());
        if (ArgOffset + Size > kParamTLSSize) break;
        Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
                                       kShadowTLSAlignment);
        Constant *Cst = dyn_cast<Constant>(ArgShadow);
        if (Cst && Cst->isNullValue()) ArgIsInitialized = true;
      }
      if (MS.TrackOrigins && !ArgIsInitialized)
        IRB.CreateStore(getOrigin(A),
                        getOriginPtrForArgument(A, IRB, ArgOffset));
      assert(Size != 0 && Store != nullptr);
      LLVM_DEBUG(dbgs() << " Param:" << *Store << "\n");
      ArgOffset += alignTo(Size, 8);
    }
    LLVM_DEBUG(dbgs() << " done with call args\n");

    FunctionType *FT = CS.getFunctionType();
    if (FT->isVarArg()) {
      VAHelper->visitCallSite(CS, IRB);
    }

    // Now, get the shadow for the RetVal.
    if (!I.getType()->isSized()) return;
    // Don't emit the epilogue for musttail call returns.
    if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
    IRBuilder<> IRBBefore(&I);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&I, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (CS.isCall()) {
      NextInsn = ++I.getIterator();
      assert(NextInsn != I.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&I, getCleanShadow(&I));
        setOrigin(&I, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow = IRBAfter.CreateAlignedLoad(
        getShadowTy(&I), getShadowPtrForRetval(&I, IRBAfter),
        kShadowTLSAlignment, "_msret");
    setShadow(&I, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&I, IRBAfter.CreateLoad(MS.OriginTy,
                                        getOriginPtrForRetval(IRBAfter)));
  }
  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }
  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    if (CheckReturnValue) {
      insertShadowCheck(RetVal, &I);
      Value *Shadow = getCleanShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
    } else {
      Value *Shadow = getShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }
  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }
  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by __msan_va_arg_overflow_size_tls at the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }
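  // For example, for a local variable "buf" in function "foo" the description
  // string produced above is "----buf@foo"; the runtime prints it when
  // reporting a stack-originated use of an uninitialized value.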
  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) =
          getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(), 1, /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment());
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }
  void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }
  void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) {
    if (!InsPoint)
      InsPoint = &I;
    IRBuilder<> IRB(InsPoint->getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      poisonAllocaKmsan(I, IRB, Len);
    else
      poisonAllocaUserspace(I, IRB, Len);
  }
  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    // We'll get to this alloca later unless it's poisoned at the corresponding
    // llvm.lifetime.start.
    AllocaSet.insert(&I);
  }
  void visitSelectInst(SelectInst &I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do
      // an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (condition is poisoned), look for bits in c and d that are equal
      // and both unpoisoned.
      // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.
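      // For example, if c == d and both are fully initialized, then
      // (c^d) | Sc | Sd == 0, so the result is treated as initialized even
      // when the condition shadow Sb is poisoned.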
      // Cast arguments to shadow-compatible type.
      C = CreateAppToShadowCast(IRB, C);
      D = CreateAppToShadowCast(IRB, D);

      // Result shadow if condition shadow is 1.
      Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd});
    }
    Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
    setShadow(&I, Sa);
    if (MS.TrackOrigins) {
      // Origins are always i32, so any vector conditions must be flattened.
      // FIXME: consider tracking vector origins for app vectors?
      if (B->getType()->isVectorTy()) {
        Type *FlatTy = getShadowTyNoVec(B->getType());
        B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
                             ConstantInt::getNullValue(FlatTy));
        Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
                              ConstantInt::getNullValue(FlatTy));
      }
      // a = select b, c, d
      // Oa = Sb ? Ob : (b ? Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }
  void visitLandingPadInst(LandingPadInst &I) {
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }
  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << " ResShadow: " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }
  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << " AggShadow: " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << " InsShadow: " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << " Res: " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }
  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }
  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }
  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check its value for being initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is unsized).
    // Each such pointer is instrumented with a call to the runtime library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }
  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = dyn_cast<Value>(CB)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      StructType *ST = dyn_cast_or_null<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }
  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of the
    // CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single structure
    //    (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
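    // For example (illustrative only), for
    //   asm("..." : "=r"(a), "=m"(b) : "r"(c));
    // nR = 1 (the "=r" output is the SSA value of the CallInst), nO = 1 (the
    // "=m" output is passed as the first operand), nI = 1, and the CallInst
    // has nO + nI + 1 = 3 operands.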
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallBase *CB = dyn_cast<CallBase>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CB->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CB);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CB->getNumOperands() - 1;

    // Check input arguments. Doing so before unpoisoning output arguments, so
    // that we won't overwrite uninit values before checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};
/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallSite for more details.
  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;
;
3721 MemorySanitizer
&MS
;
3722 MemorySanitizerVisitor
&MSV
;
3723 Value
*VAArgTLSCopy
= nullptr;
3724 Value
*VAArgTLSOriginCopy
= nullptr;
3725 Value
*VAArgOverflowSize
= nullptr;
3727 SmallVector
<CallInst
*, 16> VAStartInstrumentationList
;
3729 enum ArgKind
{ AK_GeneralPurpose
, AK_FloatingPoint
, AK_Memory
};
3731 VarArgAMD64Helper(Function
&F
, MemorySanitizer
&MS
,
3732 MemorySanitizerVisitor
&MSV
)
3733 : F(F
), MS(MS
), MSV(MSV
) {
3734 AMD64FpEndOffset
= AMD64FpEndOffsetSSE
;
3735 for (const auto &Attr
: F
.getAttributes().getFnAttributes()) {
3736 if (Attr
.isStringAttribute() &&
3737 (Attr
.getKindAsString() == "target-features")) {
3738 if (Attr
.getValueAsString().contains("-sse"))
3739 AMD64FpEndOffset
= AMD64FpEndOffsetNoSSE
;
  ArgKind classifyArgument(Value *arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }
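  // For example, an i32 or a pointer is classified as AK_GeneralPurpose and a
  // double or <4 x float> as AK_FloatingPoint; anything else (e.g. an i128 or
  // an aggregate passed by value) falls back to AK_Memory and is handled via
  // the overflow area in visitCallSite() below.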
  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GpOffset = 0;
    unsigned FpOffset = AMD64GpEndOffset;
    unsigned OverflowOffset = AMD64FpEndOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        // ByVal arguments always go to the overflow area.
        // Fixed arguments passed through the overflow area will be stepped
        // over by va_start, so don't count them towards the offset.
        if (IsFixed)
          continue;
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        Value *ShadowBase = getShadowPtrForVAArgument(
            RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8));
        Value *OriginBase = nullptr;
        if (MS.TrackOrigins)
          OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset);
        OverflowOffset += alignTo(ArgSize, 8);
        if (!ShadowBase)
          continue;
        Value *ShadowPtr, *OriginPtr;
        std::tie(ShadowPtr, OriginPtr) =
            MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment,
                                   /*isStore*/ false);

        IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
                         kShadowTLSAlignment, ArgSize);
        if (MS.TrackOrigins)
          IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr,
                           kShadowTLSAlignment, ArgSize);
      } else {
        ArgKind AK = classifyArgument(A);
        if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
          AK = AK_Memory;
        if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
          AK = AK_Memory;
        Value *ShadowBase, *OriginBase = nullptr;
        switch (AK) {
        case AK_GeneralPurpose:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8);
          if (MS.TrackOrigins)
            OriginBase = getOriginPtrForVAArgument(A->getType(), IRB, GpOffset);
          GpOffset += 8;
          break;
        case AK_FloatingPoint:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16);
          if (MS.TrackOrigins)
            OriginBase = getOriginPtrForVAArgument(A->getType(), IRB, FpOffset);
          FpOffset += 16;
          break;
        case AK_Memory:
          if (IsFixed)
            continue;
          uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset);
          OverflowOffset += alignTo(ArgSize, 8);
        }
        // Take fixed arguments into account for GpOffset and FpOffset,
        // but don't actually store shadows for them.
        // TODO(glider): don't call get*PtrForVAArgument() for them.
        if (IsFixed)
          continue;
        if (!ShadowBase)
          continue;
        Value *Shadow = MSV.getShadow(A);
        IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment);
        if (MS.TrackOrigins) {
          Value *Origin = MSV.getOrigin(A);
          unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
          MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                          std::max(kShadowTLSAlignment, kMinOriginAlignment));
        }
      }
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg_va_s");
  }
  /// Compute the origin address for a given va_arg.
  Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    // getOriginPtrForVAArgument() is always called after
    // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never
    // overflow.
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }
  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);

    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 24, Alignment, false);
    // We shouldn't need to zero out the origins, as they're only checked for
    // nonzero shadow.
  }
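  // For reference, the x86_64 va_list tag is roughly
  //   struct { unsigned gp_offset, fp_offset; void *overflow_arg_area;
  //            void *reg_save_area; };   // 24 bytes
  // which is the 24-byte region unpoisoned above; finalizeInstrumentation()
  // below reads overflow_arg_area (offset 8) and reg_save_area (offset 16)
  // out of it.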
  void visitVAStartInst(VAStartInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64)
      return;
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64) return;
    unpoisonVAListTagForInst(I);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, 8, MS.VAArgOriginTLS, 8, CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);

      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 16)),
          PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 16;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       AMD64FpEndOffset);
      if (MS.TrackOrigins)
        IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                         Alignment, AMD64FpEndOffset);
      Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 8)),
          PointerType::get(OverflowArgAreaPtrTy, 0));
      Value *OverflowArgAreaPtr =
          IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
      Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
      std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
          MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                             AMD64FpEndOffset);
      IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
      if (MS.TrackOrigins) {
        SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                        AMD64FpEndOffset);
        IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                         VAArgOverflowSize);
      }
    }
  }
};
/// MIPS64-specific implementation of VarArgHelper.
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments with size < 8 to match the placement
        // of bits in a big-endian system.
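        // For example, an i32 vararg occupies the high 4 bytes of its 8-byte
        // slot on big-endian mips64, so the shadow offset is bumped by
        // 8 - 4 = 4 to line up with where the value actually lives.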
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // Here using VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creation of
    // a new class member i.e. it is the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }
  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};
/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;
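  // For reference, 64 corresponds to the 8 general-purpose argument registers
  // (x0-x7) * 8 bytes, and 128 to the 8 FP/SIMD argument registers (v0-v7)
  // * 16 bytes.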

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
  ArgKind classifyArgument(Value *arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }
  // The instrumentation stores the argument shadow in a non ABI-specific
  // format because it does not know which argument is named (since Clang,
  // like in the x86_64 case, lowers the va_args in the frontend and this pass
  // only sees the low level code that deals with va_list internals).
  // The first seven GR registers are saved in the first 56 bytes of the
  // va_arg tls array, followed by the first 8 FP/SIMD registers, and then
  // the remaining arguments.
  // Using a constant offset within the va_arg TLS array allows a fast copy
  // in the finalize instrumentation.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }
  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }
  // Retrieve a va_list field of 'void*' size.
  Value *getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value *getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }
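  // For reference, the AArch64 va_list tag is roughly
  //   struct { void *__stack; void *__gr_top; void *__vr_top;
  //            int __gr_offs; int __vr_offs; };   // 32 bytes
  // so the getVAField64/getVAField32 calls below use offsets 0, 8, 16, 24
  // and 28, and visitVAStartInst() above unpoisons all 32 bytes of the tag.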
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for 64-bit general registers xn-x7 and another
      // for 128-bit FP/SIMD vn-v7).
      // We need then to propagate the shadow arguments on both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments are saved on shadow for 'va::stack'.
      // One caveat is that it requires only the non-named arguments to be
      // propagated, however at the call site instrumentation 'all' the
      // arguments are saved. So to copy the shadow values from the va_arg TLS
      // array we need to adjust the offset for both GR and VR fields based on
      // the __{gr,vr}_offs value (since they are stores based on incoming
      // named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_off and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_off and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // We do not know how many named arguments are being used and, at the
      // callsite, all the arguments were saved. Since __gr_off is defined as
      // '0 - ((8 - named_gr) * 8)', the idea is to just propagate the variadic
      // arguments by ignoring the bytes of shadow from named arguments.
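      // For example, with two named GR arguments __gr_offs is
      // 0 - ((8 - 2) * 8) == -48, so GrRegSaveAreaShadowPtrOff below becomes
      // 64 + (-48) == 16 and the copy skips the 16 bytes of shadow that
      // belong to the named arguments.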
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);

      // And finally for remaining arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 16, /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
                       VAArgOverflowSize);
    }
  }
};
/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays
    // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
    // and QPX vectors are aligned to 32 bytes. For that reason, we
    // compute current offset from stack pointer (which is always properly
    // aligned), and offset for the first vararg, then subtract them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // Parameter save area starts at 48 bytes from frame pointer for ABIv1,
    // and 32 bytes for ABIv2. This is usually determined by target
    // endianness, but in theory could be overridden by function attribute.
    // For simplicity, we ignore it here (it'd only matter for QPX vectors).
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments with size < 8 to match the
          // placement of bits in a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // Here using VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creation of
    // a new class member i.e. it is the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }
  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};
/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace
static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is only implemented on AMD64. False positives are possible
  // on other platforms.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}
bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && (&F == MsanCtorFunction))
    return false;
  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
    .addAttribute(Attribute::ReadNone);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}