//===- MemorySanitizer.cpp - detector of uninitialized reads -------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
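///
/// For example (an illustration, not the exact generated IR), for c = a + b
/// the instrumentation conceptually computes
///   shadow(c) = shadow(a) | shadow(b)
/// i.e. a bit of the result is treated as uninitialized if the corresponding
/// bit of either operand is.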
///
/// But there are differences too. The first and the major one:
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But this brings the major issue
/// as well: msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded so races are not a
/// concern there. Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
///
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
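///
/// As a rough sketch (not the exact ABI), a call "r = f(x)" becomes:
///   store shadow(x) into __msan_param_tls   ; in the caller
///   call f                                  ; callee reads __msan_param_tls
///   shadow(r) = load from __msan_retval_tls ; written by the callee
/// so shadow travels alongside the values without changing function
/// signatures.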
///
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
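///
/// For instance (a simplified sketch), for c = a + b the origin is chosen as
///   origin(c) = shadow(b) != 0 ? origin(b) : origin(a)
/// so the origin of one of the uninitialized operands is the one reported.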
///
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from two different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region on
/// a short (i.e. 1 byte) clean store, and it is also good for performance.
///
/// Ideally, every atomic store of application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, atomic store
/// of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
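///
/// Conceptually (a sketch, not the exact generated IR):
///   store clean shadow to shadow(p)   ; emitted before ...
///   store atomic %v to p              ; ... the application store
/// and
///   %v = load atomic p                ; the application load, followed by ...
///   %s = load shadow(p)               ; ... the shadow load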
///
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
///
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It can be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics can be only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size),
/// which defers the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
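///
/// For example (a sketch), for an asm statement that writes through %ptr the
/// pass emits, before the asm call, roughly
///   call void @__msan_instrument_asm_store(i8* %ptr, i64 <size>)
/// and the runtime decides whether that range may be unpoisoned.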
///
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory.
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side with respect to possible false
/// positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
///
//===----------------------------------------------------------------------===//
142 #include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
143 #include "llvm/ADT/APInt.h"
144 #include "llvm/ADT/ArrayRef.h"
145 #include "llvm/ADT/DepthFirstIterator.h"
146 #include "llvm/ADT/SmallSet.h"
147 #include "llvm/ADT/SmallString.h"
148 #include "llvm/ADT/SmallVector.h"
149 #include "llvm/ADT/StringExtras.h"
150 #include "llvm/ADT/StringRef.h"
151 #include "llvm/ADT/Triple.h"
152 #include "llvm/Analysis/TargetLibraryInfo.h"
153 #include "llvm/IR/Argument.h"
154 #include "llvm/IR/Attributes.h"
155 #include "llvm/IR/BasicBlock.h"
156 #include "llvm/IR/CallSite.h"
157 #include "llvm/IR/CallingConv.h"
158 #include "llvm/IR/Constant.h"
159 #include "llvm/IR/Constants.h"
160 #include "llvm/IR/DataLayout.h"
161 #include "llvm/IR/DerivedTypes.h"
162 #include "llvm/IR/Function.h"
163 #include "llvm/IR/GlobalValue.h"
164 #include "llvm/IR/GlobalVariable.h"
165 #include "llvm/IR/IRBuilder.h"
166 #include "llvm/IR/InlineAsm.h"
167 #include "llvm/IR/InstVisitor.h"
168 #include "llvm/IR/InstrTypes.h"
169 #include "llvm/IR/Instruction.h"
170 #include "llvm/IR/Instructions.h"
171 #include "llvm/IR/IntrinsicInst.h"
172 #include "llvm/IR/Intrinsics.h"
173 #include "llvm/IR/LLVMContext.h"
174 #include "llvm/IR/MDBuilder.h"
175 #include "llvm/IR/Module.h"
176 #include "llvm/IR/Type.h"
177 #include "llvm/IR/Value.h"
178 #include "llvm/IR/ValueMap.h"
179 #include "llvm/Pass.h"
180 #include "llvm/Support/AtomicOrdering.h"
181 #include "llvm/Support/Casting.h"
182 #include "llvm/Support/CommandLine.h"
183 #include "llvm/Support/Compiler.h"
184 #include "llvm/Support/Debug.h"
185 #include "llvm/Support/ErrorHandling.h"
186 #include "llvm/Support/MathExtras.h"
187 #include "llvm/Support/raw_ostream.h"
188 #include "llvm/Transforms/Instrumentation.h"
189 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
190 #include "llvm/Transforms/Utils/Local.h"
191 #include "llvm/Transforms/Utils/ModuleUtils.h"
using namespace llvm;

#define DEBUG_TYPE "msan"

static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;
/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins(
    "msan-track-origins",
    cl::desc("Track origins (allocation sites) of poisoned memory"),
    cl::Hidden, cl::init(0));

static cl::opt<bool> ClKeepGoing("msan-keep-going",
                                 cl::desc("keep going after reporting a UMR"),
                                 cl::Hidden, cl::init(false));

static cl::opt<bool>
    ClPoisonStack("msan-poison-stack",
                  cl::desc("poison uninitialized stack variables"), cl::Hidden,
                  cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall(
    "msan-poison-stack-with-call",
    cl::desc("poison uninitialized stack variables with a call"), cl::Hidden,
    cl::init(false));

static cl::opt<int> ClPoisonStackPattern(
    "msan-poison-stack-pattern",
    cl::desc("poison uninitialized stack variables with the given pattern"),
    cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
                                   cl::desc("poison undef temps"), cl::Hidden,
                                   cl::init(true));

static cl::opt<bool>
    ClHandleICmp("msan-handle-icmp",
                 cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
                 cl::Hidden, cl::init(true));

static cl::opt<bool>
    ClHandleICmpExact("msan-handle-icmp-exact",
                      cl::desc("exact handling of relational integer ICmp"),
                      cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));

// When compiling the Linux kernel, we sometimes see false positives related to
// MSan being unable to understand that inline assembly calls may initialize
// the memory.
// This flag makes the compiler conservatively unpoison every memory location
// passed into an assembly call. Note that this may cause false positives.
// Because it's impossible to figure out the array sizes, we can only unpoison
// the first sizeof(type) bytes for each type* pointer.
// The instrumentation is only enabled in KMSAN builds, and only if
// -msan-handle-asm-conservative is on. This is done because we may want to
// quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,

// This flag controls whether we check the shadow of the address
// operand of load or store. Such bugs are very rare, since load from
// a garbage address typically results in SEGV, but still happen
// (e.g. only lower bits of address are garbage, or the access happens
// early at program startup where malloc-ed memory is more likely to
// be zeroed). As of 2012-08-28 this flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress(
    "msan-check-access-address",
    cl::desc("report accesses through a pointer which has poisoned shadow"),
    cl::Hidden, cl::init(true));

static cl::opt<bool> ClDumpStrictInstructions(
    "msan-dump-strict-instructions",
    cl::desc("print out instructions with default strict semantics"),
    cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));

static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));

// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow(
    "msan-check-constant-shadow",
    cl::desc("Insert checks for constant shadow values"), cl::Hidden,
    cl::init(false));

// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool>
    ClWithComdat("msan-with-comdat",
                 cl::desc("Place MSan constructors in comdat sections"),
                 cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
                                   cl::desc("Define custom MSan AndMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
                                   cl::desc("Define custom MSan XorMask"),
                                   cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
                                      cl::desc("Define custom MSan ShadowBase"),
                                      cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
                                      cl::desc("Define custom MSan OriginBase"),
                                      cl::Hidden, cl::init(0));
static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";

namespace {

// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
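//
// For illustration, with the default Linux/x86_64 parameters defined below
// (AndMask = 0, XorMask = 0x500000000000, ShadowBase = 0,
// OriginBase = 0x100000000000):
//   Addr   = 0x700000001000
//   Offset = (0x700000001000 & ~0) ^ 0x500000000000 = 0x200000001000
//   Shadow = 0x200000001000
//   Origin = 0x100000000000 + 0x200000001000 = 0x300000001000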
struct MemoryMapParams {
  uint64_t AndMask;
  uint64_t XorMask;
  uint64_t ShadowBase;
  uint64_t OriginBase;
};

struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace
static const MemoryMapParams Linux_I386_MemoryMapParams = {
    0x000080000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x000040000000, // OriginBase
};

static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
    0x400000000000, // AndMask
    0,              // XorMask (not used)
    0,              // ShadowBase (not used)
    0x200000000000, // OriginBase
#else
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
#endif
};

static const MemoryMapParams Linux_MIPS64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x008000000000, // XorMask
    0,              // ShadowBase (not used)
    0x002000000000, // OriginBase
};

static const MemoryMapParams Linux_PowerPC64_MemoryMapParams = {
    0xE00000000000, // AndMask
    0x100000000000, // XorMask
    0x080000000000, // ShadowBase
    0x1C0000000000, // OriginBase
};

static const MemoryMapParams Linux_AArch64_MemoryMapParams = {
    0,             // AndMask (not used)
    0x06000000000, // XorMask
    0,             // ShadowBase (not used)
    0x01000000000, // OriginBase
};

static const MemoryMapParams FreeBSD_I386_MemoryMapParams = {
    0x000180000000, // AndMask
    0x000040000000, // XorMask
    0x000020000000, // ShadowBase
    0x000700000000, // OriginBase
};

static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
    0xc00000000000, // AndMask
    0x200000000000, // XorMask
    0x100000000000, // ShadowBase
    0x380000000000, // OriginBase
};

static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
    0,              // AndMask (not used)
    0x500000000000, // XorMask
    0,              // ShadowBase (not used)
    0x100000000000, // OriginBase
};

static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {
    &Linux_I386_MemoryMapParams,
    &Linux_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams = {
    nullptr,
    &Linux_MIPS64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams = {
    nullptr,
    &Linux_PowerPC64_MemoryMapParams,
};

static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams = {
    nullptr,
    &Linux_AArch64_MemoryMapParams,
};

static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams = {
    &FreeBSD_I386_MemoryMapParams,
    &FreeBSD_X86_64_MemoryMapParams,
};

static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams = {
    nullptr,
    &NetBSD_X86_64_MemoryMapParams,
};
namespace {

/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
public:
  MemorySanitizer(Module &M, MemorySanitizerOptions Options) {
    this->CompileKernel =
        ClEnableKmsan.getNumOccurrences() > 0 ? ClEnableKmsan : Options.Kernel;
    if (ClTrackOrigins.getNumOccurrences() > 0)
      this->TrackOrigins = ClTrackOrigins;
    else
      this->TrackOrigins = this->CompileKernel ? 2 : Options.TrackOrigins;
    this->Recover = ClKeepGoing.getNumOccurrences() > 0
                        ? ClKeepGoing
                        : (this->CompileKernel | Options.Recover);
  }
  // MSan cannot be moved or copied because of MapParams.
  MemorySanitizer(MemorySanitizer &&) = delete;
  MemorySanitizer &operator=(MemorySanitizer &&) = delete;
  MemorySanitizer(const MemorySanitizer &) = delete;
  MemorySanitizer &operator=(const MemorySanitizer &) = delete;

  bool sanitizeFunction(Function &F, TargetLibraryInfo &TLI);

private:
  friend struct MemorySanitizerVisitor;
  friend struct VarArgAMD64Helper;
  friend struct VarArgMIPS64Helper;
  friend struct VarArgAArch64Helper;
  friend struct VarArgPowerPC64Helper;

  void initializeModule(Module &M);
  void initializeCallbacks(Module &M);
  void createKernelApi(Module &M);
  void createUserspaceApi(Module &M);

  /// True if we're compiling the Linux kernel.
  bool CompileKernel;

  /// Track origins (allocation points) of uninitialized values.
  int TrackOrigins;
  bool Recover;

  LLVMContext *C;
  Type *IntptrTy;
  Type *OriginTy;
  // XxxTLS variables represent the per-thread state in MSan and the per-task
  // state in KMSAN.
  // For the userspace these point to thread-local globals. In the kernel land
  // they point to the members of a per-task struct obtained via a call to
  // __msan_get_context_state().

  /// Thread-local shadow storage for function parameters.
  Value *ParamTLS;

  /// Thread-local origin storage for function parameters.
  Value *ParamOriginTLS;

  /// Thread-local shadow storage for function return value.
  Value *RetvalTLS;

  /// Thread-local origin storage for function return value.
  Value *RetvalOriginTLS;

  /// Thread-local shadow storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgTLS;

  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;

  /// Thread-local storage for the size of the va_arg overflow area
  /// (x86_64-specific).
  Value *VAArgOverflowSizeTLS;

  /// Thread-local space used to pass origin value to the UMR reporting
  /// function.
  Value *OriginTLS;

  /// Are the instrumentation callbacks set up?
  bool CallbacksInitialized = false;

  /// The run-time callback to print a warning.
  FunctionCallee WarningFn;

  // These arrays are indexed by log2(AccessSize).
  FunctionCallee MaybeWarningFn[kNumberOfAccessSizes];
  FunctionCallee MaybeStoreOriginFn[kNumberOfAccessSizes];

  /// Run-time helper that generates a new origin value for a stack
  /// allocation.
  FunctionCallee MsanSetAllocaOrigin4Fn;

  /// Run-time helper that poisons stack on function entry.
  FunctionCallee MsanPoisonStackFn;

  /// Run-time helper that records a store (or any event) of an
  /// uninitialized value and returns an updated origin id encoding this info.
  FunctionCallee MsanChainOriginFn;

  /// MSan runtime replacements for memmove, memcpy and memset.
  FunctionCallee MemmoveFn, MemcpyFn, MemsetFn;

  /// KMSAN callback for task-local function argument shadow.
  StructType *MsanContextStateTy;
  FunctionCallee MsanGetContextStateFn;

  /// Functions for poisoning/unpoisoning local variables.
  FunctionCallee MsanPoisonAllocaFn, MsanUnpoisonAllocaFn;

  /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
  /// pointers.
  FunctionCallee MsanMetadataPtrForLoadN, MsanMetadataPtrForStoreN;
  FunctionCallee MsanMetadataPtrForLoad_1_8[4];
  FunctionCallee MsanMetadataPtrForStore_1_8[4];
  FunctionCallee MsanInstrumentAsmStoreFn;

  /// Helper to choose between different MsanMetadataPtrXxx().
  FunctionCallee getKmsanShadowOriginAccessFn(bool isStore, int size);

  /// Memory map parameters used in application-to-shadow calculation.
  const MemoryMapParams *MapParams;

  /// Custom memory map parameters used when -msan-shadow-base or
  /// -msan-origin-base is provided.
  MemoryMapParams CustomMapParams;

  MDNode *ColdCallWeights;

  /// Branch weights for origin store.
  MDNode *OriginStoreWeights;

  /// An empty volatile inline asm that prevents callback merge.
  InlineAsm *EmptyAsm;

  Function *MsanCtorFunction;
};
/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
  // Pass identification, replacement for typeid.
  static char ID;

  MemorySanitizerLegacyPass(MemorySanitizerOptions Options = {})
      : FunctionPass(ID), Options(Options) {}
  StringRef getPassName() const override { return "MemorySanitizerLegacyPass"; }

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.addRequired<TargetLibraryInfoWrapperPass>();
  }

  bool runOnFunction(Function &F) override {
    return MSan->sanitizeFunction(
        F, getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F));
  }

  bool doInitialization(Module &M) override;

  Optional<MemorySanitizer> MSan;
  MemorySanitizerOptions Options;
};

} // end anonymous namespace
PreservedAnalyses MemorySanitizerPass::run(Function &F,
                                           FunctionAnalysisManager &FAM) {
  MemorySanitizer Msan(*F.getParent(), Options);
  if (Msan.sanitizeFunction(F, FAM.getResult<TargetLibraryAnalysis>(F)))
    return PreservedAnalyses::none();
  return PreservedAnalyses::all();
}

char MemorySanitizerLegacyPass::ID = 0;

INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass, "msan",
                      "MemorySanitizer: detects uninitialized reads.", false,
                      false)
INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
INITIALIZE_PASS_END(MemorySanitizerLegacyPass, "msan",
                    "MemorySanitizer: detects uninitialized reads.", false,
                    false)

FunctionPass *
llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options) {
  return new MemorySanitizerLegacyPass(Options);
}
/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. Runtime uses first 4 bytes of the string to store the
/// frame ID, so the string needs to be mutable.
static GlobalVariable *createPrivateNonConstGlobalForString(Module &M,
                                                            StringRef Str) {
  Constant *StrConst = ConstantDataArray::getString(M.getContext(), Str);
  return new GlobalVariable(M, StrConst->getType(), /*isConstant=*/false,
                            GlobalValue::PrivateLinkage, StrConst, "");
}
/// Create KMSAN API callbacks.
void MemorySanitizer::createKernelApi(Module &M) {
  IRBuilder<> IRB(*C);

  // These will be initialized in insertKmsanPrologue().
  RetvalTLS = nullptr;
  RetvalOriginTLS = nullptr;
  ParamTLS = nullptr;
  ParamOriginTLS = nullptr;
  VAArgTLS = nullptr;
  VAArgOriginTLS = nullptr;
  VAArgOverflowSizeTLS = nullptr;
  // OriginTLS is unused in the kernel.
  OriginTLS = nullptr;

  // __msan_warning() in the kernel takes an origin.
  WarningFn = M.getOrInsertFunction("__msan_warning", IRB.getVoidTy(),
                                    IRB.getInt32Ty());
  // Requests the per-task context state (kmsan_context_state*) from the
  // runtime library.
  MsanContextStateTy = StructType::get(
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8),
      ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8), /* va_arg_origin */
      IRB.getInt64Ty(), ArrayType::get(OriginTy, kParamTLSSize / 4), OriginTy,
      OriginTy);
  MsanGetContextStateFn = M.getOrInsertFunction(
      "__msan_get_context_state", PointerType::get(MsanContextStateTy, 0));

  Type *RetTy = StructType::get(PointerType::get(IRB.getInt8Ty(), 0),
                                PointerType::get(IRB.getInt32Ty(), 0));

  for (int ind = 0, size = 1; ind < 4; ind++, size <<= 1) {
    std::string name_load =
        "__msan_metadata_ptr_for_load_" + std::to_string(size);
    std::string name_store =
        "__msan_metadata_ptr_for_store_" + std::to_string(size);
    MsanMetadataPtrForLoad_1_8[ind] = M.getOrInsertFunction(
        name_load, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
    MsanMetadataPtrForStore_1_8[ind] = M.getOrInsertFunction(
        name_store, RetTy, PointerType::get(IRB.getInt8Ty(), 0));
  }

  MsanMetadataPtrForLoadN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_load_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());
  MsanMetadataPtrForStoreN = M.getOrInsertFunction(
      "__msan_metadata_ptr_for_store_n", RetTy,
      PointerType::get(IRB.getInt8Ty(), 0), IRB.getInt64Ty());

  // Functions for poisoning and unpoisoning memory.
  MsanPoisonAllocaFn =
      M.getOrInsertFunction("__msan_poison_alloca", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy, IRB.getInt8PtrTy());
  MsanUnpoisonAllocaFn = M.getOrInsertFunction(
      "__msan_unpoison_alloca", IRB.getVoidTy(), IRB.getInt8PtrTy(), IntptrTy);
}
static Constant *getOrInsertGlobal(Module &M, StringRef Name, Type *Ty) {
  return M.getOrInsertGlobal(Name, Ty, [&] {
    return new GlobalVariable(M, Ty, false, GlobalVariable::ExternalLinkage,
                              nullptr, Name, nullptr,
                              GlobalVariable::InitialExecTLSModel);
  });
}
/// Insert declarations for userspace-specific functions and globals.
void MemorySanitizer::createUserspaceApi(Module &M) {
  IRBuilder<> IRB(*C);

  // Create the callback.
  // FIXME: this function should have "Cold" calling conv,
  // which is not yet implemented.
  StringRef WarningFnName = Recover ? "__msan_warning"
                                    : "__msan_warning_noreturn";
  WarningFn = M.getOrInsertFunction(WarningFnName, IRB.getVoidTy());

  // Create the global TLS variables.
  RetvalTLS =
      getOrInsertGlobal(M, "__msan_retval_tls",
                        ArrayType::get(IRB.getInt64Ty(), kRetvalTLSSize / 8));

  RetvalOriginTLS = getOrInsertGlobal(M, "__msan_retval_origin_tls", OriginTy);

  ParamTLS =
      getOrInsertGlobal(M, "__msan_param_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  ParamOriginTLS =
      getOrInsertGlobal(M, "__msan_param_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgTLS =
      getOrInsertGlobal(M, "__msan_va_arg_tls",
                        ArrayType::get(IRB.getInt64Ty(), kParamTLSSize / 8));

  VAArgOriginTLS =
      getOrInsertGlobal(M, "__msan_va_arg_origin_tls",
                        ArrayType::get(OriginTy, kParamTLSSize / 4));

  VAArgOverflowSizeTLS =
      getOrInsertGlobal(M, "__msan_va_arg_overflow_size_tls", IRB.getInt64Ty());
  OriginTLS = getOrInsertGlobal(M, "__msan_origin_tls", IRB.getInt32Ty());

  for (size_t AccessSizeIndex = 0; AccessSizeIndex < kNumberOfAccessSizes;
       AccessSizeIndex++) {
    unsigned AccessSize = 1 << AccessSizeIndex;
    std::string FunctionName = "__msan_maybe_warning_" + itostr(AccessSize);
    MaybeWarningFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt32Ty());

    FunctionName = "__msan_maybe_store_origin_" + itostr(AccessSize);
    MaybeStoreOriginFn[AccessSizeIndex] = M.getOrInsertFunction(
        FunctionName, IRB.getVoidTy(), IRB.getIntNTy(AccessSize * 8),
        IRB.getInt8PtrTy(), IRB.getInt32Ty());
  }

  MsanSetAllocaOrigin4Fn = M.getOrInsertFunction(
      "__msan_set_alloca_origin4", IRB.getVoidTy(), IRB.getInt8PtrTy(),
      IntptrTy, IRB.getInt8PtrTy(), IntptrTy);
  MsanPoisonStackFn =
      M.getOrInsertFunction("__msan_poison_stack", IRB.getVoidTy(),
                            IRB.getInt8PtrTy(), IntptrTy);
}
/// Insert extern declaration of runtime-provided functions and globals.
void MemorySanitizer::initializeCallbacks(Module &M) {
  // Only do this once.
  if (CallbacksInitialized)
    return;

  IRBuilder<> IRB(*C);
  // Initialize callbacks that are common for kernel and userspace
  // instrumentation.
  MsanChainOriginFn = M.getOrInsertFunction(
      "__msan_chain_origin", IRB.getInt32Ty(), IRB.getInt32Ty());
  MemmoveFn = M.getOrInsertFunction(
      "__msan_memmove", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IRB.getInt8PtrTy(), IntptrTy);
  MemcpyFn = M.getOrInsertFunction(
      "__msan_memcpy", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt8PtrTy(),
      IntptrTy);
  MemsetFn = M.getOrInsertFunction(
      "__msan_memset", IRB.getInt8PtrTy(), IRB.getInt8PtrTy(), IRB.getInt32Ty(),
      IntptrTy);
  // We insert an empty inline asm after __msan_report* to avoid callback merge.
  EmptyAsm = InlineAsm::get(FunctionType::get(IRB.getVoidTy(), false),
                            StringRef(""), StringRef(""),
                            /*hasSideEffects=*/true);

  MsanInstrumentAsmStoreFn =
      M.getOrInsertFunction("__msan_instrument_asm_store", IRB.getVoidTy(),
                            PointerType::get(IRB.getInt8Ty(), 0), IntptrTy);

  if (CompileKernel)
    createKernelApi(M);
  else
    createUserspaceApi(M);

  CallbacksInitialized = true;
}
FunctionCallee
MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore, int size) {
  FunctionCallee *Fns =
      isStore ? MsanMetadataPtrForStore_1_8 : MsanMetadataPtrForLoad_1_8;
  switch (size) {
  case 1:
    return Fns[0];
  case 2:
    return Fns[1];
  case 4:
    return Fns[2];
  case 8:
    return Fns[3];
  default:
    return nullptr;
  }
}
/// Module-level initialization.
///
/// Inserts a call to __msan_init to the module's constructor list.
void MemorySanitizer::initializeModule(Module &M) {
  auto &DL = M.getDataLayout();

  bool ShadowPassed = ClShadowBase.getNumOccurrences() > 0;
  bool OriginPassed = ClOriginBase.getNumOccurrences() > 0;
  // Check the overrides first.
  if (ShadowPassed || OriginPassed) {
    CustomMapParams.AndMask = ClAndMask;
    CustomMapParams.XorMask = ClXorMask;
    CustomMapParams.ShadowBase = ClShadowBase;
    CustomMapParams.OriginBase = ClOriginBase;
    MapParams = &CustomMapParams;
  } else {
    Triple TargetTriple(M.getTargetTriple());
    switch (TargetTriple.getOS()) {
    case Triple::FreeBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = FreeBSD_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = FreeBSD_X86_MemoryMapParams.bits32;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::NetBSD:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = NetBSD_X86_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    case Triple::Linux:
      switch (TargetTriple.getArch()) {
      case Triple::x86_64:
        MapParams = Linux_X86_MemoryMapParams.bits64;
        break;
      case Triple::x86:
        MapParams = Linux_X86_MemoryMapParams.bits32;
        break;
      case Triple::mips64:
      case Triple::mips64el:
        MapParams = Linux_MIPS_MemoryMapParams.bits64;
        break;
      case Triple::ppc64:
      case Triple::ppc64le:
        MapParams = Linux_PowerPC_MemoryMapParams.bits64;
        break;
      case Triple::aarch64:
      case Triple::aarch64_be:
        MapParams = Linux_ARM_MemoryMapParams.bits64;
        break;
      default:
        report_fatal_error("unsupported architecture");
      }
      break;
    default:
      report_fatal_error("unsupported operating system");
    }
  }

  C = &(M.getContext());
  IRBuilder<> IRB(*C);
  IntptrTy = IRB.getIntPtrTy(DL);
  OriginTy = IRB.getInt32Ty();

  ColdCallWeights = MDBuilder(*C).createBranchWeights(1, 1000);
  OriginStoreWeights = MDBuilder(*C).createBranchWeights(1, 1000);

  if (!CompileKernel) {
    std::tie(MsanCtorFunction, std::ignore) =
        getOrCreateSanitizerCtorAndInitFunctions(
            M, kMsanModuleCtorName, kMsanInitName,
            /*InitArgTypes=*/{}, /*InitArgs=*/{},
            // This callback is invoked when the functions are created the first
            // time. Hook them into the global ctors list in that case:
            [&](Function *Ctor, FunctionCallee) {
              if (!ClWithComdat) {
                appendToGlobalCtors(M, Ctor, 0);
                return;
              }
              Comdat *MsanCtorComdat = M.getOrInsertComdat(kMsanModuleCtorName);
              Ctor->setComdat(MsanCtorComdat);
              appendToGlobalCtors(M, Ctor, 0, Ctor);
            });

    if (TrackOrigins)
      M.getOrInsertGlobal("__msan_track_origins", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(
            M, IRB.getInt32Ty(), true, GlobalValue::WeakODRLinkage,
            IRB.getInt32(TrackOrigins), "__msan_track_origins");
      });

    if (Recover)
      M.getOrInsertGlobal("__msan_keep_going", IRB.getInt32Ty(), [&] {
        return new GlobalVariable(M, IRB.getInt32Ty(), true,
                                  GlobalValue::WeakODRLinkage,
                                  IRB.getInt32(Recover), "__msan_keep_going");
      });
  }
}
bool MemorySanitizerLegacyPass::doInitialization(Module &M) {
  MSan.emplace(M, Options);
  return true;
}
namespace {

/// A helper class that handles instrumentation of VarArg
/// functions on a particular platform.
///
/// Implementations are expected to insert the instrumentation
/// necessary to propagate argument shadow through VarArg function
/// calls. Visit* methods are called during an InstVisitor pass over
/// the function, and should avoid creating new basic blocks. A new
/// instance of this class is created for each instrumented function.
struct VarArgHelper {
  virtual ~VarArgHelper() = default;

  /// Visit a CallSite.
  virtual void visitCallSite(CallSite &CS, IRBuilder<> &IRB) = 0;

  /// Visit a va_start call.
  virtual void visitVAStartInst(VAStartInst &I) = 0;

  /// Visit a va_copy call.
  virtual void visitVACopyInst(VACopyInst &I) = 0;

  /// Finalize function instrumentation.
  ///
  /// This method is called after visiting all interesting (see above)
  /// instructions in a function.
  virtual void finalizeInstrumentation() = 0;
};

struct MemorySanitizerVisitor;

} // end anonymous namespace
static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor);

static unsigned TypeSizeToSizeIndex(unsigned TypeSize) {
  if (TypeSize <= 8) return 0;
  return Log2_32_Ceil((TypeSize + 7) / 8);
}
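// For example, a TypeSize of 1..8 bits maps to index 0, 16 bits to index 1,
// 32 bits to index 2 and 64 bits to index 3, matching the four access sizes
// (1, 2, 4, 8 bytes) covered by kNumberOfAccessSizes.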
1002 /// This class does all the work for a given function. Store and Load
1003 /// instructions store and load corresponding shadow and origin
1004 /// values. Most instructions propagate shadow from arguments to their
1005 /// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
1008 struct MemorySanitizerVisitor
: public InstVisitor
<MemorySanitizerVisitor
> {
1010 MemorySanitizer
&MS
;
1011 SmallVector
<PHINode
*, 16> ShadowPHINodes
, OriginPHINodes
;
1012 ValueMap
<Value
*, Value
*> ShadowMap
, OriginMap
;
1013 std::unique_ptr
<VarArgHelper
> VAHelper
;
1014 const TargetLibraryInfo
*TLI
;
1015 BasicBlock
*ActualFnStart
;
1017 // The following flags disable parts of MSan instrumentation based on
1018 // blacklist contents and command-line options.
1020 bool PropagateShadow
;
1023 bool CheckReturnValue
;
1025 struct ShadowOriginAndInsertPoint
{
1028 Instruction
*OrigIns
;
1030 ShadowOriginAndInsertPoint(Value
*S
, Value
*O
, Instruction
*I
)
1031 : Shadow(S
), Origin(O
), OrigIns(I
) {}
1033 SmallVector
<ShadowOriginAndInsertPoint
, 16> InstrumentationList
;
1034 bool InstrumentLifetimeStart
= ClHandleLifetimeIntrinsics
;
1035 SmallSet
<AllocaInst
*, 16> AllocaSet
;
1036 SmallVector
<std::pair
<IntrinsicInst
*, AllocaInst
*>, 16> LifetimeStartList
;
1037 SmallVector
<StoreInst
*, 16> StoreList
;
1039 MemorySanitizerVisitor(Function
&F
, MemorySanitizer
&MS
,
1040 const TargetLibraryInfo
&TLI
)
1041 : F(F
), MS(MS
), VAHelper(CreateVarArgHelper(F
, MS
, *this)), TLI(&TLI
) {
1042 bool SanitizeFunction
= F
.hasFnAttribute(Attribute::SanitizeMemory
);
1043 InsertChecks
= SanitizeFunction
;
1044 PropagateShadow
= SanitizeFunction
;
1045 PoisonStack
= SanitizeFunction
&& ClPoisonStack
;
1046 PoisonUndef
= SanitizeFunction
&& ClPoisonUndef
;
1047 // FIXME: Consider using SpecialCaseList to specify a list of functions that
1048 // must always return fully initialized values. For now, we hardcode "main".
1049 CheckReturnValue
= SanitizeFunction
&& (F
.getName() == "main");
1051 MS
.initializeCallbacks(*F
.getParent());
1052 if (MS
.CompileKernel
)
1053 ActualFnStart
= insertKmsanPrologue(F
);
1055 ActualFnStart
= &F
.getEntryBlock();
1057 LLVM_DEBUG(if (!InsertChecks
) dbgs()
1058 << "MemorySanitizer is not inserting checks into '"
1059 << F
.getName() << "'\n");
1062 Value
*updateOrigin(Value
*V
, IRBuilder
<> &IRB
) {
1063 if (MS
.TrackOrigins
<= 1) return V
;
1064 return IRB
.CreateCall(MS
.MsanChainOriginFn
, V
);
1067 Value
*originToIntptr(IRBuilder
<> &IRB
, Value
*Origin
) {
1068 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1069 unsigned IntptrSize
= DL
.getTypeStoreSize(MS
.IntptrTy
);
1070 if (IntptrSize
== kOriginSize
) return Origin
;
1071 assert(IntptrSize
== kOriginSize
* 2);
1072 Origin
= IRB
.CreateIntCast(Origin
, MS
.IntptrTy
, /* isSigned */ false);
1073 return IRB
.CreateOr(Origin
, IRB
.CreateShl(Origin
, kOriginSize
* 8));
1076 /// Fill memory range with the given origin value.
1077 void paintOrigin(IRBuilder
<> &IRB
, Value
*Origin
, Value
*OriginPtr
,
1078 unsigned Size
, unsigned Alignment
) {
1079 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1080 unsigned IntptrAlignment
= DL
.getABITypeAlignment(MS
.IntptrTy
);
1081 unsigned IntptrSize
= DL
.getTypeStoreSize(MS
.IntptrTy
);
1082 assert(IntptrAlignment
>= kMinOriginAlignment
);
1083 assert(IntptrSize
>= kOriginSize
);
1086 unsigned CurrentAlignment
= Alignment
;
1087 if (Alignment
>= IntptrAlignment
&& IntptrSize
> kOriginSize
) {
1088 Value
*IntptrOrigin
= originToIntptr(IRB
, Origin
);
1089 Value
*IntptrOriginPtr
=
1090 IRB
.CreatePointerCast(OriginPtr
, PointerType::get(MS
.IntptrTy
, 0));
1091 for (unsigned i
= 0; i
< Size
/ IntptrSize
; ++i
) {
1092 Value
*Ptr
= i
? IRB
.CreateConstGEP1_32(MS
.IntptrTy
, IntptrOriginPtr
, i
)
1094 IRB
.CreateAlignedStore(IntptrOrigin
, Ptr
, CurrentAlignment
);
1095 Ofs
+= IntptrSize
/ kOriginSize
;
1096 CurrentAlignment
= IntptrAlignment
;
1100 for (unsigned i
= Ofs
; i
< (Size
+ kOriginSize
- 1) / kOriginSize
; ++i
) {
1102 i
? IRB
.CreateConstGEP1_32(MS
.OriginTy
, OriginPtr
, i
) : OriginPtr
;
1103 IRB
.CreateAlignedStore(Origin
, GEP
, CurrentAlignment
);
1104 CurrentAlignment
= kMinOriginAlignment
;
1108 void storeOrigin(IRBuilder
<> &IRB
, Value
*Addr
, Value
*Shadow
, Value
*Origin
,
1109 Value
*OriginPtr
, unsigned Alignment
, bool AsCall
) {
1110 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1111 unsigned OriginAlignment
= std::max(kMinOriginAlignment
, Alignment
);
1112 unsigned StoreSize
= DL
.getTypeStoreSize(Shadow
->getType());
1113 if (Shadow
->getType()->isAggregateType()) {
1114 paintOrigin(IRB
, updateOrigin(Origin
, IRB
), OriginPtr
, StoreSize
,
1117 Value
*ConvertedShadow
= convertToShadowTyNoVec(Shadow
, IRB
);
1118 Constant
*ConstantShadow
= dyn_cast_or_null
<Constant
>(ConvertedShadow
);
1119 if (ConstantShadow
) {
1120 if (ClCheckConstantShadow
&& !ConstantShadow
->isZeroValue())
1121 paintOrigin(IRB
, updateOrigin(Origin
, IRB
), OriginPtr
, StoreSize
,
1126 unsigned TypeSizeInBits
=
1127 DL
.getTypeSizeInBits(ConvertedShadow
->getType());
1128 unsigned SizeIndex
= TypeSizeToSizeIndex(TypeSizeInBits
);
1129 if (AsCall
&& SizeIndex
< kNumberOfAccessSizes
&& !MS
.CompileKernel
) {
1130 FunctionCallee Fn
= MS
.MaybeStoreOriginFn
[SizeIndex
];
1131 Value
*ConvertedShadow2
= IRB
.CreateZExt(
1132 ConvertedShadow
, IRB
.getIntNTy(8 * (1 << SizeIndex
)));
1133 IRB
.CreateCall(Fn
, {ConvertedShadow2
,
1134 IRB
.CreatePointerCast(Addr
, IRB
.getInt8PtrTy()),
1137 Value
*Cmp
= IRB
.CreateICmpNE(
1138 ConvertedShadow
, getCleanShadow(ConvertedShadow
), "_mscmp");
1139 Instruction
*CheckTerm
= SplitBlockAndInsertIfThen(
1140 Cmp
, &*IRB
.GetInsertPoint(), false, MS
.OriginStoreWeights
);
1141 IRBuilder
<> IRBNew(CheckTerm
);
1142 paintOrigin(IRBNew
, updateOrigin(Origin
, IRBNew
), OriginPtr
, StoreSize
,
1148 void materializeStores(bool InstrumentWithCalls
) {
1149 for (StoreInst
*SI
: StoreList
) {
1150 IRBuilder
<> IRB(SI
);
1151 Value
*Val
= SI
->getValueOperand();
1152 Value
*Addr
= SI
->getPointerOperand();
1153 Value
*Shadow
= SI
->isAtomic() ? getCleanShadow(Val
) : getShadow(Val
);
1154 Value
*ShadowPtr
, *OriginPtr
;
1155 Type
*ShadowTy
= Shadow
->getType();
1156 unsigned Alignment
= SI
->getAlignment();
1157 unsigned OriginAlignment
= std::max(kMinOriginAlignment
, Alignment
);
1158 std::tie(ShadowPtr
, OriginPtr
) =
1159 getShadowOriginPtr(Addr
, IRB
, ShadowTy
, Alignment
, /*isStore*/ true);
1161 StoreInst
*NewSI
= IRB
.CreateAlignedStore(Shadow
, ShadowPtr
, Alignment
);
1162 LLVM_DEBUG(dbgs() << " STORE: " << *NewSI
<< "\n");
1166 SI
->setOrdering(addReleaseOrdering(SI
->getOrdering()));
1168 if (MS
.TrackOrigins
&& !SI
->isAtomic())
1169 storeOrigin(IRB
, Addr
, Shadow
, getOrigin(Val
), OriginPtr
,
1170 OriginAlignment
, InstrumentWithCalls
);
1174 /// Helper function to insert a warning at IRB's current insert point.
1175 void insertWarningFn(IRBuilder
<> &IRB
, Value
*Origin
) {
1177 Origin
= (Value
*)IRB
.getInt32(0);
1178 if (MS
.CompileKernel
) {
1179 IRB
.CreateCall(MS
.WarningFn
, Origin
);
1181 if (MS
.TrackOrigins
) {
1182 IRB
.CreateStore(Origin
, MS
.OriginTLS
);
1184 IRB
.CreateCall(MS
.WarningFn
, {});
1186 IRB
.CreateCall(MS
.EmptyAsm
, {});
1187 // FIXME: Insert UnreachableInst if !MS.Recover?
1188 // This may invalidate some of the following checks and needs to be done
1192 void materializeOneCheck(Instruction
*OrigIns
, Value
*Shadow
, Value
*Origin
,
1194 IRBuilder
<> IRB(OrigIns
);
1195 LLVM_DEBUG(dbgs() << " SHAD0 : " << *Shadow
<< "\n");
1196 Value
*ConvertedShadow
= convertToShadowTyNoVec(Shadow
, IRB
);
1197 LLVM_DEBUG(dbgs() << " SHAD1 : " << *ConvertedShadow
<< "\n");
1199 Constant
*ConstantShadow
= dyn_cast_or_null
<Constant
>(ConvertedShadow
);
1200 if (ConstantShadow
) {
1201 if (ClCheckConstantShadow
&& !ConstantShadow
->isZeroValue()) {
1202 insertWarningFn(IRB
, Origin
);
1207 const DataLayout
&DL
= OrigIns
->getModule()->getDataLayout();
1209 unsigned TypeSizeInBits
= DL
.getTypeSizeInBits(ConvertedShadow
->getType());
1210 unsigned SizeIndex
= TypeSizeToSizeIndex(TypeSizeInBits
);
1211 if (AsCall
&& SizeIndex
< kNumberOfAccessSizes
&& !MS
.CompileKernel
) {
1212 FunctionCallee Fn
= MS
.MaybeWarningFn
[SizeIndex
];
1213 Value
*ConvertedShadow2
=
1214 IRB
.CreateZExt(ConvertedShadow
, IRB
.getIntNTy(8 * (1 << SizeIndex
)));
1215 IRB
.CreateCall(Fn
, {ConvertedShadow2
, MS
.TrackOrigins
&& Origin
1217 : (Value
*)IRB
.getInt32(0)});
1219 Value
*Cmp
= IRB
.CreateICmpNE(ConvertedShadow
,
1220 getCleanShadow(ConvertedShadow
), "_mscmp");
1221 Instruction
*CheckTerm
= SplitBlockAndInsertIfThen(
1223 /* Unreachable */ !MS
.Recover
, MS
.ColdCallWeights
);
1225 IRB
.SetInsertPoint(CheckTerm
);
1226 insertWarningFn(IRB
, Origin
);
1227 LLVM_DEBUG(dbgs() << " CHECK: " << *Cmp
<< "\n");
1231 void materializeChecks(bool InstrumentWithCalls
) {
1232 for (const auto &ShadowData
: InstrumentationList
) {
1233 Instruction
*OrigIns
= ShadowData
.OrigIns
;
1234 Value
*Shadow
= ShadowData
.Shadow
;
1235 Value
*Origin
= ShadowData
.Origin
;
1236 materializeOneCheck(OrigIns
, Shadow
, Origin
, InstrumentWithCalls
);
1238 LLVM_DEBUG(dbgs() << "DONE:\n" << F
);
1241 BasicBlock
*insertKmsanPrologue(Function
&F
) {
1243 SplitBlock(&F
.getEntryBlock(), F
.getEntryBlock().getFirstNonPHI());
1244 IRBuilder
<> IRB(F
.getEntryBlock().getFirstNonPHI());
1245 Value
*ContextState
= IRB
.CreateCall(MS
.MsanGetContextStateFn
, {});
1246 Constant
*Zero
= IRB
.getInt32(0);
1247 MS
.ParamTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1248 {Zero
, IRB
.getInt32(0)}, "param_shadow");
1249 MS
.RetvalTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1250 {Zero
, IRB
.getInt32(1)}, "retval_shadow");
1251 MS
.VAArgTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1252 {Zero
, IRB
.getInt32(2)}, "va_arg_shadow");
1253 MS
.VAArgOriginTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1254 {Zero
, IRB
.getInt32(3)}, "va_arg_origin");
1255 MS
.VAArgOverflowSizeTLS
=
1256 IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1257 {Zero
, IRB
.getInt32(4)}, "va_arg_overflow_size");
1258 MS
.ParamOriginTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1259 {Zero
, IRB
.getInt32(5)}, "param_origin");
1260 MS
.RetvalOriginTLS
=
1261 IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1262 {Zero
, IRB
.getInt32(6)}, "retval_origin");
1266 /// Add MemorySanitizer instrumentation to a function.
1267 bool runOnFunction() {
1268 // In the presence of unreachable blocks, we may see Phi nodes with
1269 // incoming nodes from such blocks. Since InstVisitor skips unreachable
1270 // blocks, such nodes will not have any shadow value associated with them.
1271 // It's easier to remove unreachable blocks than deal with missing shadow.
1272 removeUnreachableBlocks(F
);
1274 // Iterate all BBs in depth-first order and create shadow instructions
1275 // for all instructions (where applicable).
1276 // For PHI nodes we create dummy shadow PHIs which will be finalized later.
1277 for (BasicBlock
*BB
: depth_first(ActualFnStart
))
1280 // Finalize PHI nodes.
1281 for (PHINode
*PN
: ShadowPHINodes
) {
1282 PHINode
*PNS
= cast
<PHINode
>(getShadow(PN
));
1283 PHINode
*PNO
= MS
.TrackOrigins
? cast
<PHINode
>(getOrigin(PN
)) : nullptr;
1284 size_t NumValues
= PN
->getNumIncomingValues();
1285 for (size_t v
= 0; v
< NumValues
; v
++) {
1286 PNS
->addIncoming(getShadow(PN
, v
), PN
->getIncomingBlock(v
));
1287 if (PNO
) PNO
->addIncoming(getOrigin(PN
, v
), PN
->getIncomingBlock(v
));
1291 VAHelper
->finalizeInstrumentation();
1293 // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
1294 // instrumenting only allocas.
1295 if (InstrumentLifetimeStart
) {
1296 for (auto Item
: LifetimeStartList
) {
1297 instrumentAlloca(*Item
.second
, Item
.first
);
1298 AllocaSet
.erase(Item
.second
);
1301 // Poison the allocas for which we didn't instrument the corresponding
1302 // lifetime intrinsics.
1303 for (AllocaInst
*AI
: AllocaSet
)
1304 instrumentAlloca(*AI
);
1306 bool InstrumentWithCalls
= ClInstrumentationWithCallThreshold
>= 0 &&
1307 InstrumentationList
.size() + StoreList
.size() >
1308 (unsigned)ClInstrumentationWithCallThreshold
;
1310 // Insert shadow value checks.
1311 materializeChecks(InstrumentWithCalls
);
1313 // Delayed instrumentation of StoreInst.
1314 // This may not add new address checks.
1315 materializeStores(InstrumentWithCalls
);
1320 /// Compute the shadow type that corresponds to a given Value.
1321 Type
*getShadowTy(Value
*V
) {
1322 return getShadowTy(V
->getType());
1325 /// Compute the shadow type that corresponds to a given Type.
1326 Type
*getShadowTy(Type
*OrigTy
) {
1327 if (!OrigTy
->isSized()) {
1330 // For integer type, shadow is the same as the original type.
1331 // This may return weird-sized types like i1.
1332 if (IntegerType
*IT
= dyn_cast
<IntegerType
>(OrigTy
))
1334 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1335 if (VectorType
*VT
= dyn_cast
<VectorType
>(OrigTy
)) {
1336 uint32_t EltSize
= DL
.getTypeSizeInBits(VT
->getElementType());
1337 return VectorType::get(IntegerType::get(*MS
.C
, EltSize
),
1338 VT
->getNumElements());
1340 if (ArrayType
*AT
= dyn_cast
<ArrayType
>(OrigTy
)) {
1341 return ArrayType::get(getShadowTy(AT
->getElementType()),
1342 AT
->getNumElements());
1344 if (StructType
*ST
= dyn_cast
<StructType
>(OrigTy
)) {
1345 SmallVector
<Type
*, 4> Elements
;
1346 for (unsigned i
= 0, n
= ST
->getNumElements(); i
< n
; i
++)
1347 Elements
.push_back(getShadowTy(ST
->getElementType(i
)));
1348 StructType
*Res
= StructType::get(*MS
.C
, Elements
, ST
->isPacked());
1349 LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST
<< " ===> " << *Res
<< "\n");
1352 uint32_t TypeSize
= DL
.getTypeSizeInBits(OrigTy
);
1353 return IntegerType::get(*MS
.C
, TypeSize
);
1356 /// Flatten a vector type.
1357 Type
*getShadowTyNoVec(Type
*ty
) {
1358 if (VectorType
*vt
= dyn_cast
<VectorType
>(ty
))
1359 return IntegerType::get(*MS
.C
, vt
->getBitWidth());
  /// Convert a shadow value to its flattened variant.
1364 Value
*convertToShadowTyNoVec(Value
*V
, IRBuilder
<> &IRB
) {
1365 Type
*Ty
= V
->getType();
1366 Type
*NoVecTy
= getShadowTyNoVec(Ty
);
1367 if (Ty
== NoVecTy
) return V
;
1368 return IRB
.CreateBitCast(V
, NoVecTy
);
1371 /// Compute the integer shadow offset that corresponds to a given
1372 /// application address.
1374 /// Offset = (Addr & ~AndMask) ^ XorMask
1375 Value
*getShadowPtrOffset(Value
*Addr
, IRBuilder
<> &IRB
) {
1376 Value
*OffsetLong
= IRB
.CreatePointerCast(Addr
, MS
.IntptrTy
);
1378 uint64_t AndMask
= MS
.MapParams
->AndMask
;
1381 IRB
.CreateAnd(OffsetLong
, ConstantInt::get(MS
.IntptrTy
, ~AndMask
));
1383 uint64_t XorMask
= MS
.MapParams
->XorMask
;
1386 IRB
.CreateXor(OffsetLong
, ConstantInt::get(MS
.IntptrTy
, XorMask
));
1390 /// Compute the shadow and origin addresses corresponding to a given
1391 /// application address.
1393 /// Shadow = ShadowBase + Offset
1394 /// Origin = (OriginBase + Offset) & ~3ULL
1395 std::pair
<Value
*, Value
*> getShadowOriginPtrUserspace(Value
*Addr
,
1398 unsigned Alignment
) {
1399 Value
*ShadowOffset
= getShadowPtrOffset(Addr
, IRB
);
1400 Value
*ShadowLong
= ShadowOffset
;
1401 uint64_t ShadowBase
= MS
.MapParams
->ShadowBase
;
1402 if (ShadowBase
!= 0) {
1404 IRB
.CreateAdd(ShadowLong
,
1405 ConstantInt::get(MS
.IntptrTy
, ShadowBase
));
1408 IRB
.CreateIntToPtr(ShadowLong
, PointerType::get(ShadowTy
, 0));
1409 Value
*OriginPtr
= nullptr;
1410 if (MS
.TrackOrigins
) {
1411 Value
*OriginLong
= ShadowOffset
;
1412 uint64_t OriginBase
= MS
.MapParams
->OriginBase
;
1413 if (OriginBase
!= 0)
1414 OriginLong
= IRB
.CreateAdd(OriginLong
,
1415 ConstantInt::get(MS
.IntptrTy
, OriginBase
));
1416 if (Alignment
< kMinOriginAlignment
) {
1417 uint64_t Mask
= kMinOriginAlignment
- 1;
1419 IRB
.CreateAnd(OriginLong
, ConstantInt::get(MS
.IntptrTy
, ~Mask
));
1422 IRB
.CreateIntToPtr(OriginLong
, PointerType::get(MS
.OriginTy
, 0));
1424 return std::make_pair(ShadowPtr
, OriginPtr
);
1427 std::pair
<Value
*, Value
*>
1428 getShadowOriginPtrKernel(Value
*Addr
, IRBuilder
<> &IRB
, Type
*ShadowTy
,
1429 unsigned Alignment
, bool isStore
) {
1430 Value
*ShadowOriginPtrs
;
1431 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1432 int Size
= DL
.getTypeStoreSize(ShadowTy
);
1434 FunctionCallee Getter
= MS
.getKmsanShadowOriginAccessFn(isStore
, Size
);
1436 IRB
.CreatePointerCast(Addr
, PointerType::get(IRB
.getInt8Ty(), 0));
1438 ShadowOriginPtrs
= IRB
.CreateCall(Getter
, AddrCast
);
1440 Value
*SizeVal
= ConstantInt::get(MS
.IntptrTy
, Size
);
1441 ShadowOriginPtrs
= IRB
.CreateCall(isStore
? MS
.MsanMetadataPtrForStoreN
1442 : MS
.MsanMetadataPtrForLoadN
,
1443 {AddrCast
, SizeVal
});
1445 Value
*ShadowPtr
= IRB
.CreateExtractValue(ShadowOriginPtrs
, 0);
1446 ShadowPtr
= IRB
.CreatePointerCast(ShadowPtr
, PointerType::get(ShadowTy
, 0));
1447 Value
*OriginPtr
= IRB
.CreateExtractValue(ShadowOriginPtrs
, 1);
1449 return std::make_pair(ShadowPtr
, OriginPtr
);
1452 std::pair
<Value
*, Value
*> getShadowOriginPtr(Value
*Addr
, IRBuilder
<> &IRB
,
1456 std::pair
<Value
*, Value
*> ret
;
1457 if (MS
.CompileKernel
)
1458 ret
= getShadowOriginPtrKernel(Addr
, IRB
, ShadowTy
, Alignment
, isStore
);
1460 ret
= getShadowOriginPtrUserspace(Addr
, IRB
, ShadowTy
, Alignment
);
1464 /// Compute the shadow address for a given function argument.
1466 /// Shadow = ParamTLS+ArgOffset.
1467 Value
*getShadowPtrForArgument(Value
*A
, IRBuilder
<> &IRB
,
1469 Value
*Base
= IRB
.CreatePointerCast(MS
.ParamTLS
, MS
.IntptrTy
);
1471 Base
= IRB
.CreateAdd(Base
, ConstantInt::get(MS
.IntptrTy
, ArgOffset
));
1472 return IRB
.CreateIntToPtr(Base
, PointerType::get(getShadowTy(A
), 0),
1476 /// Compute the origin address for a given function argument.
1477 Value
*getOriginPtrForArgument(Value
*A
, IRBuilder
<> &IRB
,
1479 if (!MS
.TrackOrigins
)
1481 Value
*Base
= IRB
.CreatePointerCast(MS
.ParamOriginTLS
, MS
.IntptrTy
);
1483 Base
= IRB
.CreateAdd(Base
, ConstantInt::get(MS
.IntptrTy
, ArgOffset
));
1484 return IRB
.CreateIntToPtr(Base
, PointerType::get(MS
.OriginTy
, 0),
1488 /// Compute the shadow address for a retval.
1489 Value
*getShadowPtrForRetval(Value
*A
, IRBuilder
<> &IRB
) {
1490 return IRB
.CreatePointerCast(MS
.RetvalTLS
,
1491 PointerType::get(getShadowTy(A
), 0),
1495 /// Compute the origin address for a retval.
1496 Value
*getOriginPtrForRetval(IRBuilder
<> &IRB
) {
1497 // We keep a single origin for the entire retval. Might be too optimistic.
1498 return MS
.RetvalOriginTLS
;
1501 /// Set SV to be the shadow value for V.
1502 void setShadow(Value
*V
, Value
*SV
) {
1503 assert(!ShadowMap
.count(V
) && "Values may only have one shadow");
1504 ShadowMap
[V
] = PropagateShadow
? SV
: getCleanShadow(V
);
1507 /// Set Origin to be the origin value for V.
1508 void setOrigin(Value
*V
, Value
*Origin
) {
1509 if (!MS
.TrackOrigins
) return;
1510 assert(!OriginMap
.count(V
) && "Values may only have one origin");
1511 LLVM_DEBUG(dbgs() << "ORIGIN: " << *V
<< " ==> " << *Origin
<< "\n");
1512 OriginMap
[V
] = Origin
;
1515 Constant
*getCleanShadow(Type
*OrigTy
) {
1516 Type
*ShadowTy
= getShadowTy(OrigTy
);
1519 return Constant::getNullValue(ShadowTy
);
1522 /// Create a clean shadow value for a given value.
1524 /// Clean shadow (all zeroes) means all bits of the value are defined
1526 Constant
*getCleanShadow(Value
*V
) {
1527 return getCleanShadow(V
->getType());
1530 /// Create a dirty shadow of a given shadow type.
1531 Constant
*getPoisonedShadow(Type
*ShadowTy
) {
1533 if (isa
<IntegerType
>(ShadowTy
) || isa
<VectorType
>(ShadowTy
))
1534 return Constant::getAllOnesValue(ShadowTy
);
1535 if (ArrayType
*AT
= dyn_cast
<ArrayType
>(ShadowTy
)) {
1536 SmallVector
<Constant
*, 4> Vals(AT
->getNumElements(),
1537 getPoisonedShadow(AT
->getElementType()));
1538 return ConstantArray::get(AT
, Vals
);
1540 if (StructType
*ST
= dyn_cast
<StructType
>(ShadowTy
)) {
1541 SmallVector
<Constant
*, 4> Vals
;
1542 for (unsigned i
= 0, n
= ST
->getNumElements(); i
< n
; i
++)
1543 Vals
.push_back(getPoisonedShadow(ST
->getElementType(i
)));
1544 return ConstantStruct::get(ST
, Vals
);
1546 llvm_unreachable("Unexpected shadow type");
1549 /// Create a dirty shadow for a given value.
1550 Constant
*getPoisonedShadow(Value
*V
) {
1551 Type
*ShadowTy
= getShadowTy(V
);
1554 return getPoisonedShadow(ShadowTy
);
1557 /// Create a clean (zero) origin.
1558 Value
*getCleanOrigin() {
1559 return Constant::getNullValue(MS
.OriginTy
);
  /// Get the shadow value for a given Value.
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
  Value *getShadow(Value *V) {
    if (!PropagateShadow) return getCleanShadow(V);
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanShadow(V);
      // For instructions the shadow is already stored in the map.
      Value *Shadow = ShadowMap[V];
      if (!Shadow) {
        LLVM_DEBUG(dbgs() << "No shadow: " << *V << "\n" << *(I->getParent()));
        (void)I;
        assert(Shadow && "No shadow for a value");
      }
      return Shadow;
    }
    if (UndefValue *U = dyn_cast<UndefValue>(V)) {
      Value *AllOnes = PoisonUndef ? getPoisonedShadow(V) : getCleanShadow(V);
      LLVM_DEBUG(dbgs() << "Undef: " << *U << " ==> " << *AllOnes << "\n");
      (void)U;
      return AllOnes;
    }
    if (Argument *A = dyn_cast<Argument>(V)) {
      // For arguments we compute the shadow on demand and store it in the map.
      Value **ShadowPtr = &ShadowMap[V];
      if (*ShadowPtr)
        return *ShadowPtr;
      Function *F = A->getParent();
      IRBuilder<> EntryIRB(ActualFnStart->getFirstNonPHI());
      unsigned ArgOffset = 0;
      const DataLayout &DL = F->getParent()->getDataLayout();
      for (auto &FArg : F->args()) {
        if (!FArg.getType()->isSized()) {
          LLVM_DEBUG(dbgs() << "Arg is not sized\n");
          continue;
        }
        unsigned Size =
            FArg.hasByValAttr()
                ? DL.getTypeAllocSize(FArg.getType()->getPointerElementType())
                : DL.getTypeAllocSize(FArg.getType());
        if (A == &FArg) {
          bool Overflow = ArgOffset + Size > kParamTLSSize;
          Value *Base = getShadowPtrForArgument(&FArg, EntryIRB, ArgOffset);
          if (FArg.hasByValAttr()) {
            // ByVal pointer itself has clean shadow. We copy the actual
            // argument shadow to the underlying memory.
            // Figure out maximal valid memcpy alignment.
            unsigned ArgAlign = FArg.getParamAlignment();
            if (ArgAlign == 0) {
              Type *EltType = A->getType()->getPointerElementType();
              ArgAlign = DL.getABITypeAlignment(EltType);
            }
            Value *CpShadowPtr =
                getShadowOriginPtr(V, EntryIRB, EntryIRB.getInt8Ty(), ArgAlign,
                                   /*isStore*/ true)
                    .first;
            // TODO(glider): need to copy origins.
            if (Overflow) {
              // ParamTLS overflow.
              EntryIRB.CreateMemSet(
                  CpShadowPtr, Constant::getNullValue(EntryIRB.getInt8Ty()),
                  Size, ArgAlign);
            } else {
              unsigned CopyAlign = std::min(ArgAlign, kShadowTLSAlignment);
              Value *Cpy = EntryIRB.CreateMemCpy(CpShadowPtr, CopyAlign, Base,
                                                 CopyAlign, Size);
              LLVM_DEBUG(dbgs() << "  ByValCpy: " << *Cpy << "\n");
              (void)Cpy;
            }
            *ShadowPtr = getCleanShadow(V);
          } else {
            if (Overflow) {
              // ParamTLS overflow.
              *ShadowPtr = getCleanShadow(V);
            } else {
              *ShadowPtr = EntryIRB.CreateAlignedLoad(getShadowTy(&FArg), Base,
                                                      kShadowTLSAlignment);
            }
          }
          LLVM_DEBUG(dbgs()
                     << "  ARG:  " << FArg << " ==> " << **ShadowPtr << "\n");
          if (MS.TrackOrigins && !Overflow) {
            Value *OriginPtr =
                getOriginPtrForArgument(&FArg, EntryIRB, ArgOffset);
            setOrigin(A, EntryIRB.CreateLoad(MS.OriginTy, OriginPtr));
          } else {
            setOrigin(A, getCleanOrigin());
          }
        }
        ArgOffset += alignTo(Size, kShadowTLSAlignment);
      }
      assert(*ShadowPtr && "Could not find shadow for an argument");
      return *ShadowPtr;
    }
    // For everything else the shadow is zero.
    return getCleanShadow(V);
  }
  /// Get the shadow for i-th argument of the instruction I.
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }

  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }

  /// Get the origin for i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }
  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning in runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    assert(Shadow);
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) &&
           "Can only insert checks for integer and vector shadow types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }

  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning in runtime if the value is not fully defined.
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }
  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Release:
        return AtomicOrdering::Release;
      case AtomicOrdering::Acquire:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }

  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Acquire:
        return AtomicOrdering::Acquire;
      case AtomicOrdering::Release:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }
  // ------------------- Visitors.
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (!I.getMetadata("nosanitize"))
      InstVisitor<MemorySanitizerVisitor>::visit(I);
  }
  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = I.getAlignment();
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(
            &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }
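
  // Illustrative sketch (not from the original source): for a load such as
  //   %x = load i32, i32* %p, align 4
  // the instrumentation above conceptually emits something like
  //   %sp    = <shadow address computed from %p by getShadowOriginPtr>
  //   %_msld = load i32, i32* %sp, align 4   ; shadow of %x
  // plus an origin load when origin tracking is enabled and an address check
  // when -msan-check-access-address is set. The exact IR depends on the
  // target's shadow mapping.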
  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);
  }
  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(),
                                          /*Alignment*/ 1, /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we can not
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(I.getOperand(1), &I);

    IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }
  // Vector manipulation.
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }
  // Casts.
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed) between
    // a musttail call and a ret, don't instrument. New instructions are not
    // allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst& I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst& I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst& I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst& I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst& I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst& I) { handleShadowOr(I); }
  /// Propagate shadow for bitwise AND.
  ///
  /// This code is exact, i.e. if, for example, a bit in the left argument
  /// is defined and 0, then neither the value nor the definedness of the
  /// corresponding bit in B affects the resulting shadow.
  void visitAnd(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    //  "And" of 0 and a poisoned value results in unpoisoned value.
    //  1&1 => 1;  0&1 => 0;  p&1 => p;
    //  1&0 => 0;  0&0 => 0;  p&0 => 0;
    //  1&p => p;  0&p => 0;  p&p => p;
    //  S = (S1 & S2) | (V1 & S2) | (S1 & V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }

  void visitOr(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    //  "Or" of 1 and a poisoned value results in unpoisoned value.
    //  1|1 => 1;  0|1 => 1;  p|1 => 1;
    //  1|0 => 1;  0|0 => 0;  p|0 => p;
    //  1|p => 1;  0|p => p;  p|p => p;
    //  S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = IRB.CreateNot(I.getOperand(0));
    Value *V2 = IRB.CreateNot(I.getOperand(1));
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }
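
  // Worked example for the AND rule above, one bit at a time (not from the
  // original source). Let V1 = 0 with S1 = 0 (defined zero) and V2 be
  // anything with S2 = 1 (undefined). Then
  //   S = (S1 & S2) | (V1 & S2) | (S1 & V2) = 0 | 0 | 0 = 0,
  // i.e. the result bit is defined, because 0 & x == 0 regardless of x.
  // If instead V1 = 1 with S1 = 0, then S = 0 | 1 | 0 = 1: the result
  // depends on the undefined bit and stays poisoned.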
  /// Default propagation of shadow and/or origin.
  ///
  /// This class implements the general case of shadow propagation, used in all
  /// cases where we don't know and/or don't care about what the operation
  /// actually does. It converts all input shadow values to a common type
  /// (extending or truncating as necessary), and bitwise OR's them.
  ///
  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
  /// fully initialized), and less prone to false positives.
  ///
  /// This class also implements the general case of origin propagation. For a
  /// Nary operation, result origin is set to the origin of an argument that is
  /// not entirely initialized. If there is more than one such argument, the
  /// rightmost of them is picked. It does not matter which one is picked if all
  /// arguments are initialized.
  template <bool CombineShadow>
  class Combiner {
    Value *Shadow = nullptr;
    Value *Origin = nullptr;
    IRBuilder<> &IRB;
    MemorySanitizerVisitor *MSV;

  public:
    Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
        : IRB(IRB), MSV(MSV) {}

    /// Add a pair of shadow and origin values to the mix.
    Combiner &Add(Value *OpShadow, Value *OpOrigin) {
      if (CombineShadow) {
        assert(OpShadow);
        if (!Shadow)
          Shadow = OpShadow;
        else {
          OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
          Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
        }
      }

      if (MSV->MS.TrackOrigins) {
        assert(OpOrigin);
        if (!Origin) {
          Origin = OpOrigin;
        } else {
          Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
          // No point in adding something that might result in 0 origin value.
          if (!ConstOrigin || !ConstOrigin->isNullValue()) {
            Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB);
            Value *Cond =
                IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
            Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
          }
        }
      }
      return *this;
    }

    /// Add an application value to the mix.
    Combiner &Add(Value *V) {
      Value *OpShadow = MSV->getShadow(V);
      Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
      return Add(OpShadow, OpOrigin);
    }

    /// Set the current combined values as the given instruction's shadow
    /// and origin.
    void Done(Instruction *I) {
      if (CombineShadow) {
        assert(Shadow);
        Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
        MSV->setShadow(I, Shadow);
      }
      if (MSV->MS.TrackOrigins) {
        assert(Origin);
        MSV->setOrigin(I, Origin);
      }
    }
  };

  using ShadowAndOriginCombiner = Combiner<true>;
  using OriginCombiner = Combiner<false>;
  /// Propagate origin for arbitrary operation.
  void setOriginForNaryOp(Instruction &I) {
    if (!MS.TrackOrigins) return;
    IRBuilder<> IRB(&I);
    OriginCombiner OC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      OC.Add(OI->get());
    OC.Done(&I);
  }

  size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
    assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
           "Vector of pointers is not a valid shadow type");
    return Ty->isVectorTy() ?
      Ty->getVectorNumElements() * Ty->getScalarSizeInBits() :
      Ty->getPrimitiveSizeInBits();
  }

  /// Cast between two shadow types, extending or truncating as
  /// necessary.
  Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                          bool Signed = false) {
    Type *srcTy = V->getType();
    size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
    size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
    if (srcSizeInBits > 1 && dstSizeInBits == 1)
      return IRB.CreateICmpNE(V, getCleanShadow(V));

    if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
      return IRB.CreateIntCast(V, dstTy, Signed);
    if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
        dstTy->getVectorNumElements() == srcTy->getVectorNumElements())
      return IRB.CreateIntCast(V, dstTy, Signed);
    Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
    Value *V2 =
        IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
    return IRB.CreateBitCast(V2, dstTy);
    // TODO: handle struct types.
  }

  /// Cast an application value to the type of its own shadow.
  Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (V->getType() == ShadowTy)
      return V;
    if (V->getType()->isPtrOrPtrVectorTy())
      return IRB.CreatePtrToInt(V, ShadowTy);
    else
      return IRB.CreateBitCast(V, ShadowTy);
  }

  /// Propagate shadow for arbitrary operation.
  void handleShadowOr(Instruction &I) {
    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      SC.Add(OI->get());
    SC.Done(&I);
  }

  void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }
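
  // Minimal example of the approximation made by handleShadowOr (not from the
  // original source): for
  //   %r = add i32 %a, %b
  // the result shadow is simply Sa | Sb. At whole-value granularity this is
  // what we want: the result is treated as (partially) poisoned whenever any
  // input is, without the cost of modeling exactly how carries spread
  // uninitialized bits.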
  // Handle multiplication by constant.
  //
  // Handle a special case of multiplication by constant that may have one or
  // more zeros in the lower bits. This makes corresponding number of lower bits
  // of the result zero as well. We model it by shifting the other operand
  // shadow left by the required number of bits. Effectively, we transform
  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
  // We use multiplication by 2**N instead of shift to cover the case of
  // multiplication by 0, which may occur in some elements of a vector operand.
  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                           Value *OtherArg) {
    Constant *ShadowMul;
    Type *Ty = ConstArg->getType();
    if (Ty->isVectorTy()) {
      unsigned NumElements = Ty->getVectorNumElements();
      Type *EltTy = Ty->getSequentialElementType();
      SmallVector<Constant *, 16> Elements;
      for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
        if (ConstantInt *Elt =
                dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
          const APInt &V = Elt->getValue();
          APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
          Elements.push_back(ConstantInt::get(EltTy, V2));
        } else {
          Elements.push_back(ConstantInt::get(EltTy, 1));
        }
      }
      ShadowMul = ConstantVector::get(Elements);
    } else {
      if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        ShadowMul = ConstantInt::get(Ty, V2);
      } else {
        ShadowMul = ConstantInt::get(Ty, 1);
      }
    }

    IRBuilder<> IRB(&I);
    setShadow(&I,
              IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
    setOrigin(&I, getOrigin(OtherArg));
  }

  void visitMul(BinaryOperator &I) {
    Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
    Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
    if (constOp0 && !constOp1)
      handleMulByConstant(I, constOp0, I.getOperand(1));
    else if (constOp1 && !constOp0)
      handleMulByConstant(I, constOp1, I.getOperand(0));
    else
      handleShadowOr(I);
  }
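
  // Worked example for handleMulByConstant (not from the original source):
  // for %r = mul i32 %x, 24, the constant is 24 = 3 * 2**3, so the three low
  // bits of %r are always zero and therefore defined. ShadowMul becomes 8
  // (i.e. 2**3) and the result shadow is Sx * 8, a shift left by three: the
  // low shadow bits are cleared and the remaining shadow follows %x.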
  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitXor(BinaryOperator &I) { handleShadowOr(I); }

  void handleIntegerDiv(Instruction &I) {
    IRBuilder<> IRB(&I);
    // Strict on the second argument.
    insertShadowCheck(I.getOperand(1), &I);
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

  // Floating point division is side-effect free. We can not require that the
  // divisor is fully initialized and must propagate shadow. See PR37523.
  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }
  /// Instrument == and != comparisons.
  ///
  /// Sometimes the comparison result is known even if some of the bits of the
  /// arguments are not.
  void handleEqualityComparison(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // A == B  <==>  (C = A^B) == 0
    // A != B  <==>  (C = A^B) != 0
    // Sc = Sa | Sb
    Value *C = IRB.CreateXor(A, B);
    Value *Sc = IRB.CreateOr(Sa, Sb);
    // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
    // Result is defined if one of the following is true
    // * there is a defined 1 bit in C
    // * C is fully defined
    // Si = !(C & ~Sc) && Sc
    Value *Zero = Constant::getNullValue(Sc->getType());
    Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
    Value *Si =
      IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
                    IRB.CreateICmpEQ(
                      IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
    Si->setName("_msprop_icmp");
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }
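
  // Example of the rule above (not from the original source): comparing
  // A = 0b10?? (two undefined low bits) with the constant B = 0b0101.
  // C = A ^ B has a defined 1 in bit 3, so "A == B" is false no matter what
  // the undefined bits are, and Si is 0 (the result is reported as
  // initialized). If instead B = 0b1011, every defined bit of C is 0, the
  // answer depends on the undefined bits, and Si is 1.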
  /// Build the lowest possible value of V, taking into account V's
  ///        uninitialized bits.
  Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Maximise the undefined shadow bit, minimize other undefined bits.
      return
        IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit);
    } else {
      // Minimize undefined bits.
      return IRB.CreateAnd(A, IRB.CreateNot(Sa));
    }
  }

  /// Build the highest possible value of V, taking into account V's
  ///        uninitialized bits.
  Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                 bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Minimise the undefined shadow bit, maximise other undefined bits.
      return
        IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits);
    } else {
      // Maximize undefined bits.
      return IRB.CreateOr(A, Sa);
    }
  }
  /// Instrument relational comparisons.
  ///
  /// This function does exact shadow propagation for all relational
  /// comparisons of integers, pointers and vectors of those.
  /// FIXME: output seems suboptimal when one of the operands is a constant
  void handleRelationalComparisonExact(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // Let [a0, a1] be the interval of possible values of A, taking into account
    // its undefined bits. Let [b0, b1] be the interval of possible values of B.
    // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
    bool IsSigned = I.isSigned();
    Value *S1 = IRB.CreateICmp(I.getPredicate(),
                               getLowestPossibleValue(IRB, A, Sa, IsSigned),
                               getHighestPossibleValue(IRB, B, Sb, IsSigned));
    Value *S2 = IRB.CreateICmp(I.getPredicate(),
                               getHighestPossibleValue(IRB, A, Sa, IsSigned),
                               getLowestPossibleValue(IRB, B, Sb, IsSigned));
    Value *Si = IRB.CreateXor(S1, S2);
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }
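
  // Example of the interval rule (not from the original source): for an
  // unsigned A = 0b01?? (Sa = 0b0011) the possible values are [4, 7].
  // Comparing "A > 2" gives the same answer at both ends of the interval, so
  // the result is defined; comparing "A > 5" does not (4 > 5 is false,
  // 7 > 5 is true), so the result shadow is set.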
  /// Instrument signed relational comparisons.
  ///
  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
  /// bit of the shadow. Everything else is delegated to handleShadowOr().
  void handleSignedRelationalComparison(ICmpInst &I) {
    Constant *constOp;
    Value *op = nullptr;
    CmpInst::Predicate pre;
    if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
      op = I.getOperand(0);
      pre = I.getPredicate();
    } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
      op = I.getOperand(1);
      pre = I.getSwappedPredicate();
    } else {
      handleShadowOr(I);
      return;
    }

    if ((constOp->isNullValue() &&
         (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
        (constOp->isAllOnesValue() &&
         (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
      IRBuilder<> IRB(&I);
      Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                        "_msprop_icmp_s");
      setShadow(&I, Shadow);
      setOrigin(&I, getOrigin(op));
    } else {
      handleShadowOr(I);
    }
  }
  void visitICmpInst(ICmpInst &I) {
    if (!ClHandleICmp) {
      handleShadowOr(I);
      return;
    }
    if (I.isEquality()) {
      handleEqualityComparison(I);
      return;
    }

    assert(I.isRelational());
    if (ClHandleICmpExact) {
      handleRelationalComparisonExact(I);
      return;
    }
    if (I.isSigned()) {
      handleSignedRelationalComparison(I);
      return;
    }

    assert(I.isUnsigned());
    if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
      handleRelationalComparisonExact(I);
      return;
    }

    handleShadowOr(I);
  }

  void visitFCmpInst(FCmpInst &I) {
    handleShadowOr(I);
  }
  void handleShift(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                   S2->getType());
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  void visitShl(BinaryOperator &I) { handleShift(I); }
  void visitAShr(BinaryOperator &I) { handleShift(I); }
  void visitLShr(BinaryOperator &I) { handleShift(I); }
  /// Instrument llvm.memmove
  ///
  /// At this point we don't know if llvm.memmove will be inlined or not.
  /// If we don't instrument it and it gets inlined,
  /// our interceptor will not kick in and we will lose the memmove.
  /// If we instrument the call here, but it does not get inlined,
  /// we will memmove the shadow twice: which is bad in case
  /// of overlapping regions. So, we simply lower the intrinsic to a call.
  ///
  /// Similar situation exists for memcpy and memset.
  void visitMemMoveInst(MemMoveInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemmoveFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Similar to memmove: avoid copying shadow twice.
  // This is somewhat unfortunate as it may slow down small constant memcpys.
  // FIXME: consider doing manual inline for small constant sizes and proper
  // alignment.
  void visitMemCpyInst(MemCpyInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemcpyFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Same as memcpy.
  void visitMemSetInst(MemSetInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemsetFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  void visitVAStartInst(VAStartInst &I) {
    VAHelper->visitVAStartInst(I);
  }

  void visitVACopyInst(VACopyInst &I) {
    VAHelper->visitVACopyInst(I);
  }
  /// Handle vector store-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD store: writes memory,
  /// has 1 pointer argument and 1 vector argument, returns void.
  bool handleVectorStoreIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value* Addr = I.getArgOperand(0);
    Value *Shadow = getShadow(&I, 1);
    Value *ShadowPtr, *OriginPtr;

    // We don't know the pointer alignment (could be unaligned SSE store!).
    // Have to assume the worst case.
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), /*Alignment*/ 1, /*isStore*/ true);
    IRB.CreateAlignedStore(Shadow, ShadowPtr, 1);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // FIXME: factor out common code from materializeStores
    if (MS.TrackOrigins) IRB.CreateStore(getOrigin(&I, 1), OriginPtr);
    return true;
  }

  /// Handle vector load-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
  /// has 1 pointer argument, returns a vector.
  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      // We don't know the pointer alignment (could be unaligned SSE load!).
      // Have to assume the worst case.
      unsigned Alignment = 1;
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    if (MS.TrackOrigins) {
      if (PropagateShadow)
        setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
      else
        setOrigin(&I, getCleanOrigin());
    }
    return true;
  }
  /// Handle (SIMD arithmetic)-like intrinsics.
  ///
  /// Instrument intrinsics with any number of arguments of the same type,
  /// equal to the return type. The type should be simple (no aggregates or
  /// pointers; vectors are fine).
  /// Caller guarantees that this intrinsic does not access memory.
  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
    Type *RetTy = I.getType();
    if (!(RetTy->isIntOrIntVectorTy() ||
          RetTy->isFPOrFPVectorTy() ||
          RetTy->isX86_MMXTy()))
      return false;

    unsigned NumArgOperands = I.getNumArgOperands();

    for (unsigned i = 0; i < NumArgOperands; ++i) {
      Type *Ty = I.getArgOperand(i)->getType();
      if (Ty != RetTy)
        return false;
    }

    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (unsigned i = 0; i < NumArgOperands; ++i)
      SC.Add(I.getArgOperand(i));
    SC.Done(&I);

    return true;
  }

  /// Heuristically instrument unknown intrinsics.
  ///
  /// The main purpose of this code is to do something reasonable with all
  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
  /// We recognize several classes of intrinsics by their argument types and
  /// ModRefBehaviour and apply special instrumentation when we are reasonably
  /// sure that we know what the intrinsic does.
  ///
  /// We special-case intrinsics where this approach fails. See llvm.bswap
  /// handling as an example of that.
  bool handleUnknownIntrinsic(IntrinsicInst &I) {
    unsigned NumArgOperands = I.getNumArgOperands();
    if (NumArgOperands == 0)
      return false;

    if (NumArgOperands == 2 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getArgOperand(1)->getType()->isVectorTy() &&
        I.getType()->isVoidTy() &&
        !I.onlyReadsMemory()) {
      // This looks like a vector store.
      return handleVectorStoreIntrinsic(I);
    }

    if (NumArgOperands == 1 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getType()->isVectorTy() &&
        I.onlyReadsMemory()) {
      // This looks like a vector load.
      return handleVectorLoadIntrinsic(I);
    }

    if (I.doesNotAccessMemory())
      if (maybeHandleSimpleNomemIntrinsic(I))
        return true;

    // FIXME: detect and handle SSE maskstore/maskload
    return false;
  }
  void handleInvariantGroup(IntrinsicInst &I) {
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void handleLifetimeStart(IntrinsicInst &I) {
    if (!PoisonStack)
      return;
    DenseMap<Value *, AllocaInst *> AllocaForValue;
    AllocaInst *AI =
        llvm::findAllocaForValue(I.getArgOperand(1), AllocaForValue);
    if (!AI)
      InstrumentLifetimeStart = false;
    LifetimeStartList.push_back(std::make_pair(&I, AI));
  }

  void handleBswap(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Op = I.getArgOperand(0);
    Type *OpType = Op->getType();
    Function *BswapFunc = Intrinsic::getDeclaration(
      F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
    setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
    setOrigin(&I, getOrigin(Op));
  }
  // Instrument vector convert intrinsic.
  //
  // This function instruments intrinsics like cvtsi2ss:
  // %Out = int_xxx_cvtyyy(%ConvertOp)
  // or
  // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
  // number \p Out elements, and (if has 2 arguments) copies the rest of the
  // elements from \p CopyOp.
  // In most cases conversion involves floating-point value which may trigger a
  // hardware exception when not fully initialized. For this reason we require
  // \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
  // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
  // return a fully initialized value.
  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
    IRBuilder<> IRB(&I);
    Value *CopyOp, *ConvertOp;

    switch (I.getNumArgOperands()) {
    case 3:
      assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
      LLVM_FALLTHROUGH;
    case 2:
      CopyOp = I.getArgOperand(0);
      ConvertOp = I.getArgOperand(1);
      break;
    case 1:
      ConvertOp = I.getArgOperand(0);
      CopyOp = nullptr;
      break;
    default:
      llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
    }

    // The first *NumUsedElements* elements of ConvertOp are converted to the
    // same number of output elements. The rest of the output is copied from
    // CopyOp, or (if not available) filled with zeroes.
    // Combine shadow for elements of ConvertOp that are used in this operation,
    // and insert a check.
    // FIXME: consider propagating shadow of ConvertOp, at least in the case of
    // int->any conversion.
    Value *ConvertShadow = getShadow(ConvertOp);
    Value *AggShadow = nullptr;
    if (ConvertOp->getType()->isVectorTy()) {
      AggShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
      for (int i = 1; i < NumUsedElements; ++i) {
        Value *MoreShadow = IRB.CreateExtractElement(
            ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
        AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
      }
    } else {
      AggShadow = ConvertShadow;
    }
    assert(AggShadow->getType()->isIntegerTy());
    insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

    // Build result shadow by zero-filling parts of CopyOp shadow that come from
    // ConvertOp.
    if (CopyOp) {
      assert(CopyOp->getType() == I.getType());
      assert(CopyOp->getType()->isVectorTy());
      Value *ResultShadow = getShadow(CopyOp);
      Type *EltTy = ResultShadow->getType()->getVectorElementType();
      for (int i = 0; i < NumUsedElements; ++i) {
        ResultShadow = IRB.CreateInsertElement(
            ResultShadow, ConstantInt::getNullValue(EltTy),
            ConstantInt::get(IRB.getInt32Ty(), i));
      }
      setShadow(&I, ResultShadow);
      setOrigin(&I, getOrigin(CopyOp));
    } else {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
    }
  }
  // Given a scalar or vector, extract lower 64 bits (or less), and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    if (S->getType()->isVectorTy())
      S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
    assert(S->getType()->getPrimitiveSizeInBits() <= 64);
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  // Given a vector, extract its first element, and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
    Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
    Type *T = S->getType();
    assert(T->isVectorTy());
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return IRB.CreateSExt(S2, T);
  }
  // Instrument vector shift intrinsic.
  //
  // This function instruments intrinsics like int_x86_avx2_psll_w.
  // Intrinsic shifts %In by %ShiftSize bits.
  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
  // size, and the rest is ignored. Behavior is defined even if shift size is
  // greater than register (or field) width.
  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
    assert(I.getNumArgOperands() == 2);
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
                             : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledValue(),
                                  {IRB.CreateBitCast(S1, V1->getType()), V2});
    Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }
  // Get an X86_MMX-sized vector type.
  Type *getMMXVectorTy(unsigned EltSizeInBits) {
    const unsigned X86_MMXSizeInBits = 64;
    assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 &&
           "Illegal MMX vector element size");
    return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                           X86_MMXSizeInBits / EltSizeInBits);
  }

  // Returns a signed counterpart for an (un)signed-saturate-and-pack
  // intrinsic.
  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
    switch (id) {
      case Intrinsic::x86_sse2_packsswb_128:
      case Intrinsic::x86_sse2_packuswb_128:
        return Intrinsic::x86_sse2_packsswb_128;

      case Intrinsic::x86_sse2_packssdw_128:
      case Intrinsic::x86_sse41_packusdw:
        return Intrinsic::x86_sse2_packssdw_128;

      case Intrinsic::x86_avx2_packsswb:
      case Intrinsic::x86_avx2_packuswb:
        return Intrinsic::x86_avx2_packsswb;

      case Intrinsic::x86_avx2_packssdw:
      case Intrinsic::x86_avx2_packusdw:
        return Intrinsic::x86_avx2_packssdw;

      case Intrinsic::x86_mmx_packsswb:
      case Intrinsic::x86_mmx_packuswb:
        return Intrinsic::x86_mmx_packsswb;

      case Intrinsic::x86_mmx_packssdw:
        return Intrinsic::x86_mmx_packssdw;
      default:
        llvm_unreachable("unexpected intrinsic id");
    }
  }
  // Instrument vector pack intrinsic.
  //
  // This function instruments intrinsics like x86_mmx_packsswb, that
  // packs elements of 2 input vectors into half as many bits with saturation.
  // Shadow is propagated with the signed variant of the same intrinsic applied
  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
  // EltSizeInBits is used only for x86mmx arguments.
  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
    assert(I.getNumArgOperands() == 2);
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    IRBuilder<> IRB(&I);
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    assert(isX86_MMX || S1->getType()->isVectorTy());

    // SExt and ICmpNE below must apply to individual elements of input vectors.
    // In case of x86mmx arguments, cast them to appropriate vector types and
    // back.
    Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
    if (isX86_MMX) {
      S1 = IRB.CreateBitCast(S1, T);
      S2 = IRB.CreateBitCast(S2, T);
    }
    Value *S1_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
    Value *S2_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
    if (isX86_MMX) {
      Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
      S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
      S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
    }

    Function *ShadowFn = Intrinsic::getDeclaration(
        F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

    Value *S =
        IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
    if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  // Instrument sum-of-absolute-differences intrinsic.
  void handleVectorSadIntrinsic(IntrinsicInst &I) {
    const unsigned SignificantBitsPerResultElement = 16;
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
    unsigned ZeroBitsPerResultElement =
        ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument multiply-add intrinsic.
  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                  unsigned EltSizeInBits = 0) {
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  // Instrument compare-packed intrinsic.
  // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
  // all-ones shadow.
  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ResTy = getShadowTy(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = IRB.CreateSExt(
        IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-scalar intrinsic.
  // This handles both cmp* intrinsics which return the result in the first
  // element of a vector, and comi* which return the result as i32.
  void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  void handleStmxcsr(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value* Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    Value *ShadowPtr =
        getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true)
            .first;

    IRB.CreateStore(getCleanShadow(Ty),
                    IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);
  }

  void handleLdmxcsr(IntrinsicInst &I) {
    if (!InsertChecks) return;

    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    unsigned Alignment = 1;
    Value *ShadowPtr, *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
    Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
                                    : getCleanOrigin();
    insertShadowCheck(Shadow, Origin, &I);
  }
  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Align, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // Uninitialized mask is kind of like uninitialized address, but not as
      // scary.
      insertShadowCheck(Mask, &I);
    }

    IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask);

    if (MS.TrackOrigins) {
      auto &DL = F.getParent()->getDataLayout();
      paintOrigin(IRB, getOrigin(V), OriginPtr,
                  DL.getTypeStoreSize(Shadow->getType()),
                  std::max(Align, kMinOriginAlignment));
    }
  }
  bool handleMaskedLoad(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue();
    Value *Mask = I.getArgOperand(2);
    Value *PassThru = I.getArgOperand(3);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false);
      setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask,
                                         getShadow(PassThru), "_msmaskedld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      insertShadowCheck(Mask, &I);
    }

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        // Choose between PassThru's and the loaded value's origins.
        Value *MaskedPassThruShadow = IRB.CreateAnd(
            getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy));

        Value *Acc = IRB.CreateExtractElement(
            MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
        for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N;
             ++i) {
          Value *More = IRB.CreateExtractElement(
              MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i));
          Acc = IRB.CreateOr(Acc, More);
        }

        Value *Origin = IRB.CreateSelect(
            IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())),
            getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr));

        setOrigin(&I, Origin);
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
    return true;
  }
  // Instrument BMI / BMI2 intrinsics.
  // All of these intrinsics are Z = I(X, Y)
  // where the types of all operands and the result match, and are either i32 or
  // i64. The following instrumentation happens to work for all of them:
  //   Sz = I(Sx, Y) | (sext (Sy != 0))
  void handleBmiIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ShadowTy = getShadowTy(&I);

    // If any bit of the mask operand is poisoned, then the whole thing is.
    Value *SMask = getShadow(&I, 1);
    SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)),
                           ShadowTy);
    // Apply the same intrinsic to the shadow of the first operand.
    Value *S = IRB.CreateCall(I.getCalledFunction(),
                              {getShadow(&I, 0), I.getOperand(1)});
    S = IRB.CreateOr(SMask, S);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
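
  // Example of the BMI rule (not from the original source): for
  // %r = call i32 @llvm.x86.bmi.pext.32(i32 %x, i32 %m), the shadow becomes
  // pext(Sx, %m) | sext(Sm != 0): when the mask %m is fully defined, the
  // result shadow is just %x's shadow gathered through the same mask, and any
  // poison in %m poisons the entire result.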
  void visitIntrinsicInst(IntrinsicInst &I) {
    switch (I.getIntrinsicID()) {
    case Intrinsic::lifetime_start:
      handleLifetimeStart(I);
      break;
    case Intrinsic::launder_invariant_group:
    case Intrinsic::strip_invariant_group:
      handleInvariantGroup(I);
      break;
    case Intrinsic::bswap:
      handleBswap(I);
      break;
    case Intrinsic::masked_store:
      handleMaskedStore(I);
      break;
    case Intrinsic::masked_load:
      handleMaskedLoad(I);
      break;
    case Intrinsic::x86_sse_stmxcsr:
      handleStmxcsr(I);
      break;
    case Intrinsic::x86_sse_ldmxcsr:
      handleLdmxcsr(I);
      break;
    case Intrinsic::x86_avx512_vcvtsd2usi64:
    case Intrinsic::x86_avx512_vcvtsd2usi32:
    case Intrinsic::x86_avx512_vcvtss2usi64:
    case Intrinsic::x86_avx512_vcvtss2usi32:
    case Intrinsic::x86_avx512_cvttss2usi64:
    case Intrinsic::x86_avx512_cvttss2usi:
    case Intrinsic::x86_avx512_cvttsd2usi64:
    case Intrinsic::x86_avx512_cvttsd2usi:
    case Intrinsic::x86_avx512_cvtusi2ss:
    case Intrinsic::x86_avx512_cvtusi642sd:
    case Intrinsic::x86_avx512_cvtusi642ss:
    case Intrinsic::x86_sse2_cvtsd2si64:
    case Intrinsic::x86_sse2_cvtsd2si:
    case Intrinsic::x86_sse2_cvtsd2ss:
    case Intrinsic::x86_sse2_cvttsd2si64:
    case Intrinsic::x86_sse2_cvttsd2si:
    case Intrinsic::x86_sse_cvtss2si64:
    case Intrinsic::x86_sse_cvtss2si:
    case Intrinsic::x86_sse_cvttss2si64:
    case Intrinsic::x86_sse_cvttss2si:
      handleVectorConvertIntrinsic(I, 1);
      break;
    case Intrinsic::x86_sse_cvtps2pi:
    case Intrinsic::x86_sse_cvttps2pi:
      handleVectorConvertIntrinsic(I, 2);
      break;

    case Intrinsic::x86_avx512_psll_w_512:
    case Intrinsic::x86_avx512_psll_d_512:
    case Intrinsic::x86_avx512_psll_q_512:
    case Intrinsic::x86_avx512_pslli_w_512:
    case Intrinsic::x86_avx512_pslli_d_512:
    case Intrinsic::x86_avx512_pslli_q_512:
    case Intrinsic::x86_avx512_psrl_w_512:
    case Intrinsic::x86_avx512_psrl_d_512:
    case Intrinsic::x86_avx512_psrl_q_512:
    case Intrinsic::x86_avx512_psra_w_512:
    case Intrinsic::x86_avx512_psra_d_512:
    case Intrinsic::x86_avx512_psra_q_512:
    case Intrinsic::x86_avx512_psrli_w_512:
    case Intrinsic::x86_avx512_psrli_d_512:
    case Intrinsic::x86_avx512_psrli_q_512:
    case Intrinsic::x86_avx512_psrai_w_512:
    case Intrinsic::x86_avx512_psrai_d_512:
    case Intrinsic::x86_avx512_psrai_q_512:
    case Intrinsic::x86_avx512_psra_q_256:
    case Intrinsic::x86_avx512_psra_q_128:
    case Intrinsic::x86_avx512_psrai_q_256:
    case Intrinsic::x86_avx512_psrai_q_128:
    case Intrinsic::x86_avx2_psll_w:
    case Intrinsic::x86_avx2_psll_d:
    case Intrinsic::x86_avx2_psll_q:
    case Intrinsic::x86_avx2_pslli_w:
    case Intrinsic::x86_avx2_pslli_d:
    case Intrinsic::x86_avx2_pslli_q:
    case Intrinsic::x86_avx2_psrl_w:
    case Intrinsic::x86_avx2_psrl_d:
    case Intrinsic::x86_avx2_psrl_q:
    case Intrinsic::x86_avx2_psra_w:
    case Intrinsic::x86_avx2_psra_d:
    case Intrinsic::x86_avx2_psrli_w:
    case Intrinsic::x86_avx2_psrli_d:
    case Intrinsic::x86_avx2_psrli_q:
    case Intrinsic::x86_avx2_psrai_w:
    case Intrinsic::x86_avx2_psrai_d:
    case Intrinsic::x86_sse2_psll_w:
    case Intrinsic::x86_sse2_psll_d:
    case Intrinsic::x86_sse2_psll_q:
    case Intrinsic::x86_sse2_pslli_w:
    case Intrinsic::x86_sse2_pslli_d:
    case Intrinsic::x86_sse2_pslli_q:
    case Intrinsic::x86_sse2_psrl_w:
    case Intrinsic::x86_sse2_psrl_d:
    case Intrinsic::x86_sse2_psrl_q:
    case Intrinsic::x86_sse2_psra_w:
    case Intrinsic::x86_sse2_psra_d:
    case Intrinsic::x86_sse2_psrli_w:
    case Intrinsic::x86_sse2_psrli_d:
    case Intrinsic::x86_sse2_psrli_q:
    case Intrinsic::x86_sse2_psrai_w:
    case Intrinsic::x86_sse2_psrai_d:
    case Intrinsic::x86_mmx_psll_w:
    case Intrinsic::x86_mmx_psll_d:
    case Intrinsic::x86_mmx_psll_q:
    case Intrinsic::x86_mmx_pslli_w:
    case Intrinsic::x86_mmx_pslli_d:
    case Intrinsic::x86_mmx_pslli_q:
    case Intrinsic::x86_mmx_psrl_w:
    case Intrinsic::x86_mmx_psrl_d:
    case Intrinsic::x86_mmx_psrl_q:
    case Intrinsic::x86_mmx_psra_w:
    case Intrinsic::x86_mmx_psra_d:
    case Intrinsic::x86_mmx_psrli_w:
    case Intrinsic::x86_mmx_psrli_d:
    case Intrinsic::x86_mmx_psrli_q:
    case Intrinsic::x86_mmx_psrai_w:
    case Intrinsic::x86_mmx_psrai_d:
      handleVectorShiftIntrinsic(I, /* Variable */ false);
      break;
    case Intrinsic::x86_avx2_psllv_d:
    case Intrinsic::x86_avx2_psllv_d_256:
    case Intrinsic::x86_avx512_psllv_d_512:
    case Intrinsic::x86_avx2_psllv_q:
    case Intrinsic::x86_avx2_psllv_q_256:
    case Intrinsic::x86_avx512_psllv_q_512:
    case Intrinsic::x86_avx2_psrlv_d:
    case Intrinsic::x86_avx2_psrlv_d_256:
    case Intrinsic::x86_avx512_psrlv_d_512:
    case Intrinsic::x86_avx2_psrlv_q:
    case Intrinsic::x86_avx2_psrlv_q_256:
    case Intrinsic::x86_avx512_psrlv_q_512:
    case Intrinsic::x86_avx2_psrav_d:
    case Intrinsic::x86_avx2_psrav_d_256:
    case Intrinsic::x86_avx512_psrav_d_512:
    case Intrinsic::x86_avx512_psrav_q_128:
    case Intrinsic::x86_avx512_psrav_q_256:
    case Intrinsic::x86_avx512_psrav_q_512:
      handleVectorShiftIntrinsic(I, /* Variable */ true);
      break;

    case Intrinsic::x86_sse2_packsswb_128:
    case Intrinsic::x86_sse2_packssdw_128:
    case Intrinsic::x86_sse2_packuswb_128:
    case Intrinsic::x86_sse41_packusdw:
    case Intrinsic::x86_avx2_packsswb:
    case Intrinsic::x86_avx2_packssdw:
    case Intrinsic::x86_avx2_packuswb:
    case Intrinsic::x86_avx2_packusdw:
      handleVectorPackIntrinsic(I);
      break;

    case Intrinsic::x86_mmx_packsswb:
    case Intrinsic::x86_mmx_packuswb:
      handleVectorPackIntrinsic(I, 16);
      break;

    case Intrinsic::x86_mmx_packssdw:
      handleVectorPackIntrinsic(I, 32);
      break;

    case Intrinsic::x86_mmx_psad_bw:
    case Intrinsic::x86_sse2_psad_bw:
    case Intrinsic::x86_avx2_psad_bw:
      handleVectorSadIntrinsic(I);
      break;

    case Intrinsic::x86_sse2_pmadd_wd:
    case Intrinsic::x86_avx2_pmadd_wd:
    case Intrinsic::x86_ssse3_pmadd_ub_sw_128:
    case Intrinsic::x86_avx2_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I);
      break;

    case Intrinsic::x86_ssse3_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I, 8);
      break;

    case Intrinsic::x86_mmx_pmadd_wd:
      handleVectorPmaddIntrinsic(I, 16);
      break;

    case Intrinsic::x86_sse_cmp_ss:
    case Intrinsic::x86_sse2_cmp_sd:
    case Intrinsic::x86_sse_comieq_ss:
    case Intrinsic::x86_sse_comilt_ss:
    case Intrinsic::x86_sse_comile_ss:
    case Intrinsic::x86_sse_comigt_ss:
    case Intrinsic::x86_sse_comige_ss:
    case Intrinsic::x86_sse_comineq_ss:
    case Intrinsic::x86_sse_ucomieq_ss:
    case Intrinsic::x86_sse_ucomilt_ss:
    case Intrinsic::x86_sse_ucomile_ss:
    case Intrinsic::x86_sse_ucomigt_ss:
    case Intrinsic::x86_sse_ucomige_ss:
    case Intrinsic::x86_sse_ucomineq_ss:
    case Intrinsic::x86_sse2_comieq_sd:
    case Intrinsic::x86_sse2_comilt_sd:
    case Intrinsic::x86_sse2_comile_sd:
    case Intrinsic::x86_sse2_comigt_sd:
    case Intrinsic::x86_sse2_comige_sd:
    case Intrinsic::x86_sse2_comineq_sd:
    case Intrinsic::x86_sse2_ucomieq_sd:
    case Intrinsic::x86_sse2_ucomilt_sd:
    case Intrinsic::x86_sse2_ucomile_sd:
    case Intrinsic::x86_sse2_ucomigt_sd:
    case Intrinsic::x86_sse2_ucomige_sd:
    case Intrinsic::x86_sse2_ucomineq_sd:
      handleVectorCompareScalarIntrinsic(I);
      break;

    case Intrinsic::x86_sse_cmp_ps:
    case Intrinsic::x86_sse2_cmp_pd:
      // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
      // generates reasonably looking IR that fails in the backend with "Do not
      // know how to split the result of this operator!".
      handleVectorComparePackedIntrinsic(I);
      break;

    case Intrinsic::x86_bmi_bextr_32:
    case Intrinsic::x86_bmi_bextr_64:
    case Intrinsic::x86_bmi_bzhi_32:
    case Intrinsic::x86_bmi_bzhi_64:
    case Intrinsic::x86_bmi_pdep_32:
    case Intrinsic::x86_bmi_pdep_64:
    case Intrinsic::x86_bmi_pext_32:
    case Intrinsic::x86_bmi_pext_64:
      handleBmiIntrinsic(I);
      break;

    case Intrinsic::is_constant:
      // The result of llvm.is.constant() is always defined.
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      break;

    default:
      if (!handleUnknownIntrinsic(I))
        visitInstruction(I);
      break;
    }
  }
  void visitCallSite(CallSite CS) {
    Instruction &I = *CS.getInstruction();
    assert(!I.getMetadata("nosanitize"));
    assert((CS.isCall() || CS.isInvoke() || CS.isCallBr()) &&
           "Unknown type of CallSite");
    if (CS.isCallBr() || (CS.isCall() && cast<CallInst>(&I)->isInlineAsm())) {
      // For inline asm (either a call to asm function, or callbr instruction),
      // do the usual thing: check argument shadow and mark all outputs as
      // clean. Note that any side effects of the inline asm that are not
      // immediately visible in its constraints are not handled.
      if (ClHandleAsmConservative && MS.CompileKernel)
        visitAsmInstruction(I);
      else
        visitInstruction(I);
      return;
    }
    if (CS.isCall()) {
      CallInst *Call = cast<CallInst>(&I);
      assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us. To
      // prevent this code from being optimized out, mark that function
      // non-readonly in advance.
      if (Function *Func = Call->getCalledFunction()) {
        // Clear out readonly/readnone attributes.
        AttrBuilder B;
        B.addAttribute(Attribute::ReadOnly)
          .addAttribute(Attribute::ReadNone);
        Func->removeAttributes(AttributeList::FunctionIndex, B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&I);

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << "  CallSite: " << I << "\n");
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CS.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the Shadow for arg even if it is ByVal, because
      // in that case getShadow() will copy the actual arg shadow to
      // __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << "  Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();
      if (CS.paramHasAttr(i, Attribute::ByVal)) {
        assert(A->getType()->isPointerTy() &&
               "ByVal argument is not a pointer!");
        Size = DL.getTypeAllocSize(A->getType()->getPointerElementType());
        if (ArgOffset + Size > kParamTLSSize) break;
        unsigned ParamAlignment = CS.getParamAlignment(i);
        unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment);
        Value *AShadowPtr =
            getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ false)
                .first;

        Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                 Alignment, Size);
        // TODO(glider): need to copy origins.
      } else {
        Size = DL.getTypeAllocSize(A->getType());
        if (ArgOffset + Size > kParamTLSSize) break;
        Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
                                       kShadowTLSAlignment);
        Constant *Cst = dyn_cast<Constant>(ArgShadow);
        if (Cst && Cst->isNullValue()) ArgIsInitialized = true;
      }
      if (MS.TrackOrigins && !ArgIsInitialized)
        IRB.CreateStore(getOrigin(A),
                        getOriginPtrForArgument(A, IRB, ArgOffset));
      (void)Store;
      assert(Size != 0 && Store != nullptr);
      LLVM_DEBUG(dbgs() << "  Param:" << *Store << "\n");
      ArgOffset += alignTo(Size, 8);
    }
    LLVM_DEBUG(dbgs() << "  done with call args\n");

    FunctionType *FT = CS.getFunctionType();
    if (FT->isVarArg()) {
      VAHelper->visitCallSite(CS, IRB);
    }

    // Now, get the shadow for the RetVal.
    if (!I.getType()->isSized()) return;
    // Don't emit the epilogue for musttail call returns.
    if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
    IRBuilder<> IRBBefore(&I);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&I, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (CS.isCall()) {
      NextInsn = ++I.getIterator();
      assert(NextInsn != I.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&I, getCleanShadow(&I));
        setOrigin(&I, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow = IRBAfter.CreateAlignedLoad(
        getShadowTy(&I), getShadowPtrForRetval(&I, IRBAfter),
        kShadowTLSAlignment, "_msret");
    setShadow(&I, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&I, IRBAfter.CreateLoad(MS.OriginTy,
                                        getOriginPtrForRetval(IRBAfter)));
  }
  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }

  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    if (CheckReturnValue) {
      insertShadowCheck(RetVal, &I);
      Value *Shadow = getCleanShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
    } else {
      Value *Shadow = getShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }

  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }
  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by __msan_va_arg_overflow_size_tls at the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }
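  // For example (illustrative only, not taken from this file): an alloca named
  // 'buf' in a function 'f' produces the description string "----buf@f". The
  // leading "----" bytes act as a placeholder that the MSan runtime may
  // overwrite with a cached stack-origin id the first time this allocation is
  // involved in a report.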
  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) =
          getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(), 1, /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment());
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }

  void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }

  void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) {
    if (!InsPoint)
      InsPoint = &I;
    IRBuilder<> IRB(InsPoint->getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      poisonAllocaKmsan(I, IRB, Len);
    else
      poisonAllocaUserspace(I, IRB, Len);
  }

  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    // We'll get to this alloca later unless it's poisoned at the corresponding
    // llvm.lifetime.start.
    AllocaSet.insert(&I);
  }
  void visitSelectInst(SelectInst &I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do
      // an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (condition is poisoned), look for bits in c and d that are equal
      // and both unpoisoned.
      // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.

      // Cast arguments to shadow-compatible type.
      C = CreateAppToShadowCast(IRB, C);
      D = CreateAppToShadowCast(IRB, D);

      // Result shadow if condition shadow is 1.
      Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd});
    }
    Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
    setShadow(&I, Sa);
    if (MS.TrackOrigins) {
      // Origins are always i32, so any vector conditions must be flattened.
      // FIXME: consider tracking vector origins for app vectors?
      if (B->getType()->isVectorTy()) {
        Type *FlatTy = getShadowTyNoVec(B->getType());
        B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
                             ConstantInt::getNullValue(FlatTy));
        Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
                              ConstantInt::getNullValue(FlatTy));
      }
      // a = select b, c, d
      // Oa = Sb ? Ob : (b ? Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }
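  // Worked example for the select shadow propagation above (illustrative only,
  // scalar i8 operands assumed): if c = 0xAB and d = 0xAF are both fully
  // initialized (Sc = Sd = 0) but the condition b is poisoned (Sb != 0), then
  // Sa1 = (c ^ d) | Sc | Sd = 0x04, i.e. only the single bit where c and d
  // disagree is reported as uninitialized, because the remaining bits of the
  // result do not depend on b.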
  void visitLandingPadInst(LandingPadInst &I) {
    // Do nothing.
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }

  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << "   AggShadow:  " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   ResShadow:  " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }

  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << "   AggShadow:  " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << "   InsShadow:  " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   Res:        " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }

  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }

  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }
  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check its value for being initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is unsized).
    // Each such pointer is instrumented with a call to the runtime library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      assert(!isOutput);
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }
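  // For example (illustrative only): for an output operand 'int *p' constrained
  // as "=m", instrumentAsmArgument() above emits the equivalent of
  //   __msan_instrument_asm_store(p, sizeof(int));
  // so that the memory the asm statement publishes is marked as initialized.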
  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = cast<Value>(CB)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      auto *ST = dyn_cast<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }
  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of the
    // CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single structure
    //    (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
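    // For example (illustrative only), for
    //   asm("..." : "=r"(r1), "=m"(m1) : "r"(i1), "m"(i2));
    // nR = 1 (r1 comes back as the call's SSA value), nO = 1 (the address of m1
    // is operand 0), and the nI = 2 inputs are operands 1 and 2; the final
    // operand of the CallInst is the InlineAsm callee itself.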
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallBase *CB = cast<CallBase>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CB->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CB);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CB->getNumOperands() - 1;

    // Check input arguments. Doing so before unpoisoning output arguments, so
    // that we won't overwrite uninit values before checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};
/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallSite for more details.
  static const unsigned AMD64GpEndOffset = 48;  // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;

  unsigned AMD64FpEndOffset;
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
                    MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {
    AMD64FpEndOffset = AMD64FpEndOffsetSSE;
    for (const auto &Attr : F.getAttributes().getFnAttributes()) {
      if (Attr.isStringAttribute() &&
          (Attr.getKindAsString() == "target-features")) {
        if (Attr.getValueAsString().contains("-sse"))
          AMD64FpEndOffset = AMD64FpEndOffsetNoSSE;
        break;
      }
    }
  }

  ArgKind classifyArgument(Value *arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }
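  // For example (illustrative only): an 'int' or a pointer argument is
  // AK_GeneralPurpose and consumes 8 bytes of the GP region in visitCallSite()
  // below; a 'double' or '<2 x float>' is AK_FloatingPoint and consumes 16
  // bytes of the FP region; anything else (e.g. a struct passed by value)
  // falls back to AK_Memory and is written to the overflow area.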
  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
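  // For reference (illustrative, based on the SysV x86_64 ABI), __va_list_tag
  // is laid out as
  //   struct { unsigned gp_offset;       // byte offset 0
  //            unsigned fp_offset;       // byte offset 4
  //            void *overflow_arg_area;  // byte offset 8
  //            void *reg_save_area; };   // byte offset 16
  // The constants 8 and 16 used in finalizeInstrumentation() below read the
  // overflow_arg_area and reg_save_area fields of this struct.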
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GpOffset = 0;
    unsigned FpOffset = AMD64GpEndOffset;
    unsigned OverflowOffset = AMD64FpEndOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        // ByVal arguments always go to the overflow area.
        // Fixed arguments passed through the overflow area will be stepped
        // over by va_start, so don't count them towards the offset.
        if (IsFixed)
          continue;
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        Value *ShadowBase = getShadowPtrForVAArgument(
            RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8));
        Value *OriginBase = nullptr;
        if (MS.TrackOrigins)
          OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset);
        OverflowOffset += alignTo(ArgSize, 8);
        if (!ShadowBase)
          continue;
        Value *ShadowPtr, *OriginPtr;
        std::tie(ShadowPtr, OriginPtr) =
            MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment,
                                   /*isStore*/ false);

        IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
                         kShadowTLSAlignment, ArgSize);
        if (MS.TrackOrigins)
          IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr,
                           kShadowTLSAlignment, ArgSize);
      } else {
        ArgKind AK = classifyArgument(A);
        if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
          AK = AK_Memory;
        if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
          AK = AK_Memory;
        Value *ShadowBase, *OriginBase = nullptr;
        switch (AK) {
        case AK_GeneralPurpose:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, GpOffset);
          GpOffset += 8;
          break;
        case AK_FloatingPoint:
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, FpOffset);
          FpOffset += 16;
          break;
        case AK_Memory:
          if (IsFixed)
            continue;
          uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
          ShadowBase =
              getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8);
          if (MS.TrackOrigins)
            OriginBase =
                getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset);
          OverflowOffset += alignTo(ArgSize, 8);
        }
        // Take fixed arguments into account for GpOffset and FpOffset,
        // but don't actually store shadows for them.
        // TODO(glider): don't call get*PtrForVAArgument() for them.
        if (IsFixed)
          continue;
        if (!ShadowBase)
          continue;
        Value *Shadow = MSV.getShadow(A);
        IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment);
        if (MS.TrackOrigins) {
          Value *Origin = MSV.getOrigin(A);
          unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
          MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                          std::max(kShadowTLSAlignment, kMinOriginAlignment));
        }
      }
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
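  // For example (illustrative only): for printf("%s %f", s, d), the named
  // format string advances the GP cursor without being stored, the shadow of
  // 's' is then stored at GP offset 8, the shadow of 'd' at FP offset 48
  // (AMD64GpEndOffset), and arguments that do not fit in the register regions
  // are stored starting at byte 176 (AMD64FpEndOffsetSSE) of __msan_va_arg_tls.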
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg_va_s");
  }

  /// Compute the origin address for a given va_arg.
  Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    // getOriginPtrForVAArgument() is always called after
    // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never
    // overflow.
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }

  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);

    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 24, Alignment, false);
    // We shouldn't need to zero out the origins, as they're only checked for
    // nonzero shadow.
  }

  void visitVAStartInst(VAStartInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64)
      return;
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64) return;
    unpoisonVAListTagForInst(I);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, 8, MS.VAArgOriginTLS, 8, CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);

      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 16)),
          PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 16;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       AMD64FpEndOffset);
      if (MS.TrackOrigins)
        IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                         Alignment, AMD64FpEndOffset);
      Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 8)),
          PointerType::get(OverflowArgAreaPtrTy, 0));
      Value *OverflowArgAreaPtr =
          IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
      Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
      std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
          MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                             AMD64FpEndOffset);
      IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
      if (MS.TrackOrigins) {
        SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                        AMD64FpEndOffset);
        IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                         VAArgOverflowSize);
      }
    }
  }
};
/// MIPS64-specific implementation of VarArgHelper.
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments of size < 8 to match the placement
        // of bits in a big-endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // Here using VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creation of
    // a new class member i.e. it is the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};
/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value *arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }

  // The instrumentation stores the argument shadow in a non ABI-specific
  // format because it does not know which argument is named (since Clang,
  // as in the x86_64 case, lowers the va_args in the frontend and this pass
  // only sees the low level code that deals with va_list internals).
  // The first seven GR registers are saved in the first 56 bytes of the
  // va_arg TLS array, followed by the first 8 FP/SIMD registers, and then
  // the remaining arguments.
  // Using a constant offset within the va_arg TLS array allows fast copy
  // in the finalize instrumentation.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
      case AK_GeneralPurpose:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
        GrOffset += 8;
        break;
      case AK_FloatingPoint:
        Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
        VrOffset += 16;
        break;
      case AK_Memory:
        // Don't count fixed arguments in the overflow area - va_start will
        // skip right over them.
        if (IsFixed)
          continue;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                         alignTo(ArgSize, 8));
        OverflowOffset += alignTo(ArgSize, 8);
        break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  // Retrieve a va_list field of 'void*' size.
  Value *getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value *getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }
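  // For reference (illustrative, based on the AAPCS64 va_list layout):
  //   struct va_list { void *__stack;     // byte offset 0
  //                    void *__gr_top;    // byte offset 8
  //                    void *__vr_top;    // byte offset 16
  //                    int   __gr_offs;   // byte offset 24
  //                    int   __vr_offs; } // byte offset 28
  // These are the offsets passed to getVAField64()/getVAField32() in
  // finalizeInstrumentation() below.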
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for 64-bit general registers xn-x7 and another
      // for 128-bit FP/SIMD registers vn-v7).
      // We need then to propagate the shadow arguments on both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments are saved on shadow for 'va::stack'.
      // One caveat is that only the non-named arguments need to be propagated;
      // however, at the call site instrumentation 'all' the arguments are
      // saved. So to copy the shadow values from the va_arg TLS array
      // we need to adjust the offset for both GR and VR fields based on
      // the __{gr,vr}_offs value (since they are stores based on incoming
      // named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_off and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_off and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // It does not know how many named arguments are being used and, at the
      // call site, all the arguments were saved. Since __gr_off is defined as
      // '0 - ((8 - named_gr) * 8)', the idea is to just propagate the variadic
      // arguments by ignoring the bytes of shadow from named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);

      // And finally for remaining arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 16, /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
                       VAArgOverflowSize);
    }
  }
};
/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays
    // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
    // and QPX vectors are aligned to 32 bytes. For that reason, we
    // compute the current offset from the stack pointer (which is always
    // properly aligned), and the offset for the first vararg, then subtract
    // them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // Parameter save area starts at 48 bytes from frame pointer for ABIv1,
    // and 32 bytes for ABIv2. This is usually determined by target
    // endianness, but in theory could be overridden by function attribute.
    // For simplicity, we ignore it here (it'd only matter for QPX vectors).
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments of size < 8 to match the placement
          // of bits in a big-endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // Here using VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creation of
    // a new class member i.e. it is the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};
/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};

} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is only implemented on AMD64. False positives are possible
  // on other platforms.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}
bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && (&F == MsanCtorFunction))
    return false;
  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
    .addAttribute(Attribute::ReadNone);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}