//===- MemorySanitizer.cpp - detector of uninitialized reads --------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
/// This file is a part of MemorySanitizer, a detector of uninitialized
/// reads.
///
/// The algorithm of the tool is similar to Memcheck
/// (http://goo.gl/QKbem). We associate a few shadow bits with every
/// byte of the application memory, poison the shadow of the malloc-ed
/// or alloca-ed memory, load the shadow bits on every memory read,
/// propagate the shadow bits through some of the arithmetic
/// instructions (including MOV), store the shadow bits on every memory
/// write, and report a bug on some other instructions (e.g. JMP) if the
/// associated shadow is poisoned.
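///
/// For illustration only, a rough sketch of the instrumented form of
///   c = a + b; if (c) ...
/// is (a simplified approximation; the actual propagation rules differ per
/// instruction):
///   sc = sa | sb;            // propagate shadow through the arithmetic
///   if (sc != 0) report();   // check the shadow before branching on c
///   if (c) ...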
/// But there are differences too. The first and major one is that we use
/// compiler instrumentation instead of binary instrumentation. This
/// gives us much better register allocation, possible compiler
/// optimizations and a fast start-up. But it also brings a major issue:
/// msan needs to see all program events, including system
/// calls and reads/writes in system libraries, so we either need to
/// compile *everything* with msan or use a binary translation
/// component (e.g. DynamoRIO) to instrument pre-built libraries.
/// Another difference from Memcheck is that we use 8 shadow bits per
/// byte of application memory and use a direct shadow mapping. This
/// greatly simplifies the instrumentation code and avoids races on
/// shadow updates (Memcheck is single-threaded, so races are not a
/// concern there; Memcheck uses 2 shadow bits per byte with a slow
/// path storage that uses 8 bits per byte).
/// The default value of shadow is 0, which means "clean" (not poisoned).
///
/// Every module initializer should call __msan_init to ensure that the
/// shadow memory is ready. On error, __msan_warning is called. Since
/// parameters and return values may be passed via registers, we have a
/// specialized thread-local shadow for return values
/// (__msan_retval_tls) and parameters (__msan_param_tls).
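///
/// For example (a simplified sketch of the instrumented call sequence, not
/// the exact IR this pass emits):
///   store %arg_shadow, @__msan_param_tls      ; caller passes argument shadow
///   %r        = call @f(%arg)
///   %r_shadow = load @__msan_retval_tls       ; caller reads return shadow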
/// MemorySanitizer can track origins (allocation points) of all uninitialized
/// values. This behavior is controlled with a flag (msan-track-origins) and is
/// disabled by default.
///
/// Origins are 4-byte values created and interpreted by the runtime library.
/// They are stored in a second shadow mapping, one 4-byte value for 4 bytes
/// of application memory. Propagation of origins is basically a bunch of
/// "select" instructions that pick the origin of a dirty argument, if an
/// instruction has one.
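///
/// For example (sketch), for a two-operand instruction the propagated origin
/// is roughly:
///   %origin = select (%shadow_b != 0), %origin_b, %origin_a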
/// Every 4 aligned, consecutive bytes of application memory have one origin
/// value associated with them. If these bytes contain uninitialized data
/// coming from 2 different allocations, the last store wins. Because of this,
/// MemorySanitizer reports can show unrelated origins, but this is unlikely in
/// practice.
///
/// Origins are meaningless for fully initialized values, so MemorySanitizer
/// avoids storing origin to memory when a fully initialized value is stored.
/// This way it avoids needlessly overwriting the origin of the 4-byte region
/// on a short (i.e. 1 byte) clean store, and it is also good for performance.
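///
/// For example (simplified), an origin store is guarded by a shadow check
/// along the lines of:
///   if (%shadow != 0)
///     store %origin, %origin_ptr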
/// Ideally, every atomic store of an application value should update the
/// corresponding shadow location in an atomic way. Unfortunately, an atomic
/// store of two disjoint locations cannot be done without severe slowdown.
///
/// Therefore, we implement an approximation that may err on the safe side.
/// In this implementation, every atomically accessed location in the program
/// may only change from (partially) uninitialized to fully initialized, but
/// not the other way around. We load the shadow _after_ the application load,
/// and we store the shadow _before_ the app store. Also, we always store clean
/// shadow (if the application store is atomic). This way, if the store-load
/// pair constitutes a happens-before arc, shadow store and load are correctly
/// ordered such that the load will get either the value that was stored, or
/// some later value (which is always clean).
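///
/// For example (sketch), an atomic application store is instrumented roughly
/// as:
///   store 0, %shadow_ptr            ; clean shadow is stored first
///   store atomic release %v, %p     ; the application store follows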
/// This does not work very well with Compare-And-Swap (CAS) and
/// Read-Modify-Write (RMW) operations. To follow the above logic, CAS and RMW
/// must store the new shadow before the app operation, and load the shadow
/// after the app operation. Computers don't work this way. The current
/// implementation ignores the load aspect of CAS/RMW, always returning a clean
/// value. It implements the store part as a simple atomic store by storing a
/// clean shadow.
/// Instrumenting inline assembly.
///
/// For inline assembly code LLVM has little idea about which memory locations
/// become initialized depending on the arguments. It can be possible to figure
/// out which arguments are meant to point to inputs and outputs, but the
/// actual semantics are only visible at runtime. In the Linux kernel it's
/// also possible that the arguments only indicate the offset for a base taken
/// from a segment register, so it's dangerous to treat any asm() arguments as
/// pointers. We take a conservative approach generating calls to
///   __msan_instrument_asm_store(ptr, size),
/// which defer the memory unpoisoning to the runtime library.
/// The latter can perform more complex address checks to figure out whether
/// it's safe to touch the shadow memory.
/// Like with atomic operations, we call __msan_instrument_asm_store() before
/// the assembly call, so that changes to the shadow memory will be seen by
/// other threads together with main memory initialization.
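///
/// For example (sketch), for an asm statement that writes through %ptr the
/// pass emits something like:
///   call void @__msan_instrument_asm_store(i8* %ptr, i64 %size)
///   call void asm sideeffect "...", ...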
/// KernelMemorySanitizer (KMSAN) implementation.
///
/// The major differences between KMSAN and MSan instrumentation are:
///  - KMSAN always tracks the origins and implies msan-keep-going=true;
///  - KMSAN allocates shadow and origin memory for each page separately, so
///    there are no explicit accesses to shadow and origin in the
///    instrumentation.
///    Shadow and origin values for a particular X-byte memory location
///    (X=1,2,4,8) are accessed through pointers obtained via the
///      __msan_metadata_ptr_for_load_X(ptr)
///      __msan_metadata_ptr_for_store_X(ptr)
///    functions. The corresponding functions check that the X-byte accesses
///    are possible and return the pointers to shadow and origin memory
///    (see the example at the end of this section).
///    Arbitrary sized accesses are handled with:
///      __msan_metadata_ptr_for_load_n(ptr, size)
///      __msan_metadata_ptr_for_store_n(ptr, size);
///  - TLS variables are stored in a single per-task struct. A call to a
///    function __msan_get_context_state() returning a pointer to that struct
///    is inserted into every instrumented function before the entry block;
///  - __msan_warning() takes a 32-bit origin parameter;
///  - local variables are poisoned with __msan_poison_alloca() upon function
///    entry and unpoisoned with __msan_unpoison_alloca() before leaving the
///    function;
///  - the pass doesn't declare any global variables or add global constructors
///    to the translation unit.
///
/// Also, KMSAN currently ignores uninitialized memory passed into inline asm
/// calls, making sure we're on the safe side with regard to possible false
/// positives.
///
/// KernelMemorySanitizer only supports X86_64 at the moment.
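///
/// For example (a simplified sketch of a KMSAN-instrumented 4-byte load):
///   %pair   = call @__msan_metadata_ptr_for_load_4(i8* %addr)
///   %shadow = extractvalue %pair, 0   ; pointer to shadow
///   %origin = extractvalue %pair, 1   ; pointer to origin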
140 //===----------------------------------------------------------------------===//
142 #include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
143 #include "llvm/ADT/APInt.h"
144 #include "llvm/ADT/ArrayRef.h"
145 #include "llvm/ADT/DepthFirstIterator.h"
146 #include "llvm/ADT/SmallSet.h"
147 #include "llvm/ADT/SmallString.h"
148 #include "llvm/ADT/SmallVector.h"
149 #include "llvm/ADT/StringExtras.h"
150 #include "llvm/ADT/StringRef.h"
151 #include "llvm/ADT/Triple.h"
152 #include "llvm/Analysis/TargetLibraryInfo.h"
153 #include "llvm/IR/Argument.h"
154 #include "llvm/IR/Attributes.h"
155 #include "llvm/IR/BasicBlock.h"
156 #include "llvm/IR/CallSite.h"
157 #include "llvm/IR/CallingConv.h"
158 #include "llvm/IR/Constant.h"
159 #include "llvm/IR/Constants.h"
160 #include "llvm/IR/DataLayout.h"
161 #include "llvm/IR/DerivedTypes.h"
162 #include "llvm/IR/Function.h"
163 #include "llvm/IR/GlobalValue.h"
164 #include "llvm/IR/GlobalVariable.h"
165 #include "llvm/IR/IRBuilder.h"
166 #include "llvm/IR/InlineAsm.h"
167 #include "llvm/IR/InstVisitor.h"
168 #include "llvm/IR/InstrTypes.h"
169 #include "llvm/IR/Instruction.h"
170 #include "llvm/IR/Instructions.h"
171 #include "llvm/IR/IntrinsicInst.h"
172 #include "llvm/IR/Intrinsics.h"
173 #include "llvm/IR/LLVMContext.h"
174 #include "llvm/IR/MDBuilder.h"
175 #include "llvm/IR/Module.h"
176 #include "llvm/IR/Type.h"
177 #include "llvm/IR/Value.h"
178 #include "llvm/IR/ValueMap.h"
179 #include "llvm/Pass.h"
180 #include "llvm/Support/AtomicOrdering.h"
181 #include "llvm/Support/Casting.h"
182 #include "llvm/Support/CommandLine.h"
183 #include "llvm/Support/Compiler.h"
184 #include "llvm/Support/Debug.h"
185 #include "llvm/Support/ErrorHandling.h"
186 #include "llvm/Support/MathExtras.h"
187 #include "llvm/Support/raw_ostream.h"
188 #include "llvm/Transforms/Instrumentation.h"
189 #include "llvm/Transforms/Utils/BasicBlockUtils.h"
190 #include "llvm/Transforms/Utils/Local.h"
191 #include "llvm/Transforms/Utils/ModuleUtils.h"
using namespace llvm;
202 #define DEBUG_TYPE "msan"
static const unsigned kOriginSize = 4;
static const unsigned kMinOriginAlignment = 4;
static const unsigned kShadowTLSAlignment = 8;

// These constants must be kept in sync with the ones in msan.h.
static const unsigned kParamTLSSize = 800;
static const unsigned kRetvalTLSSize = 800;

// Access sizes are powers of two: 1, 2, 4, 8.
static const size_t kNumberOfAccessSizes = 4;
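// For example, a 4-byte access uses index log2(4) == 2 into the
// MaybeWarningFn / MaybeStoreOriginFn callback arrays declared below
// (see TypeSizeToSizeIndex()).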
/// Track origins of uninitialized values.
///
/// Adds a section to MemorySanitizer report that points to the allocation
/// (stack or heap) the uninitialized bits came from originally.
static cl::opt<int> ClTrackOrigins("msan-track-origins",
       cl::desc("Track origins (allocation sites) of poisoned memory"),
       cl::Hidden, cl::init(0));
static cl::opt<bool> ClKeepGoing("msan-keep-going",
       cl::desc("keep going after reporting a UMR"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClPoisonStack("msan-poison-stack",
       cl::desc("poison uninitialized stack variables"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClPoisonStackWithCall("msan-poison-stack-with-call",
       cl::desc("poison uninitialized stack variables with a call"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClPoisonStackPattern("msan-poison-stack-pattern",
       cl::desc("poison uninitialized stack variables with the given pattern"),
       cl::Hidden, cl::init(0xff));

static cl::opt<bool> ClPoisonUndef("msan-poison-undef",
       cl::desc("poison undef temps"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmp("msan-handle-icmp",
       cl::desc("propagate shadow through ICmpEQ and ICmpNE"),
       cl::Hidden, cl::init(true));

static cl::opt<bool> ClHandleICmpExact("msan-handle-icmp-exact",
       cl::desc("exact handling of relational integer ICmp"),
       cl::Hidden, cl::init(false));

static cl::opt<bool> ClHandleLifetimeIntrinsics(
    "msan-handle-lifetime-intrinsics",
    cl::desc(
        "when possible, poison scoped variables at the beginning of the scope "
        "(slower, but more precise)"),
    cl::Hidden, cl::init(true));
258 // When compiling the Linux kernel, we sometimes see false positives related to
259 // MSan being unable to understand that inline assembly calls may initialize
261 // This flag makes the compiler conservatively unpoison every memory location
262 // passed into an assembly call. Note that this may cause false positives.
263 // Because it's impossible to figure out the array sizes, we can only unpoison
264 // the first sizeof(type) bytes for each type* pointer.
265 // The instrumentation is only enabled in KMSAN builds, and only if
266 // -msan-handle-asm-conservative is on. This is done because we may want to
267 // quickly disable assembly instrumentation when it breaks.
static cl::opt<bool> ClHandleAsmConservative(
    "msan-handle-asm-conservative",
    cl::desc("conservative handling of inline assembly"), cl::Hidden,
// This flag controls whether we check the shadow of the address
// operand of load or store. Such bugs are very rare, since load from
// a garbage address typically results in SEGV, but still happen
// (e.g. only lower bits of the address are garbage, or the access happens
// early at program startup where malloc-ed memory is more likely to
// be zeroed). As of 2012-08-28 this flag adds 20% slowdown.
static cl::opt<bool> ClCheckAccessAddress("msan-check-access-address",
       cl::desc("report accesses through a pointer which has poisoned shadow"),
       cl::Hidden, cl::init(true));
static cl::opt<bool> ClDumpStrictInstructions("msan-dump-strict-instructions",
       cl::desc("print out instructions with default strict semantics"),
       cl::Hidden, cl::init(false));

static cl::opt<int> ClInstrumentationWithCallThreshold(
    "msan-instrumentation-with-call-threshold",
    cl::desc(
        "If the function being instrumented requires more than "
        "this number of checks and origin stores, use callbacks instead of "
        "inline checks (-1 means never use callbacks)."),
    cl::Hidden, cl::init(3500));
static cl::opt<bool>
    ClEnableKmsan("msan-kernel",
                  cl::desc("Enable KernelMemorySanitizer instrumentation"),
                  cl::Hidden, cl::init(false));
// This is an experiment to enable handling of cases where shadow is a non-zero
// compile-time constant. For some unexplainable reason they were silently
// ignored in the instrumentation.
static cl::opt<bool> ClCheckConstantShadow("msan-check-constant-shadow",
       cl::desc("Insert checks for constant shadow values"),
       cl::Hidden, cl::init(false));
// This is off by default because of a bug in gold:
// https://sourceware.org/bugzilla/show_bug.cgi?id=19002
static cl::opt<bool> ClWithComdat("msan-with-comdat",
       cl::desc("Place MSan constructors in comdat sections"),
       cl::Hidden, cl::init(false));

// These options allow specifying custom memory map parameters.
// See MemoryMapParams for details.
static cl::opt<uint64_t> ClAndMask("msan-and-mask",
       cl::desc("Define custom MSan AndMask"),
       cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClXorMask("msan-xor-mask",
       cl::desc("Define custom MSan XorMask"),
       cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClShadowBase("msan-shadow-base",
       cl::desc("Define custom MSan ShadowBase"),
       cl::Hidden, cl::init(0));

static cl::opt<uint64_t> ClOriginBase("msan-origin-base",
       cl::desc("Define custom MSan OriginBase"),
       cl::Hidden, cl::init(0));
static const char *const kMsanModuleCtorName = "msan.module_ctor";
static const char *const kMsanInitName = "__msan_init";
// Memory map parameters used in application-to-shadow address calculation.
// Offset = (Addr & ~AndMask) ^ XorMask
// Shadow = ShadowBase + Offset
// Origin = OriginBase + Offset
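// For example, with the default Linux/x86_64 parameters defined below
// (AndMask = 0, XorMask = 0x500000000000, ShadowBase = 0,
// OriginBase = 0x100000000000) this reduces to:
//   Shadow = Addr ^ 0x500000000000
//   Origin = (Addr ^ 0x500000000000) + 0x100000000000
// (the origin address is additionally aligned down to 4 bytes, see
// getShadowOriginPtrUserspace()).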
struct MemoryMapParams {
struct PlatformMemoryMapParams {
  const MemoryMapParams *bits32;
  const MemoryMapParams *bits64;
};

} // end anonymous namespace
static const MemoryMapParams Linux_I386_MemoryMapParams = {
  0x000080000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x000040000000,  // OriginBase
};

static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
#ifdef MSAN_LINUX_X86_64_OLD_MAPPING
  0x400000000000,  // AndMask
  0,               // XorMask (not used)
  0,               // ShadowBase (not used)
  0x200000000000,  // OriginBase
#else
  0,               // AndMask (not used)
  0x500000000000,  // XorMask
  0,               // ShadowBase (not used)
  0x100000000000,  // OriginBase
#endif
};
378 static const MemoryMapParams Linux_MIPS64_MemoryMapParams
= {
379 0, // AndMask (not used)
380 0x008000000000, // XorMask
381 0, // ShadowBase (not used)
382 0x002000000000, // OriginBase
386 static const MemoryMapParams Linux_PowerPC64_MemoryMapParams
= {
387 0xE00000000000, // AndMask
388 0x100000000000, // XorMask
389 0x080000000000, // ShadowBase
390 0x1C0000000000, // OriginBase
394 static const MemoryMapParams Linux_AArch64_MemoryMapParams
= {
395 0, // AndMask (not used)
396 0x06000000000, // XorMask
397 0, // ShadowBase (not used)
398 0x01000000000, // OriginBase
402 static const MemoryMapParams FreeBSD_I386_MemoryMapParams
= {
403 0x000180000000, // AndMask
404 0x000040000000, // XorMask
405 0x000020000000, // ShadowBase
406 0x000700000000, // OriginBase
410 static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams
= {
411 0xc00000000000, // AndMask
412 0x200000000000, // XorMask
413 0x100000000000, // ShadowBase
414 0x380000000000, // OriginBase
418 static const MemoryMapParams NetBSD_X86_64_MemoryMapParams
= {
420 0x500000000000, // XorMask
422 0x100000000000, // OriginBase
425 static const PlatformMemoryMapParams Linux_X86_MemoryMapParams
= {
426 &Linux_I386_MemoryMapParams
,
427 &Linux_X86_64_MemoryMapParams
,
430 static const PlatformMemoryMapParams Linux_MIPS_MemoryMapParams
= {
432 &Linux_MIPS64_MemoryMapParams
,
435 static const PlatformMemoryMapParams Linux_PowerPC_MemoryMapParams
= {
437 &Linux_PowerPC64_MemoryMapParams
,
440 static const PlatformMemoryMapParams Linux_ARM_MemoryMapParams
= {
442 &Linux_AArch64_MemoryMapParams
,
445 static const PlatformMemoryMapParams FreeBSD_X86_MemoryMapParams
= {
446 &FreeBSD_I386_MemoryMapParams
,
447 &FreeBSD_X86_64_MemoryMapParams
,
450 static const PlatformMemoryMapParams NetBSD_X86_MemoryMapParams
= {
452 &NetBSD_X86_64_MemoryMapParams
,
/// Instrument functions of a module to detect uninitialized reads.
///
/// Instantiating MemorySanitizer inserts the msan runtime library API function
/// declarations into the module if they don't exist already. Instantiating
/// ensures the __msan_init function is in the list of global constructors for
/// the module.
class MemorySanitizer {
465 MemorySanitizer(Module
&M
, MemorySanitizerOptions Options
)
466 : CompileKernel(Options
.Kernel
), TrackOrigins(Options
.TrackOrigins
),
467 Recover(Options
.Recover
) {
471 // MSan cannot be moved or copied because of MapParams.
472 MemorySanitizer(MemorySanitizer
&&) = delete;
473 MemorySanitizer
&operator=(MemorySanitizer
&&) = delete;
474 MemorySanitizer(const MemorySanitizer
&) = delete;
475 MemorySanitizer
&operator=(const MemorySanitizer
&) = delete;
477 bool sanitizeFunction(Function
&F
, TargetLibraryInfo
&TLI
);
480 friend struct MemorySanitizerVisitor
;
481 friend struct VarArgAMD64Helper
;
482 friend struct VarArgMIPS64Helper
;
483 friend struct VarArgAArch64Helper
;
484 friend struct VarArgPowerPC64Helper
;
486 void initializeModule(Module
&M
);
487 void initializeCallbacks(Module
&M
);
488 void createKernelApi(Module
&M
);
489 void createUserspaceApi(Module
&M
);
491 /// True if we're compiling the Linux kernel.
493 /// Track origins (allocation points) of uninitialized values.
501 // XxxTLS variables represent the per-thread state in MSan and per-task state
503 // For the userspace these point to thread-local globals. In the kernel land
504 // they point to the members of a per-task struct obtained via a call to
505 // __msan_get_context_state().
507 /// Thread-local shadow storage for function parameters.
510 /// Thread-local origin storage for function parameters.
511 Value
*ParamOriginTLS
;
513 /// Thread-local shadow storage for function return value.
516 /// Thread-local origin storage for function return value.
517 Value
*RetvalOriginTLS
;
519 /// Thread-local shadow storage for in-register va_arg function
520 /// parameters (x86_64-specific).
  /// Thread-local origin storage for in-register va_arg function
  /// parameters (x86_64-specific).
  Value *VAArgOriginTLS;
527 /// Thread-local shadow storage for va_arg overflow area
528 /// (x86_64-specific).
529 Value
*VAArgOverflowSizeTLS
;
531 /// Thread-local space used to pass origin value to the UMR reporting
535 /// Are the instrumentation callbacks set up?
536 bool CallbacksInitialized
= false;
538 /// The run-time callback to print a warning.
539 FunctionCallee WarningFn
;
541 // These arrays are indexed by log2(AccessSize).
542 FunctionCallee MaybeWarningFn
[kNumberOfAccessSizes
];
543 FunctionCallee MaybeStoreOriginFn
[kNumberOfAccessSizes
];
545 /// Run-time helper that generates a new origin value for a stack
547 FunctionCallee MsanSetAllocaOrigin4Fn
;
549 /// Run-time helper that poisons stack on function entry.
550 FunctionCallee MsanPoisonStackFn
;
552 /// Run-time helper that records a store (or any event) of an
553 /// uninitialized value and returns an updated origin id encoding this info.
554 FunctionCallee MsanChainOriginFn
;
556 /// MSan runtime replacements for memmove, memcpy and memset.
557 FunctionCallee MemmoveFn
, MemcpyFn
, MemsetFn
;
559 /// KMSAN callback for task-local function argument shadow.
560 StructType
*MsanContextStateTy
;
561 FunctionCallee MsanGetContextStateFn
;
563 /// Functions for poisoning/unpoisoning local variables
564 FunctionCallee MsanPoisonAllocaFn
, MsanUnpoisonAllocaFn
;
566 /// Each of the MsanMetadataPtrXxx functions returns a pair of shadow/origin
568 FunctionCallee MsanMetadataPtrForLoadN
, MsanMetadataPtrForStoreN
;
569 FunctionCallee MsanMetadataPtrForLoad_1_8
[4];
570 FunctionCallee MsanMetadataPtrForStore_1_8
[4];
571 FunctionCallee MsanInstrumentAsmStoreFn
;
573 /// Helper to choose between different MsanMetadataPtrXxx().
574 FunctionCallee
getKmsanShadowOriginAccessFn(bool isStore
, int size
);
576 /// Memory map parameters used in application-to-shadow calculation.
577 const MemoryMapParams
*MapParams
;
579 /// Custom memory map parameters used when -msan-shadow-base or
580 // -msan-origin-base is provided.
581 MemoryMapParams CustomMapParams
;
583 MDNode
*ColdCallWeights
;
585 /// Branch weights for origin store.
586 MDNode
*OriginStoreWeights
;
588 /// An empty volatile inline asm that prevents callback merge.
592 void insertModuleCtor(Module
&M
) {
593 getOrCreateSanitizerCtorAndInitFunctions(
594 M
, kMsanModuleCtorName
, kMsanInitName
,
597 // This callback is invoked when the functions are created the first
598 // time. Hook them into the global ctors list in that case:
599 [&](Function
*Ctor
, FunctionCallee
) {
601 appendToGlobalCtors(M
, Ctor
, 0);
604 Comdat
*MsanCtorComdat
= M
.getOrInsertComdat(kMsanModuleCtorName
);
605 Ctor
->setComdat(MsanCtorComdat
);
606 appendToGlobalCtors(M
, Ctor
, 0, Ctor
);
/// A legacy function pass for msan instrumentation.
///
/// Instruments functions to detect uninitialized reads.
struct MemorySanitizerLegacyPass : public FunctionPass {
614 // Pass identification, replacement for typeid.
617 MemorySanitizerLegacyPass(MemorySanitizerOptions Options
= {})
618 : FunctionPass(ID
), Options(Options
) {}
619 StringRef
getPassName() const override
{ return "MemorySanitizerLegacyPass"; }
621 void getAnalysisUsage(AnalysisUsage
&AU
) const override
{
622 AU
.addRequired
<TargetLibraryInfoWrapperPass
>();
625 bool runOnFunction(Function
&F
) override
{
626 return MSan
->sanitizeFunction(
627 F
, getAnalysis
<TargetLibraryInfoWrapperPass
>().getTLI(F
));
629 bool doInitialization(Module
&M
) override
;
631 Optional
<MemorySanitizer
> MSan
;
632 MemorySanitizerOptions Options
;
635 template <class T
> T
getOptOrDefault(const cl::opt
<T
> &Opt
, T Default
) {
636 return (Opt
.getNumOccurrences() > 0) ? Opt
: Default
;
639 } // end anonymous namespace
641 MemorySanitizerOptions::MemorySanitizerOptions(int TO
, bool R
, bool K
)
642 : Kernel(getOptOrDefault(ClEnableKmsan
, K
)),
643 TrackOrigins(getOptOrDefault(ClTrackOrigins
, Kernel
? 2 : TO
)),
644 Recover(getOptOrDefault(ClKeepGoing
, Kernel
|| R
)) {}
646 PreservedAnalyses
MemorySanitizerPass::run(Function
&F
,
647 FunctionAnalysisManager
&FAM
) {
648 MemorySanitizer
Msan(*F
.getParent(), Options
);
649 if (Msan
.sanitizeFunction(F
, FAM
.getResult
<TargetLibraryAnalysis
>(F
)))
650 return PreservedAnalyses::none();
651 return PreservedAnalyses::all();
654 PreservedAnalyses
MemorySanitizerPass::run(Module
&M
,
655 ModuleAnalysisManager
&AM
) {
657 return PreservedAnalyses::all();
659 return PreservedAnalyses::none();
662 char MemorySanitizerLegacyPass::ID
= 0;
664 INITIALIZE_PASS_BEGIN(MemorySanitizerLegacyPass
, "msan",
665 "MemorySanitizer: detects uninitialized reads.", false,
667 INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass
)
668 INITIALIZE_PASS_END(MemorySanitizerLegacyPass
, "msan",
669 "MemorySanitizer: detects uninitialized reads.", false,
673 llvm::createMemorySanitizerLegacyPassPass(MemorySanitizerOptions Options
) {
674 return new MemorySanitizerLegacyPass(Options
);
/// Create a non-const global initialized with the given string.
///
/// Creates a writable global for Str so that we can pass it to the
/// run-time lib. The runtime uses the first 4 bytes of the string to store
/// the frame ID, so the string needs to be mutable.
682 static GlobalVariable
*createPrivateNonConstGlobalForString(Module
&M
,
684 Constant
*StrConst
= ConstantDataArray::getString(M
.getContext(), Str
);
685 return new GlobalVariable(M
, StrConst
->getType(), /*isConstant=*/false,
686 GlobalValue::PrivateLinkage
, StrConst
, "");
689 /// Create KMSAN API callbacks.
690 void MemorySanitizer::createKernelApi(Module
&M
) {
693 // These will be initialized in insertKmsanPrologue().
695 RetvalOriginTLS
= nullptr;
697 ParamOriginTLS
= nullptr;
699 VAArgOriginTLS
= nullptr;
700 VAArgOverflowSizeTLS
= nullptr;
701 // OriginTLS is unused in the kernel.
704 // __msan_warning() in the kernel takes an origin.
705 WarningFn
= M
.getOrInsertFunction("__msan_warning", IRB
.getVoidTy(),
707 // Requests the per-task context state (kmsan_context_state*) from the
709 MsanContextStateTy
= StructType::get(
710 ArrayType::get(IRB
.getInt64Ty(), kParamTLSSize
/ 8),
711 ArrayType::get(IRB
.getInt64Ty(), kRetvalTLSSize
/ 8),
712 ArrayType::get(IRB
.getInt64Ty(), kParamTLSSize
/ 8),
713 ArrayType::get(IRB
.getInt64Ty(), kParamTLSSize
/ 8), /* va_arg_origin */
714 IRB
.getInt64Ty(), ArrayType::get(OriginTy
, kParamTLSSize
/ 4), OriginTy
,
716 MsanGetContextStateFn
= M
.getOrInsertFunction(
717 "__msan_get_context_state", PointerType::get(MsanContextStateTy
, 0));
719 Type
*RetTy
= StructType::get(PointerType::get(IRB
.getInt8Ty(), 0),
720 PointerType::get(IRB
.getInt32Ty(), 0));
722 for (int ind
= 0, size
= 1; ind
< 4; ind
++, size
<<= 1) {
723 std::string name_load
=
724 "__msan_metadata_ptr_for_load_" + std::to_string(size
);
725 std::string name_store
=
726 "__msan_metadata_ptr_for_store_" + std::to_string(size
);
727 MsanMetadataPtrForLoad_1_8
[ind
] = M
.getOrInsertFunction(
728 name_load
, RetTy
, PointerType::get(IRB
.getInt8Ty(), 0));
729 MsanMetadataPtrForStore_1_8
[ind
] = M
.getOrInsertFunction(
730 name_store
, RetTy
, PointerType::get(IRB
.getInt8Ty(), 0));
733 MsanMetadataPtrForLoadN
= M
.getOrInsertFunction(
734 "__msan_metadata_ptr_for_load_n", RetTy
,
735 PointerType::get(IRB
.getInt8Ty(), 0), IRB
.getInt64Ty());
736 MsanMetadataPtrForStoreN
= M
.getOrInsertFunction(
737 "__msan_metadata_ptr_for_store_n", RetTy
,
738 PointerType::get(IRB
.getInt8Ty(), 0), IRB
.getInt64Ty());
740 // Functions for poisoning and unpoisoning memory.
742 M
.getOrInsertFunction("__msan_poison_alloca", IRB
.getVoidTy(),
743 IRB
.getInt8PtrTy(), IntptrTy
, IRB
.getInt8PtrTy());
744 MsanUnpoisonAllocaFn
= M
.getOrInsertFunction(
745 "__msan_unpoison_alloca", IRB
.getVoidTy(), IRB
.getInt8PtrTy(), IntptrTy
);
748 static Constant
*getOrInsertGlobal(Module
&M
, StringRef Name
, Type
*Ty
) {
749 return M
.getOrInsertGlobal(Name
, Ty
, [&] {
750 return new GlobalVariable(M
, Ty
, false, GlobalVariable::ExternalLinkage
,
751 nullptr, Name
, nullptr,
752 GlobalVariable::InitialExecTLSModel
);
756 /// Insert declarations for userspace-specific functions and globals.
757 void MemorySanitizer::createUserspaceApi(Module
&M
) {
759 // Create the callback.
760 // FIXME: this function should have "Cold" calling conv,
761 // which is not yet implemented.
762 StringRef WarningFnName
= Recover
? "__msan_warning"
763 : "__msan_warning_noreturn";
764 WarningFn
= M
.getOrInsertFunction(WarningFnName
, IRB
.getVoidTy());
766 // Create the global TLS variables.
768 getOrInsertGlobal(M
, "__msan_retval_tls",
769 ArrayType::get(IRB
.getInt64Ty(), kRetvalTLSSize
/ 8));
771 RetvalOriginTLS
= getOrInsertGlobal(M
, "__msan_retval_origin_tls", OriginTy
);
774 getOrInsertGlobal(M
, "__msan_param_tls",
775 ArrayType::get(IRB
.getInt64Ty(), kParamTLSSize
/ 8));
778 getOrInsertGlobal(M
, "__msan_param_origin_tls",
779 ArrayType::get(OriginTy
, kParamTLSSize
/ 4));
782 getOrInsertGlobal(M
, "__msan_va_arg_tls",
783 ArrayType::get(IRB
.getInt64Ty(), kParamTLSSize
/ 8));
786 getOrInsertGlobal(M
, "__msan_va_arg_origin_tls",
787 ArrayType::get(OriginTy
, kParamTLSSize
/ 4));
789 VAArgOverflowSizeTLS
=
790 getOrInsertGlobal(M
, "__msan_va_arg_overflow_size_tls", IRB
.getInt64Ty());
791 OriginTLS
= getOrInsertGlobal(M
, "__msan_origin_tls", IRB
.getInt32Ty());
793 for (size_t AccessSizeIndex
= 0; AccessSizeIndex
< kNumberOfAccessSizes
;
795 unsigned AccessSize
= 1 << AccessSizeIndex
;
796 std::string FunctionName
= "__msan_maybe_warning_" + itostr(AccessSize
);
797 MaybeWarningFn
[AccessSizeIndex
] = M
.getOrInsertFunction(
798 FunctionName
, IRB
.getVoidTy(), IRB
.getIntNTy(AccessSize
* 8),
801 FunctionName
= "__msan_maybe_store_origin_" + itostr(AccessSize
);
802 MaybeStoreOriginFn
[AccessSizeIndex
] = M
.getOrInsertFunction(
803 FunctionName
, IRB
.getVoidTy(), IRB
.getIntNTy(AccessSize
* 8),
804 IRB
.getInt8PtrTy(), IRB
.getInt32Ty());
807 MsanSetAllocaOrigin4Fn
= M
.getOrInsertFunction(
808 "__msan_set_alloca_origin4", IRB
.getVoidTy(), IRB
.getInt8PtrTy(), IntptrTy
,
809 IRB
.getInt8PtrTy(), IntptrTy
);
811 M
.getOrInsertFunction("__msan_poison_stack", IRB
.getVoidTy(),
812 IRB
.getInt8PtrTy(), IntptrTy
);
815 /// Insert extern declaration of runtime-provided functions and globals.
816 void MemorySanitizer::initializeCallbacks(Module
&M
) {
817 // Only do this once.
818 if (CallbacksInitialized
)
822 // Initialize callbacks that are common for kernel and userspace
824 MsanChainOriginFn
= M
.getOrInsertFunction(
825 "__msan_chain_origin", IRB
.getInt32Ty(), IRB
.getInt32Ty());
826 MemmoveFn
= M
.getOrInsertFunction(
827 "__msan_memmove", IRB
.getInt8PtrTy(), IRB
.getInt8PtrTy(),
828 IRB
.getInt8PtrTy(), IntptrTy
);
829 MemcpyFn
= M
.getOrInsertFunction(
830 "__msan_memcpy", IRB
.getInt8PtrTy(), IRB
.getInt8PtrTy(), IRB
.getInt8PtrTy(),
832 MemsetFn
= M
.getOrInsertFunction(
833 "__msan_memset", IRB
.getInt8PtrTy(), IRB
.getInt8PtrTy(), IRB
.getInt32Ty(),
835 // We insert an empty inline asm after __msan_report* to avoid callback merge.
836 EmptyAsm
= InlineAsm::get(FunctionType::get(IRB
.getVoidTy(), false),
837 StringRef(""), StringRef(""),
838 /*hasSideEffects=*/true);
840 MsanInstrumentAsmStoreFn
=
841 M
.getOrInsertFunction("__msan_instrument_asm_store", IRB
.getVoidTy(),
842 PointerType::get(IRB
.getInt8Ty(), 0), IntptrTy
);
847 createUserspaceApi(M
);
849 CallbacksInitialized
= true;
852 FunctionCallee
MemorySanitizer::getKmsanShadowOriginAccessFn(bool isStore
,
854 FunctionCallee
*Fns
=
855 isStore
? MsanMetadataPtrForStore_1_8
: MsanMetadataPtrForLoad_1_8
;
870 /// Module-level initialization.
872 /// inserts a call to __msan_init to the module's constructor list.
873 void MemorySanitizer::initializeModule(Module
&M
) {
874 auto &DL
= M
.getDataLayout();
876 bool ShadowPassed
= ClShadowBase
.getNumOccurrences() > 0;
877 bool OriginPassed
= ClOriginBase
.getNumOccurrences() > 0;
878 // Check the overrides first
879 if (ShadowPassed
|| OriginPassed
) {
880 CustomMapParams
.AndMask
= ClAndMask
;
881 CustomMapParams
.XorMask
= ClXorMask
;
882 CustomMapParams
.ShadowBase
= ClShadowBase
;
883 CustomMapParams
.OriginBase
= ClOriginBase
;
884 MapParams
= &CustomMapParams
;
886 Triple
TargetTriple(M
.getTargetTriple());
887 switch (TargetTriple
.getOS()) {
888 case Triple::FreeBSD
:
889 switch (TargetTriple
.getArch()) {
891 MapParams
= FreeBSD_X86_MemoryMapParams
.bits64
;
894 MapParams
= FreeBSD_X86_MemoryMapParams
.bits32
;
897 report_fatal_error("unsupported architecture");
901 switch (TargetTriple
.getArch()) {
903 MapParams
= NetBSD_X86_MemoryMapParams
.bits64
;
906 report_fatal_error("unsupported architecture");
910 switch (TargetTriple
.getArch()) {
912 MapParams
= Linux_X86_MemoryMapParams
.bits64
;
915 MapParams
= Linux_X86_MemoryMapParams
.bits32
;
918 case Triple::mips64el
:
919 MapParams
= Linux_MIPS_MemoryMapParams
.bits64
;
922 case Triple::ppc64le
:
923 MapParams
= Linux_PowerPC_MemoryMapParams
.bits64
;
925 case Triple::aarch64
:
926 case Triple::aarch64_be
:
927 MapParams
= Linux_ARM_MemoryMapParams
.bits64
;
930 report_fatal_error("unsupported architecture");
934 report_fatal_error("unsupported operating system");
938 C
= &(M
.getContext());
940 IntptrTy
= IRB
.getIntPtrTy(DL
);
941 OriginTy
= IRB
.getInt32Ty();
943 ColdCallWeights
= MDBuilder(*C
).createBranchWeights(1, 1000);
944 OriginStoreWeights
= MDBuilder(*C
).createBranchWeights(1, 1000);
946 if (!CompileKernel
) {
948 M
.getOrInsertGlobal("__msan_track_origins", IRB
.getInt32Ty(), [&] {
949 return new GlobalVariable(
950 M
, IRB
.getInt32Ty(), true, GlobalValue::WeakODRLinkage
,
951 IRB
.getInt32(TrackOrigins
), "__msan_track_origins");
955 M
.getOrInsertGlobal("__msan_keep_going", IRB
.getInt32Ty(), [&] {
956 return new GlobalVariable(M
, IRB
.getInt32Ty(), true,
957 GlobalValue::WeakODRLinkage
,
958 IRB
.getInt32(Recover
), "__msan_keep_going");
963 bool MemorySanitizerLegacyPass::doInitialization(Module
&M
) {
966 MSan
.emplace(M
, Options
);
972 /// A helper class that handles instrumentation of VarArg
973 /// functions on a particular platform.
975 /// Implementations are expected to insert the instrumentation
976 /// necessary to propagate argument shadow through VarArg function
977 /// calls. Visit* methods are called during an InstVisitor pass over
978 /// the function, and should avoid creating new basic blocks. A new
979 /// instance of this class is created for each instrumented function.
980 struct VarArgHelper
{
981 virtual ~VarArgHelper() = default;
983 /// Visit a CallSite.
984 virtual void visitCallSite(CallSite
&CS
, IRBuilder
<> &IRB
) = 0;
986 /// Visit a va_start call.
987 virtual void visitVAStartInst(VAStartInst
&I
) = 0;
989 /// Visit a va_copy call.
990 virtual void visitVACopyInst(VACopyInst
&I
) = 0;
992 /// Finalize function instrumentation.
994 /// This method is called after visiting all interesting (see above)
995 /// instructions in a function.
996 virtual void finalizeInstrumentation() = 0;
999 struct MemorySanitizerVisitor
;
1001 } // end anonymous namespace
1003 static VarArgHelper
*CreateVarArgHelper(Function
&Func
, MemorySanitizer
&Msan
,
1004 MemorySanitizerVisitor
&Visitor
);
1006 static unsigned TypeSizeToSizeIndex(unsigned TypeSize
) {
1007 if (TypeSize
<= 8) return 0;
1008 return Log2_32_Ceil((TypeSize
+ 7) / 8);
/// This class does all the work for a given function. Store and Load
/// instructions store and load corresponding shadow and origin
/// values. Most instructions propagate shadow from arguments to their
/// return values. Certain instructions (most importantly, BranchInst)
/// test their argument shadow and print reports (with a runtime call) if it's
/// non-zero.
1019 struct MemorySanitizerVisitor
: public InstVisitor
<MemorySanitizerVisitor
> {
1021 MemorySanitizer
&MS
;
1022 SmallVector
<PHINode
*, 16> ShadowPHINodes
, OriginPHINodes
;
1023 ValueMap
<Value
*, Value
*> ShadowMap
, OriginMap
;
1024 std::unique_ptr
<VarArgHelper
> VAHelper
;
1025 const TargetLibraryInfo
*TLI
;
1026 BasicBlock
*ActualFnStart
;
1028 // The following flags disable parts of MSan instrumentation based on
1029 // blacklist contents and command-line options.
1031 bool PropagateShadow
;
1034 bool CheckReturnValue
;
1036 struct ShadowOriginAndInsertPoint
{
1039 Instruction
*OrigIns
;
1041 ShadowOriginAndInsertPoint(Value
*S
, Value
*O
, Instruction
*I
)
1042 : Shadow(S
), Origin(O
), OrigIns(I
) {}
1044 SmallVector
<ShadowOriginAndInsertPoint
, 16> InstrumentationList
;
1045 bool InstrumentLifetimeStart
= ClHandleLifetimeIntrinsics
;
1046 SmallSet
<AllocaInst
*, 16> AllocaSet
;
1047 SmallVector
<std::pair
<IntrinsicInst
*, AllocaInst
*>, 16> LifetimeStartList
;
1048 SmallVector
<StoreInst
*, 16> StoreList
;
1050 MemorySanitizerVisitor(Function
&F
, MemorySanitizer
&MS
,
1051 const TargetLibraryInfo
&TLI
)
1052 : F(F
), MS(MS
), VAHelper(CreateVarArgHelper(F
, MS
, *this)), TLI(&TLI
) {
1053 bool SanitizeFunction
= F
.hasFnAttribute(Attribute::SanitizeMemory
);
1054 InsertChecks
= SanitizeFunction
;
1055 PropagateShadow
= SanitizeFunction
;
1056 PoisonStack
= SanitizeFunction
&& ClPoisonStack
;
1057 PoisonUndef
= SanitizeFunction
&& ClPoisonUndef
;
1058 // FIXME: Consider using SpecialCaseList to specify a list of functions that
1059 // must always return fully initialized values. For now, we hardcode "main".
1060 CheckReturnValue
= SanitizeFunction
&& (F
.getName() == "main");
1062 MS
.initializeCallbacks(*F
.getParent());
1063 if (MS
.CompileKernel
)
1064 ActualFnStart
= insertKmsanPrologue(F
);
1066 ActualFnStart
= &F
.getEntryBlock();
1068 LLVM_DEBUG(if (!InsertChecks
) dbgs()
1069 << "MemorySanitizer is not inserting checks into '"
1070 << F
.getName() << "'\n");
1073 Value
*updateOrigin(Value
*V
, IRBuilder
<> &IRB
) {
1074 if (MS
.TrackOrigins
<= 1) return V
;
1075 return IRB
.CreateCall(MS
.MsanChainOriginFn
, V
);
  Value *originToIntptr(IRBuilder<> &IRB, Value *Origin) {
    const DataLayout &DL = F.getParent()->getDataLayout();
    unsigned IntptrSize = DL.getTypeStoreSize(MS.IntptrTy);
    if (IntptrSize == kOriginSize) return Origin;
    assert(IntptrSize == kOriginSize * 2);
    Origin = IRB.CreateIntCast(Origin, MS.IntptrTy, /* isSigned */ false);
    return IRB.CreateOr(Origin, IRB.CreateShl(Origin, kOriginSize * 8));
  }
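
  // For example (a sketch for a 64-bit target): originToIntptr() above widens
  // origin 0x1234 to 0x0000123400001234, so that a single 8-byte store in
  // paintOrigin() below fills two adjacent 4-byte origin slots at once.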
1087 /// Fill memory range with the given origin value.
1088 void paintOrigin(IRBuilder
<> &IRB
, Value
*Origin
, Value
*OriginPtr
,
1089 unsigned Size
, unsigned Alignment
) {
1090 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1091 unsigned IntptrAlignment
= DL
.getABITypeAlignment(MS
.IntptrTy
);
1092 unsigned IntptrSize
= DL
.getTypeStoreSize(MS
.IntptrTy
);
1093 assert(IntptrAlignment
>= kMinOriginAlignment
);
1094 assert(IntptrSize
>= kOriginSize
);
1097 unsigned CurrentAlignment
= Alignment
;
1098 if (Alignment
>= IntptrAlignment
&& IntptrSize
> kOriginSize
) {
1099 Value
*IntptrOrigin
= originToIntptr(IRB
, Origin
);
1100 Value
*IntptrOriginPtr
=
1101 IRB
.CreatePointerCast(OriginPtr
, PointerType::get(MS
.IntptrTy
, 0));
1102 for (unsigned i
= 0; i
< Size
/ IntptrSize
; ++i
) {
1103 Value
*Ptr
= i
? IRB
.CreateConstGEP1_32(MS
.IntptrTy
, IntptrOriginPtr
, i
)
1105 IRB
.CreateAlignedStore(IntptrOrigin
, Ptr
, CurrentAlignment
);
1106 Ofs
+= IntptrSize
/ kOriginSize
;
1107 CurrentAlignment
= IntptrAlignment
;
1111 for (unsigned i
= Ofs
; i
< (Size
+ kOriginSize
- 1) / kOriginSize
; ++i
) {
1113 i
? IRB
.CreateConstGEP1_32(MS
.OriginTy
, OriginPtr
, i
) : OriginPtr
;
1114 IRB
.CreateAlignedStore(Origin
, GEP
, CurrentAlignment
);
1115 CurrentAlignment
= kMinOriginAlignment
;
1119 void storeOrigin(IRBuilder
<> &IRB
, Value
*Addr
, Value
*Shadow
, Value
*Origin
,
1120 Value
*OriginPtr
, unsigned Alignment
, bool AsCall
) {
1121 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1122 unsigned OriginAlignment
= std::max(kMinOriginAlignment
, Alignment
);
1123 unsigned StoreSize
= DL
.getTypeStoreSize(Shadow
->getType());
1124 if (Shadow
->getType()->isAggregateType()) {
1125 paintOrigin(IRB
, updateOrigin(Origin
, IRB
), OriginPtr
, StoreSize
,
1128 Value
*ConvertedShadow
= convertToShadowTyNoVec(Shadow
, IRB
);
1129 Constant
*ConstantShadow
= dyn_cast_or_null
<Constant
>(ConvertedShadow
);
1130 if (ConstantShadow
) {
1131 if (ClCheckConstantShadow
&& !ConstantShadow
->isZeroValue())
1132 paintOrigin(IRB
, updateOrigin(Origin
, IRB
), OriginPtr
, StoreSize
,
1137 unsigned TypeSizeInBits
=
1138 DL
.getTypeSizeInBits(ConvertedShadow
->getType());
1139 unsigned SizeIndex
= TypeSizeToSizeIndex(TypeSizeInBits
);
1140 if (AsCall
&& SizeIndex
< kNumberOfAccessSizes
&& !MS
.CompileKernel
) {
1141 FunctionCallee Fn
= MS
.MaybeStoreOriginFn
[SizeIndex
];
1142 Value
*ConvertedShadow2
= IRB
.CreateZExt(
1143 ConvertedShadow
, IRB
.getIntNTy(8 * (1 << SizeIndex
)));
1144 IRB
.CreateCall(Fn
, {ConvertedShadow2
,
1145 IRB
.CreatePointerCast(Addr
, IRB
.getInt8PtrTy()),
1148 Value
*Cmp
= IRB
.CreateICmpNE(
1149 ConvertedShadow
, getCleanShadow(ConvertedShadow
), "_mscmp");
1150 Instruction
*CheckTerm
= SplitBlockAndInsertIfThen(
1151 Cmp
, &*IRB
.GetInsertPoint(), false, MS
.OriginStoreWeights
);
1152 IRBuilder
<> IRBNew(CheckTerm
);
1153 paintOrigin(IRBNew
, updateOrigin(Origin
, IRBNew
), OriginPtr
, StoreSize
,
1159 void materializeStores(bool InstrumentWithCalls
) {
1160 for (StoreInst
*SI
: StoreList
) {
1161 IRBuilder
<> IRB(SI
);
1162 Value
*Val
= SI
->getValueOperand();
1163 Value
*Addr
= SI
->getPointerOperand();
1164 Value
*Shadow
= SI
->isAtomic() ? getCleanShadow(Val
) : getShadow(Val
);
1165 Value
*ShadowPtr
, *OriginPtr
;
1166 Type
*ShadowTy
= Shadow
->getType();
1167 unsigned Alignment
= SI
->getAlignment();
1168 unsigned OriginAlignment
= std::max(kMinOriginAlignment
, Alignment
);
1169 std::tie(ShadowPtr
, OriginPtr
) =
1170 getShadowOriginPtr(Addr
, IRB
, ShadowTy
, Alignment
, /*isStore*/ true);
1172 StoreInst
*NewSI
= IRB
.CreateAlignedStore(Shadow
, ShadowPtr
, Alignment
);
1173 LLVM_DEBUG(dbgs() << " STORE: " << *NewSI
<< "\n");
1177 SI
->setOrdering(addReleaseOrdering(SI
->getOrdering()));
1179 if (MS
.TrackOrigins
&& !SI
->isAtomic())
1180 storeOrigin(IRB
, Addr
, Shadow
, getOrigin(Val
), OriginPtr
,
1181 OriginAlignment
, InstrumentWithCalls
);
1185 /// Helper function to insert a warning at IRB's current insert point.
1186 void insertWarningFn(IRBuilder
<> &IRB
, Value
*Origin
) {
1188 Origin
= (Value
*)IRB
.getInt32(0);
1189 if (MS
.CompileKernel
) {
1190 IRB
.CreateCall(MS
.WarningFn
, Origin
);
1192 if (MS
.TrackOrigins
) {
1193 IRB
.CreateStore(Origin
, MS
.OriginTLS
);
1195 IRB
.CreateCall(MS
.WarningFn
, {});
1197 IRB
.CreateCall(MS
.EmptyAsm
, {});
1198 // FIXME: Insert UnreachableInst if !MS.Recover?
1199 // This may invalidate some of the following checks and needs to be done
1203 void materializeOneCheck(Instruction
*OrigIns
, Value
*Shadow
, Value
*Origin
,
1205 IRBuilder
<> IRB(OrigIns
);
1206 LLVM_DEBUG(dbgs() << " SHAD0 : " << *Shadow
<< "\n");
1207 Value
*ConvertedShadow
= convertToShadowTyNoVec(Shadow
, IRB
);
1208 LLVM_DEBUG(dbgs() << " SHAD1 : " << *ConvertedShadow
<< "\n");
1210 Constant
*ConstantShadow
= dyn_cast_or_null
<Constant
>(ConvertedShadow
);
1211 if (ConstantShadow
) {
1212 if (ClCheckConstantShadow
&& !ConstantShadow
->isZeroValue()) {
1213 insertWarningFn(IRB
, Origin
);
1218 const DataLayout
&DL
= OrigIns
->getModule()->getDataLayout();
1220 unsigned TypeSizeInBits
= DL
.getTypeSizeInBits(ConvertedShadow
->getType());
1221 unsigned SizeIndex
= TypeSizeToSizeIndex(TypeSizeInBits
);
1222 if (AsCall
&& SizeIndex
< kNumberOfAccessSizes
&& !MS
.CompileKernel
) {
1223 FunctionCallee Fn
= MS
.MaybeWarningFn
[SizeIndex
];
1224 Value
*ConvertedShadow2
=
1225 IRB
.CreateZExt(ConvertedShadow
, IRB
.getIntNTy(8 * (1 << SizeIndex
)));
1226 IRB
.CreateCall(Fn
, {ConvertedShadow2
, MS
.TrackOrigins
&& Origin
1228 : (Value
*)IRB
.getInt32(0)});
1230 Value
*Cmp
= IRB
.CreateICmpNE(ConvertedShadow
,
1231 getCleanShadow(ConvertedShadow
), "_mscmp");
1232 Instruction
*CheckTerm
= SplitBlockAndInsertIfThen(
1234 /* Unreachable */ !MS
.Recover
, MS
.ColdCallWeights
);
1236 IRB
.SetInsertPoint(CheckTerm
);
1237 insertWarningFn(IRB
, Origin
);
1238 LLVM_DEBUG(dbgs() << " CHECK: " << *Cmp
<< "\n");
1242 void materializeChecks(bool InstrumentWithCalls
) {
1243 for (const auto &ShadowData
: InstrumentationList
) {
1244 Instruction
*OrigIns
= ShadowData
.OrigIns
;
1245 Value
*Shadow
= ShadowData
.Shadow
;
1246 Value
*Origin
= ShadowData
.Origin
;
1247 materializeOneCheck(OrigIns
, Shadow
, Origin
, InstrumentWithCalls
);
1249 LLVM_DEBUG(dbgs() << "DONE:\n" << F
);
1252 BasicBlock
*insertKmsanPrologue(Function
&F
) {
1254 SplitBlock(&F
.getEntryBlock(), F
.getEntryBlock().getFirstNonPHI());
1255 IRBuilder
<> IRB(F
.getEntryBlock().getFirstNonPHI());
1256 Value
*ContextState
= IRB
.CreateCall(MS
.MsanGetContextStateFn
, {});
1257 Constant
*Zero
= IRB
.getInt32(0);
1258 MS
.ParamTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1259 {Zero
, IRB
.getInt32(0)}, "param_shadow");
1260 MS
.RetvalTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1261 {Zero
, IRB
.getInt32(1)}, "retval_shadow");
1262 MS
.VAArgTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1263 {Zero
, IRB
.getInt32(2)}, "va_arg_shadow");
1264 MS
.VAArgOriginTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1265 {Zero
, IRB
.getInt32(3)}, "va_arg_origin");
1266 MS
.VAArgOverflowSizeTLS
=
1267 IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1268 {Zero
, IRB
.getInt32(4)}, "va_arg_overflow_size");
1269 MS
.ParamOriginTLS
= IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1270 {Zero
, IRB
.getInt32(5)}, "param_origin");
1271 MS
.RetvalOriginTLS
=
1272 IRB
.CreateGEP(MS
.MsanContextStateTy
, ContextState
,
1273 {Zero
, IRB
.getInt32(6)}, "retval_origin");
1277 /// Add MemorySanitizer instrumentation to a function.
1278 bool runOnFunction() {
1279 // In the presence of unreachable blocks, we may see Phi nodes with
1280 // incoming nodes from such blocks. Since InstVisitor skips unreachable
1281 // blocks, such nodes will not have any shadow value associated with them.
1282 // It's easier to remove unreachable blocks than deal with missing shadow.
1283 removeUnreachableBlocks(F
);
1285 // Iterate all BBs in depth-first order and create shadow instructions
1286 // for all instructions (where applicable).
1287 // For PHI nodes we create dummy shadow PHIs which will be finalized later.
1288 for (BasicBlock
*BB
: depth_first(ActualFnStart
))
1291 // Finalize PHI nodes.
1292 for (PHINode
*PN
: ShadowPHINodes
) {
1293 PHINode
*PNS
= cast
<PHINode
>(getShadow(PN
));
1294 PHINode
*PNO
= MS
.TrackOrigins
? cast
<PHINode
>(getOrigin(PN
)) : nullptr;
1295 size_t NumValues
= PN
->getNumIncomingValues();
1296 for (size_t v
= 0; v
< NumValues
; v
++) {
1297 PNS
->addIncoming(getShadow(PN
, v
), PN
->getIncomingBlock(v
));
1298 if (PNO
) PNO
->addIncoming(getOrigin(PN
, v
), PN
->getIncomingBlock(v
));
1302 VAHelper
->finalizeInstrumentation();
1304 // Poison llvm.lifetime.start intrinsics, if we haven't fallen back to
1305 // instrumenting only allocas.
1306 if (InstrumentLifetimeStart
) {
1307 for (auto Item
: LifetimeStartList
) {
1308 instrumentAlloca(*Item
.second
, Item
.first
);
1309 AllocaSet
.erase(Item
.second
);
1312 // Poison the allocas for which we didn't instrument the corresponding
1313 // lifetime intrinsics.
1314 for (AllocaInst
*AI
: AllocaSet
)
1315 instrumentAlloca(*AI
);
1317 bool InstrumentWithCalls
= ClInstrumentationWithCallThreshold
>= 0 &&
1318 InstrumentationList
.size() + StoreList
.size() >
1319 (unsigned)ClInstrumentationWithCallThreshold
;
1321 // Insert shadow value checks.
1322 materializeChecks(InstrumentWithCalls
);
1324 // Delayed instrumentation of StoreInst.
1325 // This may not add new address checks.
1326 materializeStores(InstrumentWithCalls
);
1331 /// Compute the shadow type that corresponds to a given Value.
1332 Type
*getShadowTy(Value
*V
) {
1333 return getShadowTy(V
->getType());
1336 /// Compute the shadow type that corresponds to a given Type.
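  ///
  /// For example (sketch): i32 maps to i32, <4 x float> to <4 x i32>, and
  /// {i8*, double} to {i64, i64} on a typical 64-bit target.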
1337 Type
*getShadowTy(Type
*OrigTy
) {
1338 if (!OrigTy
->isSized()) {
1341 // For integer type, shadow is the same as the original type.
1342 // This may return weird-sized types like i1.
1343 if (IntegerType
*IT
= dyn_cast
<IntegerType
>(OrigTy
))
1345 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1346 if (VectorType
*VT
= dyn_cast
<VectorType
>(OrigTy
)) {
1347 uint32_t EltSize
= DL
.getTypeSizeInBits(VT
->getElementType());
1348 return VectorType::get(IntegerType::get(*MS
.C
, EltSize
),
1349 VT
->getNumElements());
1351 if (ArrayType
*AT
= dyn_cast
<ArrayType
>(OrigTy
)) {
1352 return ArrayType::get(getShadowTy(AT
->getElementType()),
1353 AT
->getNumElements());
1355 if (StructType
*ST
= dyn_cast
<StructType
>(OrigTy
)) {
1356 SmallVector
<Type
*, 4> Elements
;
1357 for (unsigned i
= 0, n
= ST
->getNumElements(); i
< n
; i
++)
1358 Elements
.push_back(getShadowTy(ST
->getElementType(i
)));
1359 StructType
*Res
= StructType::get(*MS
.C
, Elements
, ST
->isPacked());
1360 LLVM_DEBUG(dbgs() << "getShadowTy: " << *ST
<< " ===> " << *Res
<< "\n");
1363 uint32_t TypeSize
= DL
.getTypeSizeInBits(OrigTy
);
1364 return IntegerType::get(*MS
.C
, TypeSize
);
1367 /// Flatten a vector type.
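  ///
  /// For example (sketch): <4 x i32> is flattened to i128; non-vector types
  /// are returned unchanged.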
1368 Type
*getShadowTyNoVec(Type
*ty
) {
1369 if (VectorType
*vt
= dyn_cast
<VectorType
>(ty
))
1370 return IntegerType::get(*MS
.C
, vt
->getBitWidth());
  /// Convert a shadow value to its flattened variant.
1375 Value
*convertToShadowTyNoVec(Value
*V
, IRBuilder
<> &IRB
) {
1376 Type
*Ty
= V
->getType();
1377 Type
*NoVecTy
= getShadowTyNoVec(Ty
);
1378 if (Ty
== NoVecTy
) return V
;
1379 return IRB
.CreateBitCast(V
, NoVecTy
);
1382 /// Compute the integer shadow offset that corresponds to a given
1383 /// application address.
1385 /// Offset = (Addr & ~AndMask) ^ XorMask
1386 Value
*getShadowPtrOffset(Value
*Addr
, IRBuilder
<> &IRB
) {
1387 Value
*OffsetLong
= IRB
.CreatePointerCast(Addr
, MS
.IntptrTy
);
1389 uint64_t AndMask
= MS
.MapParams
->AndMask
;
1392 IRB
.CreateAnd(OffsetLong
, ConstantInt::get(MS
.IntptrTy
, ~AndMask
));
1394 uint64_t XorMask
= MS
.MapParams
->XorMask
;
1397 IRB
.CreateXor(OffsetLong
, ConstantInt::get(MS
.IntptrTy
, XorMask
));
1401 /// Compute the shadow and origin addresses corresponding to a given
1402 /// application address.
1404 /// Shadow = ShadowBase + Offset
1405 /// Origin = (OriginBase + Offset) & ~3ULL
1406 std::pair
<Value
*, Value
*> getShadowOriginPtrUserspace(Value
*Addr
,
1409 unsigned Alignment
) {
1410 Value
*ShadowOffset
= getShadowPtrOffset(Addr
, IRB
);
1411 Value
*ShadowLong
= ShadowOffset
;
1412 uint64_t ShadowBase
= MS
.MapParams
->ShadowBase
;
1413 if (ShadowBase
!= 0) {
1415 IRB
.CreateAdd(ShadowLong
,
1416 ConstantInt::get(MS
.IntptrTy
, ShadowBase
));
1419 IRB
.CreateIntToPtr(ShadowLong
, PointerType::get(ShadowTy
, 0));
1420 Value
*OriginPtr
= nullptr;
1421 if (MS
.TrackOrigins
) {
1422 Value
*OriginLong
= ShadowOffset
;
1423 uint64_t OriginBase
= MS
.MapParams
->OriginBase
;
1424 if (OriginBase
!= 0)
1425 OriginLong
= IRB
.CreateAdd(OriginLong
,
1426 ConstantInt::get(MS
.IntptrTy
, OriginBase
));
1427 if (Alignment
< kMinOriginAlignment
) {
1428 uint64_t Mask
= kMinOriginAlignment
- 1;
1430 IRB
.CreateAnd(OriginLong
, ConstantInt::get(MS
.IntptrTy
, ~Mask
));
1433 IRB
.CreateIntToPtr(OriginLong
, PointerType::get(MS
.OriginTy
, 0));
1435 return std::make_pair(ShadowPtr
, OriginPtr
);
1438 std::pair
<Value
*, Value
*>
1439 getShadowOriginPtrKernel(Value
*Addr
, IRBuilder
<> &IRB
, Type
*ShadowTy
,
1440 unsigned Alignment
, bool isStore
) {
1441 Value
*ShadowOriginPtrs
;
1442 const DataLayout
&DL
= F
.getParent()->getDataLayout();
1443 int Size
= DL
.getTypeStoreSize(ShadowTy
);
1445 FunctionCallee Getter
= MS
.getKmsanShadowOriginAccessFn(isStore
, Size
);
1447 IRB
.CreatePointerCast(Addr
, PointerType::get(IRB
.getInt8Ty(), 0));
1449 ShadowOriginPtrs
= IRB
.CreateCall(Getter
, AddrCast
);
1451 Value
*SizeVal
= ConstantInt::get(MS
.IntptrTy
, Size
);
1452 ShadowOriginPtrs
= IRB
.CreateCall(isStore
? MS
.MsanMetadataPtrForStoreN
1453 : MS
.MsanMetadataPtrForLoadN
,
1454 {AddrCast
, SizeVal
});
1456 Value
*ShadowPtr
= IRB
.CreateExtractValue(ShadowOriginPtrs
, 0);
1457 ShadowPtr
= IRB
.CreatePointerCast(ShadowPtr
, PointerType::get(ShadowTy
, 0));
1458 Value
*OriginPtr
= IRB
.CreateExtractValue(ShadowOriginPtrs
, 1);
1460 return std::make_pair(ShadowPtr
, OriginPtr
);
1463 std::pair
<Value
*, Value
*> getShadowOriginPtr(Value
*Addr
, IRBuilder
<> &IRB
,
1467 std::pair
<Value
*, Value
*> ret
;
1468 if (MS
.CompileKernel
)
1469 ret
= getShadowOriginPtrKernel(Addr
, IRB
, ShadowTy
, Alignment
, isStore
);
1471 ret
= getShadowOriginPtrUserspace(Addr
, IRB
, ShadowTy
, Alignment
);
1475 /// Compute the shadow address for a given function argument.
1477 /// Shadow = ParamTLS+ArgOffset.
1478 Value
*getShadowPtrForArgument(Value
*A
, IRBuilder
<> &IRB
,
1480 Value
*Base
= IRB
.CreatePointerCast(MS
.ParamTLS
, MS
.IntptrTy
);
1482 Base
= IRB
.CreateAdd(Base
, ConstantInt::get(MS
.IntptrTy
, ArgOffset
));
1483 return IRB
.CreateIntToPtr(Base
, PointerType::get(getShadowTy(A
), 0),
1487 /// Compute the origin address for a given function argument.
1488 Value
*getOriginPtrForArgument(Value
*A
, IRBuilder
<> &IRB
,
1490 if (!MS
.TrackOrigins
)
1492 Value
*Base
= IRB
.CreatePointerCast(MS
.ParamOriginTLS
, MS
.IntptrTy
);
1494 Base
= IRB
.CreateAdd(Base
, ConstantInt::get(MS
.IntptrTy
, ArgOffset
));
1495 return IRB
.CreateIntToPtr(Base
, PointerType::get(MS
.OriginTy
, 0),
1499 /// Compute the shadow address for a retval.
1500 Value
*getShadowPtrForRetval(Value
*A
, IRBuilder
<> &IRB
) {
1501 return IRB
.CreatePointerCast(MS
.RetvalTLS
,
1502 PointerType::get(getShadowTy(A
), 0),
1506 /// Compute the origin address for a retval.
1507 Value
*getOriginPtrForRetval(IRBuilder
<> &IRB
) {
1508 // We keep a single origin for the entire retval. Might be too optimistic.
1509 return MS
.RetvalOriginTLS
;
1512 /// Set SV to be the shadow value for V.
1513 void setShadow(Value
*V
, Value
*SV
) {
1514 assert(!ShadowMap
.count(V
) && "Values may only have one shadow");
1515 ShadowMap
[V
] = PropagateShadow
? SV
: getCleanShadow(V
);
1518 /// Set Origin to be the origin value for V.
1519 void setOrigin(Value
*V
, Value
*Origin
) {
1520 if (!MS
.TrackOrigins
) return;
1521 assert(!OriginMap
.count(V
) && "Values may only have one origin");
1522 LLVM_DEBUG(dbgs() << "ORIGIN: " << *V
<< " ==> " << *Origin
<< "\n");
1523 OriginMap
[V
] = Origin
;
1526 Constant
*getCleanShadow(Type
*OrigTy
) {
1527 Type
*ShadowTy
= getShadowTy(OrigTy
);
1530 return Constant::getNullValue(ShadowTy
);
1533 /// Create a clean shadow value for a given value.
1535 /// Clean shadow (all zeroes) means all bits of the value are defined
1537 Constant
*getCleanShadow(Value
*V
) {
1538 return getCleanShadow(V
->getType());
1541 /// Create a dirty shadow of a given shadow type.
1542 Constant
*getPoisonedShadow(Type
*ShadowTy
) {
1544 if (isa
<IntegerType
>(ShadowTy
) || isa
<VectorType
>(ShadowTy
))
1545 return Constant::getAllOnesValue(ShadowTy
);
1546 if (ArrayType
*AT
= dyn_cast
<ArrayType
>(ShadowTy
)) {
1547 SmallVector
<Constant
*, 4> Vals(AT
->getNumElements(),
1548 getPoisonedShadow(AT
->getElementType()));
1549 return ConstantArray::get(AT
, Vals
);
1551 if (StructType
*ST
= dyn_cast
<StructType
>(ShadowTy
)) {
1552 SmallVector
<Constant
*, 4> Vals
;
1553 for (unsigned i
= 0, n
= ST
->getNumElements(); i
< n
; i
++)
1554 Vals
.push_back(getPoisonedShadow(ST
->getElementType(i
)));
1555 return ConstantStruct::get(ST
, Vals
);
1557 llvm_unreachable("Unexpected shadow type");
1560 /// Create a dirty shadow for a given value.
1561 Constant
*getPoisonedShadow(Value
*V
) {
1562 Type
*ShadowTy
= getShadowTy(V
);
1565 return getPoisonedShadow(ShadowTy
);
1568 /// Create a clean (zero) origin.
1569 Value
*getCleanOrigin() {
1570 return Constant::getNullValue(MS
.OriginTy
);
  /// Get the shadow value for a given Value.
  ///
  /// This function either returns the value set earlier with setShadow,
  /// or extracts it from ParamTLS (for function arguments).
1577 Value
*getShadow(Value
*V
) {
1578 if (!PropagateShadow
) return getCleanShadow(V
);
1579 if (Instruction
*I
= dyn_cast
<Instruction
>(V
)) {
1580 if (I
->getMetadata("nosanitize"))
1581 return getCleanShadow(V
);
1582 // For instructions the shadow is already stored in the map.
1583 Value
*Shadow
= ShadowMap
[V
];
1585 LLVM_DEBUG(dbgs() << "No shadow: " << *V
<< "\n" << *(I
->getParent()));
1587 assert(Shadow
&& "No shadow for a value");
1591 if (UndefValue
*U
= dyn_cast
<UndefValue
>(V
)) {
1592 Value
*AllOnes
= PoisonUndef
? getPoisonedShadow(V
) : getCleanShadow(V
);
1593 LLVM_DEBUG(dbgs() << "Undef: " << *U
<< " ==> " << *AllOnes
<< "\n");
1597 if (Argument
*A
= dyn_cast
<Argument
>(V
)) {
1598 // For arguments we compute the shadow on demand and store it in the map.
1599 Value
**ShadowPtr
= &ShadowMap
[V
];
1602 Function
*F
= A
->getParent();
1603 IRBuilder
<> EntryIRB(ActualFnStart
->getFirstNonPHI());
1604 unsigned ArgOffset
= 0;
1605 const DataLayout
&DL
= F
->getParent()->getDataLayout();
1606 for (auto &FArg
: F
->args()) {
1607 if (!FArg
.getType()->isSized()) {
1608 LLVM_DEBUG(dbgs() << "Arg is not sized\n");
1613 ? DL
.getTypeAllocSize(FArg
.getType()->getPointerElementType())
1614 : DL
.getTypeAllocSize(FArg
.getType());
1616 bool Overflow
= ArgOffset
+ Size
> kParamTLSSize
;
1617 Value
*Base
= getShadowPtrForArgument(&FArg
, EntryIRB
, ArgOffset
);
1618 if (FArg
.hasByValAttr()) {
1619 // ByVal pointer itself has clean shadow. We copy the actual
1620 // argument shadow to the underlying memory.
1621 // Figure out maximal valid memcpy alignment.
1622 unsigned ArgAlign
= FArg
.getParamAlignment();
1623 if (ArgAlign
== 0) {
1624 Type
*EltType
= A
->getType()->getPointerElementType();
1625 ArgAlign
= DL
.getABITypeAlignment(EltType
);
1627 Value
*CpShadowPtr
=
1628 getShadowOriginPtr(V
, EntryIRB
, EntryIRB
.getInt8Ty(), ArgAlign
,
1631 // TODO(glider): need to copy origins.
1633 // ParamTLS overflow.
1634 EntryIRB
.CreateMemSet(
1635 CpShadowPtr
, Constant::getNullValue(EntryIRB
.getInt8Ty()),
1638 unsigned CopyAlign
= std::min(ArgAlign
, kShadowTLSAlignment
);
1639 Value
*Cpy
= EntryIRB
.CreateMemCpy(CpShadowPtr
, CopyAlign
, Base
,
1641 LLVM_DEBUG(dbgs() << " ByValCpy: " << *Cpy
<< "\n");
1644 *ShadowPtr
= getCleanShadow(V
);
1647 // ParamTLS overflow.
1648 *ShadowPtr
= getCleanShadow(V
);
1650 *ShadowPtr
= EntryIRB
.CreateAlignedLoad(getShadowTy(&FArg
), Base
,
1651 kShadowTLSAlignment
);
1655 << " ARG: " << FArg
<< " ==> " << **ShadowPtr
<< "\n");
1656 if (MS
.TrackOrigins
&& !Overflow
) {
1658 getOriginPtrForArgument(&FArg
, EntryIRB
, ArgOffset
);
1659 setOrigin(A
, EntryIRB
.CreateLoad(MS
.OriginTy
, OriginPtr
));
1661 setOrigin(A
, getCleanOrigin());
1664 ArgOffset
+= alignTo(Size
, kShadowTLSAlignment
);
1666 assert(*ShadowPtr
&& "Could not find shadow for an argument");
1669 // For everything else the shadow is zero.
1670 return getCleanShadow(V
);
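  // Worked example (illustrative): for a function f(i32 %a, double %b),
  // getShadow(%a) loads its shadow from ParamTLS at ArgOffset 0 and
  // getShadow(%b) from ArgOffset 8, because every argument slot is rounded up
  // by alignTo(Size, kShadowTLSAlignment) and this file uses an 8-byte
  // kShadowTLSAlignment.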
  /// Get the shadow for i-th argument of the instruction I.
  Value *getShadow(Instruction *I, int i) {
    return getShadow(I->getOperand(i));
  }
  /// Get the origin for a value.
  Value *getOrigin(Value *V) {
    if (!MS.TrackOrigins) return nullptr;
    if (!PropagateShadow) return getCleanOrigin();
    if (isa<Constant>(V)) return getCleanOrigin();
    assert((isa<Instruction>(V) || isa<Argument>(V)) &&
           "Unexpected value type in getOrigin()");
    if (Instruction *I = dyn_cast<Instruction>(V)) {
      if (I->getMetadata("nosanitize"))
        return getCleanOrigin();
    }
    Value *Origin = OriginMap[V];
    assert(Origin && "Missing origin");
    return Origin;
  }
  /// Get the origin for i-th argument of the instruction I.
  Value *getOrigin(Instruction *I, int i) {
    return getOrigin(I->getOperand(i));
  }
  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning in runtime if the shadow value is not 0.
  void insertShadowCheck(Value *Shadow, Value *Origin, Instruction *OrigIns) {
    assert(Shadow);
    if (!InsertChecks) return;
#ifndef NDEBUG
    Type *ShadowTy = Shadow->getType();
    assert((isa<IntegerType>(ShadowTy) || isa<VectorType>(ShadowTy)) &&
           "Can only insert checks for integer and vector shadow types");
#endif
    InstrumentationList.push_back(
        ShadowOriginAndInsertPoint(Shadow, Origin, OrigIns));
  }
  /// Remember the place where a shadow check should be inserted.
  ///
  /// This location will be later instrumented with a check that will print a
  /// UMR warning in runtime if the value is not fully defined.
  void insertShadowCheck(Value *Val, Instruction *OrigIns) {
    assert(Val);
    Value *Shadow, *Origin;
    if (ClCheckConstantShadow) {
      Shadow = getShadow(Val);
      if (!Shadow) return;
      Origin = getOrigin(Val);
    } else {
      Shadow = dyn_cast_or_null<Instruction>(getShadow(Val));
      if (!Shadow) return;
      Origin = dyn_cast_or_null<Instruction>(getOrigin(Val));
    }
    insertShadowCheck(Shadow, Origin, OrigIns);
  }
  AtomicOrdering addReleaseOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Release:
        return AtomicOrdering::Release;
      case AtomicOrdering::Acquire:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }
  AtomicOrdering addAcquireOrdering(AtomicOrdering a) {
    switch (a) {
      case AtomicOrdering::NotAtomic:
        return AtomicOrdering::NotAtomic;
      case AtomicOrdering::Unordered:
      case AtomicOrdering::Monotonic:
      case AtomicOrdering::Acquire:
        return AtomicOrdering::Acquire;
      case AtomicOrdering::Release:
      case AtomicOrdering::AcquireRelease:
        return AtomicOrdering::AcquireRelease;
      case AtomicOrdering::SequentiallyConsistent:
        return AtomicOrdering::SequentiallyConsistent;
    }
    llvm_unreachable("Unknown ordering");
  }
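  // For example, an atomic load with monotonic ordering is strengthened to
  // acquire by addAcquireOrdering(), and a monotonic atomic store or RMW to
  // release by addReleaseOrdering(), so that the shadow accesses emitted next
  // to the application access are ordered with it rather than racing past it.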
  // ------------------- Visitors.
  using InstVisitor<MemorySanitizerVisitor>::visit;
  void visit(Instruction &I) {
    if (!I.getMetadata("nosanitize"))
      InstVisitor<MemorySanitizerVisitor>::visit(I);
  }
  /// Instrument LoadInst
  ///
  /// Loads the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the load address is fully defined.
  void visitLoadInst(LoadInst &I) {
    assert(I.getType()->isSized() && "Load type must have size");
    assert(!I.getMetadata("nosanitize"));
    IRBuilder<> IRB(I.getNextNode());
    Type *ShadowTy = getShadowTy(&I);
    Value *Addr = I.getPointerOperand();
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = I.getAlignment();
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);

    if (I.isAtomic())
      I.setOrdering(addAcquireOrdering(I.getOrdering()));

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        unsigned OriginAlignment = std::max(kMinOriginAlignment, Alignment);
        setOrigin(
            &I, IRB.CreateAlignedLoad(MS.OriginTy, OriginPtr, OriginAlignment));
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
  }
  /// Instrument StoreInst
  ///
  /// Stores the corresponding shadow and (optionally) origin.
  /// Optionally, checks that the store address is fully defined.
  void visitStoreInst(StoreInst &I) {
    StoreList.push_back(&I);
    if (ClCheckAccessAddress)
      insertShadowCheck(I.getPointerOperand(), &I);
  }
  void handleCASOrRMW(Instruction &I) {
    assert(isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I));

    IRBuilder<> IRB(&I);
    Value *Addr = I.getOperand(0);
    Value *ShadowPtr = getShadowOriginPtr(Addr, IRB, I.getType(),
                                          /*Alignment*/ 1, /*isStore*/ true)
                           .first;

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    // Only test the conditional argument of cmpxchg instruction.
    // The other argument can potentially be uninitialized, but we can not
    // detect this situation reliably without possible false positives.
    if (isa<AtomicCmpXchgInst>(I))
      insertShadowCheck(I.getOperand(1), &I);

    IRB.CreateStore(getCleanShadow(&I), ShadowPtr);

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitAtomicRMWInst(AtomicRMWInst &I) {
    handleCASOrRMW(I);
    I.setOrdering(addReleaseOrdering(I.getOrdering()));
  }

  void visitAtomicCmpXchgInst(AtomicCmpXchgInst &I) {
    handleCASOrRMW(I);
    I.setSuccessOrdering(addReleaseOrdering(I.getSuccessOrdering()));
  }
  // Vector manipulation.
  void visitExtractElementInst(ExtractElementInst &I) {
    insertShadowCheck(I.getOperand(1), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateExtractElement(getShadow(&I, 0), I.getOperand(1),
                                           "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitInsertElementInst(InsertElementInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateInsertElement(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }

  void visitShuffleVectorInst(ShuffleVectorInst &I) {
    insertShadowCheck(I.getOperand(2), &I);
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateShuffleVector(getShadow(&I, 0), getShadow(&I, 1),
                                          I.getOperand(2), "_msprop"));
    setOriginForNaryOp(I);
  }
  void visitSExtInst(SExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateSExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitZExtInst(ZExtInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateZExt(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitTruncInst(TruncInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateTrunc(getShadow(&I, 0), I.getType(), "_msprop"));
    setOrigin(&I, getOrigin(&I, 0));
  }
  void visitBitCastInst(BitCastInst &I) {
    // Special case: if this is the bitcast (there is exactly 1 allowed) between
    // a musttail call and a ret, don't instrument. New instructions are not
    // allowed after a musttail call.
    if (auto *CI = dyn_cast<CallInst>(I.getOperand(0)))
      if (CI->isMustTailCall())
        return;
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateBitCast(getShadow(&I, 0), getShadowTy(&I)));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitPtrToIntInst(PtrToIntInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_ptrtoint"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitIntToPtrInst(IntToPtrInst &I) {
    IRBuilder<> IRB(&I);
    setShadow(&I, IRB.CreateIntCast(getShadow(&I, 0), getShadowTy(&I), false,
                                    "_msprop_inttoptr"));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitFPToSIInst(CastInst &I) { handleShadowOr(I); }
  void visitFPToUIInst(CastInst &I) { handleShadowOr(I); }
  void visitSIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitUIToFPInst(CastInst &I) { handleShadowOr(I); }
  void visitFPExtInst(CastInst &I) { handleShadowOr(I); }
  void visitFPTruncInst(CastInst &I) { handleShadowOr(I); }
  /// Propagate shadow for bitwise AND.
  ///
  /// This code is exact, i.e. if, for example, a bit in the left argument
  /// is defined and 0, then neither the value nor the definedness of the
  /// corresponding bit in B affects the resulting shadow.
  void visitAnd(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    //  "And" of 0 and a poisoned value results in unpoisoned value.
    //  1&1 => 1;  0&1 => 0;  p&1 => p;
    //  1&0 => 0;  0&0 => 0;  p&0 => 0;
    //  1&p => p;  0&p => 0;  p&p => p;
    //  S = (S1 & S2) | (V1 & S2) | (S1 & V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }
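  // Worked example of the table above: if V1 = 0b1100 with S1 = 0 (fully
  // defined) and the low two bits of the other operand are poisoned
  // (S2 = 0b0011), then S = (S1&S2) | (V1&S2) | (S1&V2) = 0, i.e. the result
  // is reported fully defined: every poisoned bit of V2 is ANDed with a
  // defined 0 bit of V1.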
  void visitOr(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    //  "Or" of 1 and a poisoned value results in unpoisoned value.
    //  1|1 => 1;  0|1 => 1;  p|1 => 1;
    //  1|0 => 1;  0|0 => 0;  p|0 => p;
    //  1|p => 1;  0|p => p;  p|p => p;
    //  S = (S1 & S2) | (~V1 & S2) | (S1 & ~V2)
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *V1 = IRB.CreateNot(I.getOperand(0));
    Value *V2 = IRB.CreateNot(I.getOperand(1));
    if (V1->getType() != S1->getType()) {
      V1 = IRB.CreateIntCast(V1, S1->getType(), false);
      V2 = IRB.CreateIntCast(V2, S2->getType(), false);
    }
    Value *S1S2 = IRB.CreateAnd(S1, S2);
    Value *V1S2 = IRB.CreateAnd(V1, S2);
    Value *S1V2 = IRB.CreateAnd(S1, V2);
    setShadow(&I, IRB.CreateOr({S1S2, V1S2, S1V2}));
    setOriginForNaryOp(I);
  }
  /// Default propagation of shadow and/or origin.
  ///
  /// This class implements the general case of shadow propagation, used in all
  /// cases where we don't know and/or don't care about what the operation
  /// actually does. It converts all input shadow values to a common type
  /// (extending or truncating as necessary), and bitwise OR's them.
  ///
  /// This is much cheaper than inserting checks (i.e. requiring inputs to be
  /// fully initialized), and less prone to false positives.
  ///
  /// This class also implements the general case of origin propagation. For a
  /// Nary operation, result origin is set to the origin of an argument that is
  /// not entirely initialized. If there is more than one such argument, the
  /// rightmost of them is picked. It does not matter which one is picked if all
  /// arguments are initialized.
  template <bool CombineShadow>
  class Combiner {
    Value *Shadow = nullptr;
    Value *Origin = nullptr;
    IRBuilder<> &IRB;
    MemorySanitizerVisitor *MSV;

  public:
    Combiner(MemorySanitizerVisitor *MSV, IRBuilder<> &IRB)
        : IRB(IRB), MSV(MSV) {}

    /// Add a pair of shadow and origin values to the mix.
    Combiner &Add(Value *OpShadow, Value *OpOrigin) {
      if (CombineShadow) {
        assert(OpShadow);
        if (!Shadow)
          Shadow = OpShadow;
        else {
          OpShadow = MSV->CreateShadowCast(IRB, OpShadow, Shadow->getType());
          Shadow = IRB.CreateOr(Shadow, OpShadow, "_msprop");
        }
      }

      if (MSV->MS.TrackOrigins) {
        assert(OpOrigin);
        if (!Origin) {
          Origin = OpOrigin;
        } else {
          Constant *ConstOrigin = dyn_cast<Constant>(OpOrigin);
          // No point in adding something that might result in 0 origin value.
          if (!ConstOrigin || !ConstOrigin->isNullValue()) {
            Value *FlatShadow = MSV->convertToShadowTyNoVec(OpShadow, IRB);
            Value *Cond =
                IRB.CreateICmpNE(FlatShadow, MSV->getCleanShadow(FlatShadow));
            Origin = IRB.CreateSelect(Cond, OpOrigin, Origin);
          }
        }
      }
      return *this;
    }

    /// Add an application value to the mix.
    Combiner &Add(Value *V) {
      Value *OpShadow = MSV->getShadow(V);
      Value *OpOrigin = MSV->MS.TrackOrigins ? MSV->getOrigin(V) : nullptr;
      return Add(OpShadow, OpOrigin);
    }

    /// Set the current combined values as the given instruction's shadow
    /// and origin.
    void Done(Instruction *I) {
      if (CombineShadow) {
        assert(Shadow);
        Shadow = MSV->CreateShadowCast(IRB, Shadow, MSV->getShadowTy(I));
        MSV->setShadow(I, Shadow);
      }
      if (MSV->MS.TrackOrigins) {
        assert(Origin);
        MSV->setOrigin(I, Origin);
      }
    }
  };

  using ShadowAndOriginCombiner = Combiner<true>;
  using OriginCombiner = Combiner<false>;
  /// Propagate origin for arbitrary operation.
  void setOriginForNaryOp(Instruction &I) {
    if (!MS.TrackOrigins) return;
    IRBuilder<> IRB(&I);
    OriginCombiner OC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      OC.Add(OI->get());
    OC.Done(&I);
  }
  size_t VectorOrPrimitiveTypeSizeInBits(Type *Ty) {
    assert(!(Ty->isVectorTy() && Ty->getScalarType()->isPointerTy()) &&
           "Vector of pointers is not a valid shadow type");
    return Ty->isVectorTy() ?
               Ty->getVectorNumElements() * Ty->getScalarSizeInBits() :
               Ty->getPrimitiveSizeInBits();
  }
  /// Cast between two shadow types, extending or truncating as
  /// necessary.
  Value *CreateShadowCast(IRBuilder<> &IRB, Value *V, Type *dstTy,
                          bool Signed = false) {
    Type *srcTy = V->getType();
    size_t srcSizeInBits = VectorOrPrimitiveTypeSizeInBits(srcTy);
    size_t dstSizeInBits = VectorOrPrimitiveTypeSizeInBits(dstTy);
    if (srcSizeInBits > 1 && dstSizeInBits == 1)
      return IRB.CreateICmpNE(V, getCleanShadow(V));

    if (dstTy->isIntegerTy() && srcTy->isIntegerTy())
      return IRB.CreateIntCast(V, dstTy, Signed);
    if (dstTy->isVectorTy() && srcTy->isVectorTy() &&
        dstTy->getVectorNumElements() == srcTy->getVectorNumElements())
      return IRB.CreateIntCast(V, dstTy, Signed);
    Value *V1 = IRB.CreateBitCast(V, Type::getIntNTy(*MS.C, srcSizeInBits));
    Value *V2 =
        IRB.CreateIntCast(V1, Type::getIntNTy(*MS.C, dstSizeInBits), Signed);
    return IRB.CreateBitCast(V2, dstTy);
    // TODO: handle struct types.
  }
  /// Cast an application value to the type of its own shadow.
  Value *CreateAppToShadowCast(IRBuilder<> &IRB, Value *V) {
    Type *ShadowTy = getShadowTy(V);
    if (V->getType() == ShadowTy)
      return V;
    if (V->getType()->isPtrOrPtrVectorTy())
      return IRB.CreatePtrToInt(V, ShadowTy);
    else
      return IRB.CreateBitCast(V, ShadowTy);
  }
  /// Propagate shadow for arbitrary operation.
  void handleShadowOr(Instruction &I) {
    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (Instruction::op_iterator OI = I.op_begin(); OI != I.op_end(); ++OI)
      SC.Add(OI->get());
    SC.Done(&I);
  }

  void visitFNeg(UnaryOperator &I) { handleShadowOr(I); }
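  // For example, for %c = add i32 %a, %b this produces Sc = Sa | Sb: a result
  // bit is reported clean only when the corresponding bits of both inputs are
  // clean. Carries are not modeled, which trades some precision for very cheap
  // instrumentation.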
  // Handle multiplication by constant.
  //
  // Handle a special case of multiplication by constant that may have one or
  // more zeros in the lower bits. This makes the corresponding number of lower
  // bits of the result zero as well. We model it by shifting the other operand
  // shadow left by the required number of bits. Effectively, we transform
  // (X * (A * 2**B)) to ((X << B) * A) and instrument (X << B) as (Sx << B).
  // We use multiplication by 2**N instead of shift to cover the case of
  // multiplication by 0, which may occur in some elements of a vector operand.
  void handleMulByConstant(BinaryOperator &I, Constant *ConstArg,
                           Value *OtherArg) {
    Constant *ShadowMul;
    Type *Ty = ConstArg->getType();
    if (Ty->isVectorTy()) {
      unsigned NumElements = Ty->getVectorNumElements();
      Type *EltTy = Ty->getSequentialElementType();
      SmallVector<Constant *, 16> Elements;
      for (unsigned Idx = 0; Idx < NumElements; ++Idx) {
        if (ConstantInt *Elt =
                dyn_cast<ConstantInt>(ConstArg->getAggregateElement(Idx))) {
          const APInt &V = Elt->getValue();
          APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
          Elements.push_back(ConstantInt::get(EltTy, V2));
        } else {
          Elements.push_back(ConstantInt::get(EltTy, 1));
        }
      }
      ShadowMul = ConstantVector::get(Elements);
    } else {
      if (ConstantInt *Elt = dyn_cast<ConstantInt>(ConstArg)) {
        const APInt &V = Elt->getValue();
        APInt V2 = APInt(V.getBitWidth(), 1) << V.countTrailingZeros();
        ShadowMul = ConstantInt::get(Ty, V2);
      } else {
        ShadowMul = ConstantInt::get(Ty, 1);
      }
    }

    IRBuilder<> IRB(&I);
    setShadow(&I,
              IRB.CreateMul(getShadow(OtherArg), ShadowMul, "msprop_mul_cst"));
    setOrigin(&I, getOrigin(OtherArg));
  }
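  // For example, for %r = mul i32 %x, 24 (24 == 3 * 2**3) the instrumentation
  // emits Sr = Sx * 8, i.e. Sx shifted left by 3: the three low bits of the
  // product are always zero and therefore defined regardless of %x's shadow.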
  void visitMul(BinaryOperator &I) {
    Constant *constOp0 = dyn_cast<Constant>(I.getOperand(0));
    Constant *constOp1 = dyn_cast<Constant>(I.getOperand(1));
    if (constOp0 && !constOp1)
      handleMulByConstant(I, constOp0, I.getOperand(1));
    else if (constOp1 && !constOp0)
      handleMulByConstant(I, constOp1, I.getOperand(0));
    else
      handleShadowOr(I);
  }

  void visitFAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitFSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitFMul(BinaryOperator &I) { handleShadowOr(I); }
  void visitAdd(BinaryOperator &I) { handleShadowOr(I); }
  void visitSub(BinaryOperator &I) { handleShadowOr(I); }
  void visitXor(BinaryOperator &I) { handleShadowOr(I); }
  void handleIntegerDiv(Instruction &I) {
    IRBuilder<> IRB(&I);
    // Strict on the second argument.
    insertShadowCheck(I.getOperand(1), &I);
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void visitUDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSDiv(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitURem(BinaryOperator &I) { handleIntegerDiv(I); }
  void visitSRem(BinaryOperator &I) { handleIntegerDiv(I); }

  // Floating point division is side-effect free. We cannot require that the
  // divisor is fully initialized, so we propagate its shadow instead.
  // See PR37523.
  void visitFDiv(BinaryOperator &I) { handleShadowOr(I); }
  void visitFRem(BinaryOperator &I) { handleShadowOr(I); }
  /// Instrument == and != comparisons.
  ///
  /// Sometimes the comparison result is known even if some of the bits of the
  /// arguments are not.
  void handleEqualityComparison(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // A == B  <==>  (C = A^B) == 0
    // A != B  <==>  (C = A^B) != 0
    // Sc = Sa | Sb
    Value *C = IRB.CreateXor(A, B);
    Value *Sc = IRB.CreateOr(Sa, Sb);
    // Now dealing with i = (C == 0) comparison (or C != 0, does not matter now)
    // Result is defined if one of the following is true
    // * there is a defined 1 bit in C
    // * C is fully defined
    // Si = !(C & ~Sc) && Sc
    Value *Zero = Constant::getNullValue(Sc->getType());
    Value *MinusOne = Constant::getAllOnesValue(Sc->getType());
    Value *Si =
        IRB.CreateAnd(IRB.CreateICmpNE(Sc, Zero),
                      IRB.CreateICmpEQ(
                          IRB.CreateAnd(IRB.CreateXor(Sc, MinusOne), C), Zero));
    Si->setName("_msprop_icmp");
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }
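  // Worked example: comparing A = 0b01?? (low two bits poisoned, Sa = 0b0011)
  // against B = 0 gives C = A ^ B with a defined 1 in bit 2, so
  // (C & ~Sc) != 0 and Si evaluates to false: A != B holds for every possible
  // value of the poisoned bits, and the comparison result is reported defined.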
  /// Build the lowest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getLowestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Maximize the undefined shadow bit, minimize other undefined bits.
      return
          IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaOtherBits)), SaSignBit);
    } else {
      // Minimize undefined bits.
      return IRB.CreateAnd(A, IRB.CreateNot(Sa));
    }
  }

  /// Build the highest possible value of V, taking into account V's
  /// uninitialized bits.
  Value *getHighestPossibleValue(IRBuilder<> &IRB, Value *A, Value *Sa,
                                 bool isSigned) {
    if (isSigned) {
      // Split shadow into sign bit and other bits.
      Value *SaOtherBits = IRB.CreateLShr(IRB.CreateShl(Sa, 1), 1);
      Value *SaSignBit = IRB.CreateXor(Sa, SaOtherBits);
      // Minimize the undefined shadow bit, maximize other undefined bits.
      return
          IRB.CreateOr(IRB.CreateAnd(A, IRB.CreateNot(SaSignBit)), SaOtherBits);
    } else {
      // Maximize undefined bits.
      return IRB.CreateOr(A, Sa);
    }
  }
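  // For example, in the unsigned case with A = 0b0110 and Sa = 0b0011 the
  // lowest possible value is A & ~Sa = 0b0100 and the highest is
  // A | Sa = 0b0111: the unknown low bits are minimized or maximized
  // respectively.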
  /// Instrument relational comparisons.
  ///
  /// This function does exact shadow propagation for all relational
  /// comparisons of integers, pointers and vectors of those.
  /// FIXME: output seems suboptimal when one of the operands is a constant
  void handleRelationalComparisonExact(ICmpInst &I) {
    IRBuilder<> IRB(&I);
    Value *A = I.getOperand(0);
    Value *B = I.getOperand(1);
    Value *Sa = getShadow(A);
    Value *Sb = getShadow(B);

    // Get rid of pointers and vectors of pointers.
    // For ints (and vectors of ints), types of A and Sa match,
    // and this is a no-op.
    A = IRB.CreatePointerCast(A, Sa->getType());
    B = IRB.CreatePointerCast(B, Sb->getType());

    // Let [a0, a1] be the interval of possible values of A, taking into account
    // its undefined bits. Let [b0, b1] be the interval of possible values of B.
    // Then (A cmp B) is defined iff (a0 cmp b1) == (a1 cmp b0).
    bool IsSigned = I.isSigned();
    Value *S1 = IRB.CreateICmp(I.getPredicate(),
                               getLowestPossibleValue(IRB, A, Sa, IsSigned),
                               getHighestPossibleValue(IRB, B, Sb, IsSigned));
    Value *S2 = IRB.CreateICmp(I.getPredicate(),
                               getHighestPossibleValue(IRB, A, Sa, IsSigned),
                               getLowestPossibleValue(IRB, B, Sb, IsSigned));
    Value *Si = IRB.CreateXor(S1, S2);
    setShadow(&I, Si);
    setOriginForNaryOp(I);
  }
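  // Continuing the example above: if A ranges over [0b0100, 0b0111] and B is
  // the constant 0b1000 (so Sb = 0), then both S1 and S2 compute "less than",
  // S1 ^ S2 == 0, and the unsigned comparison is reported as defined despite
  // A's poisoned bits.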
  /// Instrument signed relational comparisons.
  ///
  /// Handle sign bit tests: x<0, x>=0, x<=-1, x>-1 by propagating the highest
  /// bit of the shadow. Everything else is delegated to handleShadowOr().
  void handleSignedRelationalComparison(ICmpInst &I) {
    Constant *constOp;
    Value *op = nullptr;
    CmpInst::Predicate pre;
    if ((constOp = dyn_cast<Constant>(I.getOperand(1)))) {
      op = I.getOperand(0);
      pre = I.getPredicate();
    } else if ((constOp = dyn_cast<Constant>(I.getOperand(0)))) {
      op = I.getOperand(1);
      pre = I.getSwappedPredicate();
    } else {
      handleShadowOr(I);
      return;
    }

    if ((constOp->isNullValue() &&
         (pre == CmpInst::ICMP_SLT || pre == CmpInst::ICMP_SGE)) ||
        (constOp->isAllOnesValue() &&
         (pre == CmpInst::ICMP_SGT || pre == CmpInst::ICMP_SLE))) {
      IRBuilder<> IRB(&I);
      Value *Shadow = IRB.CreateICmpSLT(getShadow(op), getCleanShadow(op),
                                        "_msprop_icmp_s");
      setShadow(&I, Shadow);
      setOrigin(&I, getOrigin(op));
    } else {
      handleShadowOr(I);
    }
  }
  void visitICmpInst(ICmpInst &I) {
    if (!ClHandleICmp) {
      handleShadowOr(I);
      return;
    }
    if (I.isEquality()) {
      handleEqualityComparison(I);
      return;
    }

    assert(I.isRelational());
    if (ClHandleICmpExact) {
      handleRelationalComparisonExact(I);
      return;
    }
    if (I.isSigned()) {
      handleSignedRelationalComparison(I);
      return;
    }

    assert(I.isUnsigned());
    if ((isa<Constant>(I.getOperand(0)) || isa<Constant>(I.getOperand(1)))) {
      handleRelationalComparisonExact(I);
      return;
    }

    handleShadowOr(I);
  }

  void visitFCmpInst(FCmpInst &I) {
    handleShadowOr(I);
  }
  void handleShift(BinaryOperator &I) {
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = IRB.CreateSExt(IRB.CreateICmpNE(S2, getCleanShadow(S2)),
                                   S2->getType());
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateBinOp(I.getOpcode(), S1, V2);
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }

  void visitShl(BinaryOperator &I) { handleShift(I); }
  void visitAShr(BinaryOperator &I) { handleShift(I); }
  void visitLShr(BinaryOperator &I) { handleShift(I); }
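  // For example, for %r = shl i32 %x, %n the resulting shadow is
  // (Sx shl %n) | (Sn != 0 ? -1 : 0): the value shadow is shifted exactly like
  // the value, and a poisoned shift amount poisons the whole result.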
  /// Instrument llvm.memmove
  ///
  /// At this point we don't know if llvm.memmove will be inlined or not.
  /// If we don't instrument it and it gets inlined,
  /// our interceptor will not kick in and we will lose the memmove.
  /// If we instrument the call here, but it does not get inlined,
  /// we will memmove the shadow twice: which is bad in case
  /// of overlapping regions. So, we simply lower the intrinsic to a call.
  ///
  /// Similar situation exists for memcpy and memset.
  void visitMemMoveInst(MemMoveInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemmoveFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Similar to memmove: avoid copying shadow twice.
  // This is somewhat unfortunate as it may slow down small constant memcpys.
  // FIXME: consider doing manual inline for small constant sizes and proper
  // alignment.
  void visitMemCpyInst(MemCpyInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemcpyFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreatePointerCast(I.getArgOperand(1), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  // Same as memcpy.
  void visitMemSetInst(MemSetInst &I) {
    IRBuilder<> IRB(&I);
    IRB.CreateCall(
        MS.MemsetFn,
        {IRB.CreatePointerCast(I.getArgOperand(0), IRB.getInt8PtrTy()),
         IRB.CreateIntCast(I.getArgOperand(1), IRB.getInt32Ty(), false),
         IRB.CreateIntCast(I.getArgOperand(2), MS.IntptrTy, false)});
    I.eraseFromParent();
  }

  void visitVAStartInst(VAStartInst &I) {
    VAHelper->visitVAStartInst(I);
  }

  void visitVACopyInst(VACopyInst &I) {
    VAHelper->visitVACopyInst(I);
  }
2450 /// Instrument intrinsics that look like a simple SIMD store: writes memory,
2451 /// has 1 pointer argument and 1 vector argument, returns void.
2452 bool handleVectorStoreIntrinsic(IntrinsicInst
&I
) {
2453 IRBuilder
<> IRB(&I
);
2454 Value
* Addr
= I
.getArgOperand(0);
2455 Value
*Shadow
= getShadow(&I
, 1);
2456 Value
*ShadowPtr
, *OriginPtr
;
2458 // We don't know the pointer alignment (could be unaligned SSE store!).
2459 // Have to assume to worst case.
2460 std::tie(ShadowPtr
, OriginPtr
) = getShadowOriginPtr(
2461 Addr
, IRB
, Shadow
->getType(), /*Alignment*/ 1, /*isStore*/ true);
2462 IRB
.CreateAlignedStore(Shadow
, ShadowPtr
, 1);
2464 if (ClCheckAccessAddress
)
2465 insertShadowCheck(Addr
, &I
);
2467 // FIXME: factor out common code from materializeStores
2468 if (MS
.TrackOrigins
) IRB
.CreateStore(getOrigin(&I
, 1), OriginPtr
);
  /// Handle vector load-like intrinsics.
  ///
  /// Instrument intrinsics that look like a simple SIMD load: reads memory,
  /// has 1 pointer argument, returns a vector.
  bool handleVectorLoadIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      // We don't know the pointer alignment (could be unaligned SSE load!).
      // Have to assume the worst case.
      unsigned Alignment = 1;
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Alignment, /*isStore*/ false);
      setShadow(&I,
                IRB.CreateAlignedLoad(ShadowTy, ShadowPtr, Alignment, "_msld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    if (MS.TrackOrigins) {
      if (PropagateShadow)
        setOrigin(&I, IRB.CreateLoad(MS.OriginTy, OriginPtr));
      else
        setOrigin(&I, getCleanOrigin());
    }
    return true;
  }
  /// Handle (SIMD arithmetic)-like intrinsics.
  ///
  /// Instrument intrinsics with any number of arguments of the same type,
  /// equal to the return type. The type should be simple (no aggregates or
  /// pointers; vectors are fine).
  /// Caller guarantees that this intrinsic does not access memory.
  bool maybeHandleSimpleNomemIntrinsic(IntrinsicInst &I) {
    Type *RetTy = I.getType();
    if (!(RetTy->isIntOrIntVectorTy() ||
          RetTy->isFPOrFPVectorTy() ||
          RetTy->isX86_MMXTy()))
      return false;

    unsigned NumArgOperands = I.getNumArgOperands();

    for (unsigned i = 0; i < NumArgOperands; ++i) {
      Type *Ty = I.getArgOperand(i)->getType();
      if (Ty != RetTy)
        return false;
    }

    IRBuilder<> IRB(&I);
    ShadowAndOriginCombiner SC(this, IRB);
    for (unsigned i = 0; i < NumArgOperands; ++i)
      SC.Add(I.getArgOperand(i));
    SC.Done(&I);

    return true;
  }
  /// Heuristically instrument unknown intrinsics.
  ///
  /// The main purpose of this code is to do something reasonable with all
  /// random intrinsics we might encounter, most importantly - SIMD intrinsics.
  /// We recognize several classes of intrinsics by their argument types and
  /// ModRefBehaviour and apply special instrumentation when we are reasonably
  /// sure that we know what the intrinsic does.
  ///
  /// We special-case intrinsics where this approach fails. See llvm.bswap
  /// handling as an example of that.
  bool handleUnknownIntrinsic(IntrinsicInst &I) {
    unsigned NumArgOperands = I.getNumArgOperands();
    if (NumArgOperands == 0)
      return false;

    if (NumArgOperands == 2 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getArgOperand(1)->getType()->isVectorTy() &&
        I.getType()->isVoidTy() &&
        !I.onlyReadsMemory()) {
      // This looks like a vector store.
      return handleVectorStoreIntrinsic(I);
    }

    if (NumArgOperands == 1 &&
        I.getArgOperand(0)->getType()->isPointerTy() &&
        I.getType()->isVectorTy() &&
        I.onlyReadsMemory()) {
      // This looks like a vector load.
      return handleVectorLoadIntrinsic(I);
    }

    if (I.doesNotAccessMemory())
      if (maybeHandleSimpleNomemIntrinsic(I))
        return true;

    // FIXME: detect and handle SSE maskstore/maskload
    return false;
  }
  void handleInvariantGroup(IntrinsicInst &I) {
    setShadow(&I, getShadow(&I, 0));
    setOrigin(&I, getOrigin(&I, 0));
  }

  void handleLifetimeStart(IntrinsicInst &I) {
    if (!PoisonStack)
      return;
    DenseMap<Value *, AllocaInst *> AllocaForValue;
    AllocaInst *AI =
        llvm::findAllocaForValue(I.getArgOperand(1), AllocaForValue);
    if (!AI)
      InstrumentLifetimeStart = false;
    LifetimeStartList.push_back(std::make_pair(&I, AI));
  }
  void handleBswap(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Op = I.getArgOperand(0);
    Type *OpType = Op->getType();
    Function *BswapFunc = Intrinsic::getDeclaration(
        F.getParent(), Intrinsic::bswap, makeArrayRef(&OpType, 1));
    setShadow(&I, IRB.CreateCall(BswapFunc, getShadow(Op)));
    setOrigin(&I, getOrigin(Op));
  }
  // Instrument vector convert intrinsic.
  //
  // This function instruments intrinsics like cvtsi2ss:
  // %Out = int_xxx_cvtyyy(%ConvertOp)
  // or
  // %Out = int_xxx_cvtyyy(%CopyOp, %ConvertOp)
  // Intrinsic converts \p NumUsedElements elements of \p ConvertOp to the same
  // number of \p Out elements, and (if it has 2 arguments) copies the rest of
  // the elements from \p CopyOp.
  // In most cases conversion involves floating-point value which may trigger a
  // hardware exception when not fully initialized. For this reason we require
  // \p ConvertOp[0:NumUsedElements] to be fully initialized and trap otherwise.
  // We copy the shadow of \p CopyOp[NumUsedElements:] to \p
  // Out[NumUsedElements:]. This means that intrinsics without \p CopyOp always
  // return a fully initialized value.
  void handleVectorConvertIntrinsic(IntrinsicInst &I, int NumUsedElements) {
    IRBuilder<> IRB(&I);
    Value *CopyOp, *ConvertOp;

    switch (I.getNumArgOperands()) {
    case 3:
      assert(isa<ConstantInt>(I.getArgOperand(2)) && "Invalid rounding mode");
      LLVM_FALLTHROUGH;
    case 2:
      CopyOp = I.getArgOperand(0);
      ConvertOp = I.getArgOperand(1);
      break;
    case 1:
      ConvertOp = I.getArgOperand(0);
      CopyOp = nullptr;
      break;
    default:
      llvm_unreachable("Cvt intrinsic with unsupported number of arguments.");
    }

    // The first *NumUsedElements* elements of ConvertOp are converted to the
    // same number of output elements. The rest of the output is copied from
    // CopyOp, or (if not available) filled with zeroes.
    // Combine shadow for elements of ConvertOp that are used in this operation,
    // and insert a check.
    // FIXME: consider propagating shadow of ConvertOp, at least in the case of
    // int->any conversion.
    Value *ConvertShadow = getShadow(ConvertOp);
    Value *AggShadow = nullptr;
    if (ConvertOp->getType()->isVectorTy()) {
      AggShadow = IRB.CreateExtractElement(
          ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
      for (int i = 1; i < NumUsedElements; ++i) {
        Value *MoreShadow = IRB.CreateExtractElement(
            ConvertShadow, ConstantInt::get(IRB.getInt32Ty(), i));
        AggShadow = IRB.CreateOr(AggShadow, MoreShadow);
      }
    } else {
      AggShadow = ConvertShadow;
    }
    assert(AggShadow->getType()->isIntegerTy());
    insertShadowCheck(AggShadow, getOrigin(ConvertOp), &I);

    // Build result shadow by zero-filling parts of CopyOp shadow that come from
    // ConvertOp.
    if (CopyOp) {
      assert(CopyOp->getType() == I.getType());
      assert(CopyOp->getType()->isVectorTy());
      Value *ResultShadow = getShadow(CopyOp);
      Type *EltTy = ResultShadow->getType()->getVectorElementType();
      for (int i = 0; i < NumUsedElements; ++i) {
        ResultShadow = IRB.CreateInsertElement(
            ResultShadow, ConstantInt::getNullValue(EltTy),
            ConstantInt::get(IRB.getInt32Ty(), i));
      }
      setShadow(&I, ResultShadow);
      setOrigin(&I, getOrigin(CopyOp));
    } else {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
    }
  }
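  // For example, cvtsd2ss-style intrinsics are handled with
  // NumUsedElements == 1: element 0 of ConvertOp is checked (it must be fully
  // initialized), the shadow of result element 0 is cleared, and the shadows
  // of the remaining elements are copied from CopyOp.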
  // Given a scalar or vector, extract lower 64 bits (or less), and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *Lower64ShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    if (S->getType()->isVectorTy())
      S = CreateShadowCast(IRB, S, IRB.getInt64Ty(), /* Signed */ true);
    assert(S->getType()->getPrimitiveSizeInBits() <= 64);
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  // Given a vector, extract its first element, and return all
  // zeroes if it is zero, and all ones otherwise.
  Value *LowerElementShadowExtend(IRBuilder<> &IRB, Value *S, Type *T) {
    Value *S1 = IRB.CreateExtractElement(S, (uint64_t)0);
    Value *S2 = IRB.CreateICmpNE(S1, getCleanShadow(S1));
    return CreateShadowCast(IRB, S2, T, /* Signed */ true);
  }

  Value *VariableShadowExtend(IRBuilder<> &IRB, Value *S) {
    Type *T = S->getType();
    assert(T->isVectorTy());
    Value *S2 = IRB.CreateICmpNE(S, getCleanShadow(S));
    return IRB.CreateSExt(S2, T);
  }
  // Instrument vector shift intrinsic.
  //
  // This function instruments intrinsics like int_x86_avx2_psll_w.
  // Intrinsic shifts %In by %ShiftSize bits.
  // %ShiftSize may be a vector. In that case the lower 64 bits determine shift
  // size, and the rest is ignored. Behavior is defined even if shift size is
  // greater than register (or field) width.
  void handleVectorShiftIntrinsic(IntrinsicInst &I, bool Variable) {
    assert(I.getNumArgOperands() == 2);
    IRBuilder<> IRB(&I);
    // If any of the S2 bits are poisoned, the whole thing is poisoned.
    // Otherwise perform the same shift on S1.
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    Value *S2Conv = Variable ? VariableShadowExtend(IRB, S2)
                             : Lower64ShadowExtend(IRB, S2, getShadowTy(&I));
    Value *V1 = I.getOperand(0);
    Value *V2 = I.getOperand(1);
    Value *Shift = IRB.CreateCall(I.getFunctionType(), I.getCalledValue(),
                                  {IRB.CreateBitCast(S1, V1->getType()), V2});
    Shift = IRB.CreateBitCast(Shift, getShadowTy(&I));
    setShadow(&I, IRB.CreateOr(Shift, S2Conv));
    setOriginForNaryOp(I);
  }
  // Get an X86_MMX-sized vector type.
  Type *getMMXVectorTy(unsigned EltSizeInBits) {
    const unsigned X86_MMXSizeInBits = 64;
    assert(EltSizeInBits != 0 && (X86_MMXSizeInBits % EltSizeInBits) == 0 &&
           "Illegal MMX vector element size");
    return VectorType::get(IntegerType::get(*MS.C, EltSizeInBits),
                           X86_MMXSizeInBits / EltSizeInBits);
  }
  // Returns a signed counterpart for an (un)signed-saturate-and-pack
  // intrinsic.
  Intrinsic::ID getSignedPackIntrinsic(Intrinsic::ID id) {
    switch (id) {
      case Intrinsic::x86_sse2_packsswb_128:
      case Intrinsic::x86_sse2_packuswb_128:
        return Intrinsic::x86_sse2_packsswb_128;

      case Intrinsic::x86_sse2_packssdw_128:
      case Intrinsic::x86_sse41_packusdw:
        return Intrinsic::x86_sse2_packssdw_128;

      case Intrinsic::x86_avx2_packsswb:
      case Intrinsic::x86_avx2_packuswb:
        return Intrinsic::x86_avx2_packsswb;

      case Intrinsic::x86_avx2_packssdw:
      case Intrinsic::x86_avx2_packusdw:
        return Intrinsic::x86_avx2_packssdw;

      case Intrinsic::x86_mmx_packsswb:
      case Intrinsic::x86_mmx_packuswb:
        return Intrinsic::x86_mmx_packsswb;

      case Intrinsic::x86_mmx_packssdw:
        return Intrinsic::x86_mmx_packssdw;
      default:
        llvm_unreachable("unexpected intrinsic id");
    }
  }
  // Instrument vector pack intrinsic.
  //
  // This function instruments intrinsics like x86_mmx_packsswb, which pack
  // elements of 2 input vectors into half as many bits with saturation.
  // Shadow is propagated with the signed variant of the same intrinsic applied
  // to sext(Sa != zeroinitializer), sext(Sb != zeroinitializer).
  // EltSizeInBits is used only for x86mmx arguments.
  void handleVectorPackIntrinsic(IntrinsicInst &I, unsigned EltSizeInBits = 0) {
    assert(I.getNumArgOperands() == 2);
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    IRBuilder<> IRB(&I);
    Value *S1 = getShadow(&I, 0);
    Value *S2 = getShadow(&I, 1);
    assert(isX86_MMX || S1->getType()->isVectorTy());

    // SExt and ICmpNE below must apply to individual elements of input vectors.
    // In case of x86mmx arguments, cast them to appropriate vector types and
    // back.
    Type *T = isX86_MMX ? getMMXVectorTy(EltSizeInBits) : S1->getType();
    if (isX86_MMX) {
      S1 = IRB.CreateBitCast(S1, T);
      S2 = IRB.CreateBitCast(S2, T);
    }
    Value *S1_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S1, Constant::getNullValue(T)), T);
    Value *S2_ext = IRB.CreateSExt(
        IRB.CreateICmpNE(S2, Constant::getNullValue(T)), T);
    if (isX86_MMX) {
      Type *X86_MMXTy = Type::getX86_MMXTy(*MS.C);
      S1_ext = IRB.CreateBitCast(S1_ext, X86_MMXTy);
      S2_ext = IRB.CreateBitCast(S2_ext, X86_MMXTy);
    }

    Function *ShadowFn = Intrinsic::getDeclaration(
        F.getParent(), getSignedPackIntrinsic(I.getIntrinsicID()));

    Value *S =
        IRB.CreateCall(ShadowFn, {S1_ext, S2_ext}, "_msprop_vector_pack");
    if (isX86_MMX) S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
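  // For example, for an unsigned pack such as packuswb the shadow of each
  // input element is first widened to all-ones if any of its bits is poisoned,
  // then packed with the signed variant (packsswb); any poisoned bit in an
  // input element therefore poisons the entire corresponding output byte.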
  // Instrument sum-of-absolute-differences intrinsic.
  void handleVectorSadIntrinsic(IntrinsicInst &I) {
    const unsigned SignificantBitsPerResultElement = 16;
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? IntegerType::get(*MS.C, 64) : I.getType();
    unsigned ZeroBitsPerResultElement =
        ResTy->getScalarSizeInBits() - SignificantBitsPerResultElement;

    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateLShr(S, ZeroBitsPerResultElement);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  // Instrument multiply-add intrinsic.
  void handleVectorPmaddIntrinsic(IntrinsicInst &I,
                                  unsigned EltSizeInBits = 0) {
    bool isX86_MMX = I.getOperand(0)->getType()->isX86_MMXTy();
    Type *ResTy = isX86_MMX ? getMMXVectorTy(EltSizeInBits * 2) : I.getType();
    IRBuilder<> IRB(&I);
    Value *S = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    S = IRB.CreateBitCast(S, ResTy);
    S = IRB.CreateSExt(IRB.CreateICmpNE(S, Constant::getNullValue(ResTy)),
                       ResTy);
    S = IRB.CreateBitCast(S, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  // Instrument compare-packed intrinsic.
  // Basically, an or followed by sext(icmp ne 0) to end up with all-zeros or
  // all-ones shadow.
  void handleVectorComparePackedIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ResTy = getShadowTy(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = IRB.CreateSExt(
        IRB.CreateICmpNE(S0, Constant::getNullValue(ResTy)), ResTy);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }

  // Instrument compare-scalar intrinsic.
  // This handles both cmp* intrinsics which return the result in the first
  // element of a vector, and comi* which return the result as i32.
  void handleVectorCompareScalarIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *S0 = IRB.CreateOr(getShadow(&I, 0), getShadow(&I, 1));
    Value *S = LowerElementShadowExtend(IRB, S0, getShadowTy(&I));
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
  void handleStmxcsr(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    Value *ShadowPtr =
        getShadowOriginPtr(Addr, IRB, Ty, /*Alignment*/ 1, /*isStore*/ true)
            .first;

    IRB.CreateStore(getCleanShadow(Ty),
                    IRB.CreatePointerCast(ShadowPtr, Ty->getPointerTo()));

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);
  }

  void handleLdmxcsr(IntrinsicInst &I) {
    if (!InsertChecks) return;

    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    Type *Ty = IRB.getInt32Ty();
    unsigned Alignment = 1;
    Value *ShadowPtr, *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) =
        getShadowOriginPtr(Addr, IRB, Ty, Alignment, /*isStore*/ false);

    if (ClCheckAccessAddress)
      insertShadowCheck(Addr, &I);

    Value *Shadow = IRB.CreateAlignedLoad(Ty, ShadowPtr, Alignment, "_ldmxcsr");
    Value *Origin = MS.TrackOrigins ? IRB.CreateLoad(MS.OriginTy, OriginPtr)
                                    : getCleanOrigin();
    insertShadowCheck(Shadow, Origin, &I);
  }
  void handleMaskedStore(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *V = I.getArgOperand(0);
    Value *Addr = I.getArgOperand(1);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(2))->getZExtValue();
    Value *Mask = I.getArgOperand(3);
    Value *Shadow = getShadow(V);

    Value *ShadowPtr;
    Value *OriginPtr;
    std::tie(ShadowPtr, OriginPtr) = getShadowOriginPtr(
        Addr, IRB, Shadow->getType(), Align, /*isStore*/ true);

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      // Uninitialized mask is kind of like uninitialized address, but not as
      // scary.
      insertShadowCheck(Mask, &I);
    }

    IRB.CreateMaskedStore(Shadow, ShadowPtr, Align, Mask);

    if (MS.TrackOrigins) {
      auto &DL = F.getParent()->getDataLayout();
      paintOrigin(IRB, getOrigin(V), OriginPtr,
                  DL.getTypeStoreSize(Shadow->getType()),
                  std::max(Align, kMinOriginAlignment));
    }
  }
  bool handleMaskedLoad(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *Addr = I.getArgOperand(0);
    unsigned Align = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue();
    Value *Mask = I.getArgOperand(2);
    Value *PassThru = I.getArgOperand(3);

    Type *ShadowTy = getShadowTy(&I);
    Value *ShadowPtr, *OriginPtr;
    if (PropagateShadow) {
      std::tie(ShadowPtr, OriginPtr) =
          getShadowOriginPtr(Addr, IRB, ShadowTy, Align, /*isStore*/ false);
      setShadow(&I, IRB.CreateMaskedLoad(ShadowPtr, Align, Mask,
                                         getShadow(PassThru), "_msmaskedld"));
    } else {
      setShadow(&I, getCleanShadow(&I));
    }

    if (ClCheckAccessAddress) {
      insertShadowCheck(Addr, &I);
      insertShadowCheck(Mask, &I);
    }

    if (MS.TrackOrigins) {
      if (PropagateShadow) {
        // Choose between PassThru's and the loaded value's origins.
        Value *MaskedPassThruShadow = IRB.CreateAnd(
            getShadow(PassThru), IRB.CreateSExt(IRB.CreateNeg(Mask), ShadowTy));

        Value *Acc = IRB.CreateExtractElement(
            MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), 0));
        for (int i = 1, N = PassThru->getType()->getVectorNumElements(); i < N;
             ++i) {
          Value *More = IRB.CreateExtractElement(
              MaskedPassThruShadow, ConstantInt::get(IRB.getInt32Ty(), i));
          Acc = IRB.CreateOr(Acc, More);
        }

        Value *Origin = IRB.CreateSelect(
            IRB.CreateICmpNE(Acc, Constant::getNullValue(Acc->getType())),
            getOrigin(PassThru), IRB.CreateLoad(MS.OriginTy, OriginPtr));

        setOrigin(&I, Origin);
      } else {
        setOrigin(&I, getCleanOrigin());
      }
    }
    return true;
  }
  // Instrument BMI / BMI2 intrinsics.
  // All of these intrinsics are Z = I(X, Y)
  // where the types of all operands and the result match, and are either i32
  // or i64. The following instrumentation happens to work for all of them:
  //   Sz = I(Sx, Y) | (sext (Sy != 0))
  void handleBmiIntrinsic(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Type *ShadowTy = getShadowTy(&I);

    // If any bit of the mask operand is poisoned, then the whole thing is.
    Value *SMask = getShadow(&I, 1);
    SMask = IRB.CreateSExt(IRB.CreateICmpNE(SMask, getCleanShadow(ShadowTy)),
                           ShadowTy);
    // Apply the same intrinsic to the shadow of the first operand.
    Value *S = IRB.CreateCall(I.getCalledFunction(),
                              {getShadow(&I, 0), I.getOperand(1)});
    S = IRB.CreateOr(SMask, S);
    setShadow(&I, S);
    setOriginForNaryOp(I);
  }
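  // For example (illustrative), for %z = bzhi(%x, %y) this emits
  // Sz = bzhi(Sx, %y) | (Sy != 0 ? -1 : 0), matching the formula above: a
  // poisoned second operand poisons the whole result, while the first
  // operand's shadow is transformed the same way as the value.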
3007 void visitIntrinsicInst(IntrinsicInst
&I
) {
3008 switch (I
.getIntrinsicID()) {
3009 case Intrinsic::lifetime_start
:
3010 handleLifetimeStart(I
);
3012 case Intrinsic::launder_invariant_group
:
3013 case Intrinsic::strip_invariant_group
:
3014 handleInvariantGroup(I
);
3016 case Intrinsic::bswap
:
3019 case Intrinsic::masked_store
:
3020 handleMaskedStore(I
);
3022 case Intrinsic::masked_load
:
3023 handleMaskedLoad(I
);
3025 case Intrinsic::x86_sse_stmxcsr
:
3028 case Intrinsic::x86_sse_ldmxcsr
:
3031 case Intrinsic::x86_avx512_vcvtsd2usi64
:
3032 case Intrinsic::x86_avx512_vcvtsd2usi32
:
3033 case Intrinsic::x86_avx512_vcvtss2usi64
:
3034 case Intrinsic::x86_avx512_vcvtss2usi32
:
3035 case Intrinsic::x86_avx512_cvttss2usi64
:
3036 case Intrinsic::x86_avx512_cvttss2usi
:
3037 case Intrinsic::x86_avx512_cvttsd2usi64
:
3038 case Intrinsic::x86_avx512_cvttsd2usi
:
3039 case Intrinsic::x86_avx512_cvtusi2ss
:
3040 case Intrinsic::x86_avx512_cvtusi642sd
:
3041 case Intrinsic::x86_avx512_cvtusi642ss
:
3042 case Intrinsic::x86_sse2_cvtsd2si64
:
3043 case Intrinsic::x86_sse2_cvtsd2si
:
3044 case Intrinsic::x86_sse2_cvtsd2ss
:
3045 case Intrinsic::x86_sse2_cvttsd2si64
:
3046 case Intrinsic::x86_sse2_cvttsd2si
:
3047 case Intrinsic::x86_sse_cvtss2si64
:
3048 case Intrinsic::x86_sse_cvtss2si
:
3049 case Intrinsic::x86_sse_cvttss2si64
:
3050 case Intrinsic::x86_sse_cvttss2si
:
3051 handleVectorConvertIntrinsic(I
, 1);
3053 case Intrinsic::x86_sse_cvtps2pi
:
3054 case Intrinsic::x86_sse_cvttps2pi
:
3055 handleVectorConvertIntrinsic(I
, 2);
3058 case Intrinsic::x86_avx512_psll_w_512
:
3059 case Intrinsic::x86_avx512_psll_d_512
:
3060 case Intrinsic::x86_avx512_psll_q_512
:
3061 case Intrinsic::x86_avx512_pslli_w_512
:
3062 case Intrinsic::x86_avx512_pslli_d_512
:
3063 case Intrinsic::x86_avx512_pslli_q_512
:
3064 case Intrinsic::x86_avx512_psrl_w_512
:
3065 case Intrinsic::x86_avx512_psrl_d_512
:
3066 case Intrinsic::x86_avx512_psrl_q_512
:
3067 case Intrinsic::x86_avx512_psra_w_512
:
3068 case Intrinsic::x86_avx512_psra_d_512
:
3069 case Intrinsic::x86_avx512_psra_q_512
:
3070 case Intrinsic::x86_avx512_psrli_w_512
:
3071 case Intrinsic::x86_avx512_psrli_d_512
:
3072 case Intrinsic::x86_avx512_psrli_q_512
:
3073 case Intrinsic::x86_avx512_psrai_w_512
:
3074 case Intrinsic::x86_avx512_psrai_d_512
:
3075 case Intrinsic::x86_avx512_psrai_q_512
:
3076 case Intrinsic::x86_avx512_psra_q_256
:
3077 case Intrinsic::x86_avx512_psra_q_128
:
3078 case Intrinsic::x86_avx512_psrai_q_256
:
3079 case Intrinsic::x86_avx512_psrai_q_128
:
3080 case Intrinsic::x86_avx2_psll_w
:
3081 case Intrinsic::x86_avx2_psll_d
:
3082 case Intrinsic::x86_avx2_psll_q
:
3083 case Intrinsic::x86_avx2_pslli_w
:
3084 case Intrinsic::x86_avx2_pslli_d
:
3085 case Intrinsic::x86_avx2_pslli_q
:
3086 case Intrinsic::x86_avx2_psrl_w
:
3087 case Intrinsic::x86_avx2_psrl_d
:
3088 case Intrinsic::x86_avx2_psrl_q
:
3089 case Intrinsic::x86_avx2_psra_w
:
3090 case Intrinsic::x86_avx2_psra_d
:
3091 case Intrinsic::x86_avx2_psrli_w
:
3092 case Intrinsic::x86_avx2_psrli_d
:
3093 case Intrinsic::x86_avx2_psrli_q
:
3094 case Intrinsic::x86_avx2_psrai_w
:
3095 case Intrinsic::x86_avx2_psrai_d
:
3096 case Intrinsic::x86_sse2_psll_w
:
3097 case Intrinsic::x86_sse2_psll_d
:
3098 case Intrinsic::x86_sse2_psll_q
:
3099 case Intrinsic::x86_sse2_pslli_w
:
3100 case Intrinsic::x86_sse2_pslli_d
:
3101 case Intrinsic::x86_sse2_pslli_q
:
3102 case Intrinsic::x86_sse2_psrl_w
:
3103 case Intrinsic::x86_sse2_psrl_d
:
3104 case Intrinsic::x86_sse2_psrl_q
:
3105 case Intrinsic::x86_sse2_psra_w
:
3106 case Intrinsic::x86_sse2_psra_d
:
3107 case Intrinsic::x86_sse2_psrli_w
:
3108 case Intrinsic::x86_sse2_psrli_d
:
3109 case Intrinsic::x86_sse2_psrli_q
:
3110 case Intrinsic::x86_sse2_psrai_w
:
3111 case Intrinsic::x86_sse2_psrai_d
:
3112 case Intrinsic::x86_mmx_psll_w
:
3113 case Intrinsic::x86_mmx_psll_d
:
3114 case Intrinsic::x86_mmx_psll_q
:
3115 case Intrinsic::x86_mmx_pslli_w
:
3116 case Intrinsic::x86_mmx_pslli_d
:
3117 case Intrinsic::x86_mmx_pslli_q
:
3118 case Intrinsic::x86_mmx_psrl_w
:
3119 case Intrinsic::x86_mmx_psrl_d
:
3120 case Intrinsic::x86_mmx_psrl_q
:
3121 case Intrinsic::x86_mmx_psra_w
:
3122 case Intrinsic::x86_mmx_psra_d
:
3123 case Intrinsic::x86_mmx_psrli_w
:
3124 case Intrinsic::x86_mmx_psrli_d
:
3125 case Intrinsic::x86_mmx_psrli_q
:
    case Intrinsic::x86_mmx_psrai_w:
    case Intrinsic::x86_mmx_psrai_d:
      handleVectorShiftIntrinsic(I, /* Variable */ false);
      break;
    case Intrinsic::x86_avx2_psllv_d:
    case Intrinsic::x86_avx2_psllv_d_256:
    case Intrinsic::x86_avx512_psllv_d_512:
    case Intrinsic::x86_avx2_psllv_q:
    case Intrinsic::x86_avx2_psllv_q_256:
    case Intrinsic::x86_avx512_psllv_q_512:
    case Intrinsic::x86_avx2_psrlv_d:
    case Intrinsic::x86_avx2_psrlv_d_256:
    case Intrinsic::x86_avx512_psrlv_d_512:
    case Intrinsic::x86_avx2_psrlv_q:
    case Intrinsic::x86_avx2_psrlv_q_256:
    case Intrinsic::x86_avx512_psrlv_q_512:
    case Intrinsic::x86_avx2_psrav_d:
    case Intrinsic::x86_avx2_psrav_d_256:
    case Intrinsic::x86_avx512_psrav_d_512:
    case Intrinsic::x86_avx512_psrav_q_128:
    case Intrinsic::x86_avx512_psrav_q_256:
    case Intrinsic::x86_avx512_psrav_q_512:
      handleVectorShiftIntrinsic(I, /* Variable */ true);
      break;

    case Intrinsic::x86_sse2_packsswb_128:
    case Intrinsic::x86_sse2_packssdw_128:
    case Intrinsic::x86_sse2_packuswb_128:
    case Intrinsic::x86_sse41_packusdw:
    case Intrinsic::x86_avx2_packsswb:
    case Intrinsic::x86_avx2_packssdw:
    case Intrinsic::x86_avx2_packuswb:
    case Intrinsic::x86_avx2_packusdw:
      handleVectorPackIntrinsic(I);
      break;

    case Intrinsic::x86_mmx_packsswb:
    case Intrinsic::x86_mmx_packuswb:
      handleVectorPackIntrinsic(I, 16);
      break;

    case Intrinsic::x86_mmx_packssdw:
      handleVectorPackIntrinsic(I, 32);
      break;

    case Intrinsic::x86_mmx_psad_bw:
    case Intrinsic::x86_sse2_psad_bw:
    case Intrinsic::x86_avx2_psad_bw:
      handleVectorSadIntrinsic(I);
      break;

    case Intrinsic::x86_sse2_pmadd_wd:
    case Intrinsic::x86_avx2_pmadd_wd:
    case Intrinsic::x86_ssse3_pmadd_ub_sw_128:
    case Intrinsic::x86_avx2_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I);
      break;

    case Intrinsic::x86_ssse3_pmadd_ub_sw:
      handleVectorPmaddIntrinsic(I, 8);
      break;

    case Intrinsic::x86_mmx_pmadd_wd:
      handleVectorPmaddIntrinsic(I, 16);
      break;

    case Intrinsic::x86_sse_cmp_ss:
    case Intrinsic::x86_sse2_cmp_sd:
    case Intrinsic::x86_sse_comieq_ss:
    case Intrinsic::x86_sse_comilt_ss:
    case Intrinsic::x86_sse_comile_ss:
    case Intrinsic::x86_sse_comigt_ss:
    case Intrinsic::x86_sse_comige_ss:
    case Intrinsic::x86_sse_comineq_ss:
    case Intrinsic::x86_sse_ucomieq_ss:
    case Intrinsic::x86_sse_ucomilt_ss:
    case Intrinsic::x86_sse_ucomile_ss:
    case Intrinsic::x86_sse_ucomigt_ss:
    case Intrinsic::x86_sse_ucomige_ss:
    case Intrinsic::x86_sse_ucomineq_ss:
    case Intrinsic::x86_sse2_comieq_sd:
    case Intrinsic::x86_sse2_comilt_sd:
    case Intrinsic::x86_sse2_comile_sd:
    case Intrinsic::x86_sse2_comigt_sd:
    case Intrinsic::x86_sse2_comige_sd:
    case Intrinsic::x86_sse2_comineq_sd:
    case Intrinsic::x86_sse2_ucomieq_sd:
    case Intrinsic::x86_sse2_ucomilt_sd:
    case Intrinsic::x86_sse2_ucomile_sd:
    case Intrinsic::x86_sse2_ucomigt_sd:
    case Intrinsic::x86_sse2_ucomige_sd:
    case Intrinsic::x86_sse2_ucomineq_sd:
      handleVectorCompareScalarIntrinsic(I);
      break;

    case Intrinsic::x86_sse_cmp_ps:
    case Intrinsic::x86_sse2_cmp_pd:
      // FIXME: For x86_avx_cmp_pd_256 and x86_avx_cmp_ps_256 this function
      // generates reasonably looking IR that fails in the backend with "Do not
      // know how to split the result of this operator!".
      handleVectorComparePackedIntrinsic(I);
      break;

    case Intrinsic::x86_bmi_bextr_32:
    case Intrinsic::x86_bmi_bextr_64:
    case Intrinsic::x86_bmi_bzhi_32:
    case Intrinsic::x86_bmi_bzhi_64:
    case Intrinsic::x86_bmi_pdep_32:
    case Intrinsic::x86_bmi_pdep_64:
    case Intrinsic::x86_bmi_pext_32:
    case Intrinsic::x86_bmi_pext_64:
      handleBmiIntrinsic(I);
      break;

    case Intrinsic::is_constant:
      // The result of llvm.is.constant() is always defined.
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      break;

    default:
      if (!handleUnknownIntrinsic(I))
        visitInstruction(I);
      break;
    }
  }
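
  // Illustrative example (hypothetical C snippet, not taken from this file):
  // the is_constant case above needs no shadow check because the intrinsic's
  // result depends only on whether its operand folds to a constant, never on
  // the operand's runtime bits:
  //   int x;                        // uninitialized
  //   if (__builtin_constant_p(x))  // lowered to llvm.is.constant
  //     ...
  // The branch condition is treated as fully initialized even though x
  // itself is poisoned.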
  void visitCallSite(CallSite CS) {
    Instruction &I = *CS.getInstruction();
    assert(!I.getMetadata("nosanitize"));
    assert((CS.isCall() || CS.isInvoke() || CS.isCallBr()) &&
           "Unknown type of CallSite");
    if (CS.isCallBr() || (CS.isCall() && cast<CallInst>(&I)->isInlineAsm())) {
      // For inline asm (either a call to asm function, or callbr instruction),
      // do the usual thing: check argument shadow and mark all outputs as
      // clean. Note that any side effects of the inline asm that are not
      // immediately visible in its constraints are not handled.
      if (ClHandleAsmConservative && MS.CompileKernel)
        visitAsmInstruction(I);
      else
        visitInstruction(I);
      return;
    }
    if (CS.isCall()) {
      CallInst *Call = cast<CallInst>(&I);
      assert(!isa<IntrinsicInst>(&I) && "intrinsics are handled elsewhere");

      // We are going to insert code that relies on the fact that the callee
      // will become a non-readonly function after it is instrumented by us. To
      // prevent this code from being optimized out, mark that function
      // non-readonly in advance.
      if (Function *Func = Call->getCalledFunction()) {
        // Clear out readonly/readnone attributes.
        AttrBuilder B;
        B.addAttribute(Attribute::ReadOnly)
            .addAttribute(Attribute::ReadNone);
        Func->removeAttributes(AttributeList::FunctionIndex, B);
      }

      maybeMarkSanitizerLibraryCallNoBuiltin(Call, TLI);
    }
    IRBuilder<> IRB(&I);

    unsigned ArgOffset = 0;
    LLVM_DEBUG(dbgs() << "  CallSite: " << I << "\n");
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned i = ArgIt - CS.arg_begin();
      if (!A->getType()->isSized()) {
        LLVM_DEBUG(dbgs() << "Arg " << i << " is not sized: " << I << "\n");
        continue;
      }
      unsigned Size = 0;
      Value *Store = nullptr;
      // Compute the Shadow for arg even if it is ByVal, because
      // in that case getShadow() will copy the actual arg shadow to
      // __msan_param_tls.
      Value *ArgShadow = getShadow(A);
      Value *ArgShadowBase = getShadowPtrForArgument(A, IRB, ArgOffset);
      LLVM_DEBUG(dbgs() << "  Arg#" << i << ": " << *A
                        << " Shadow: " << *ArgShadow << "\n");
      bool ArgIsInitialized = false;
      const DataLayout &DL = F.getParent()->getDataLayout();
      if (CS.paramHasAttr(i, Attribute::ByVal)) {
        assert(A->getType()->isPointerTy() &&
               "ByVal argument is not a pointer!");
        Size = DL.getTypeAllocSize(A->getType()->getPointerElementType());
        if (ArgOffset + Size > kParamTLSSize) break;
        unsigned ParamAlignment = CS.getParamAlignment(i);
        unsigned Alignment = std::min(ParamAlignment, kShadowTLSAlignment);
        Value *AShadowPtr =
            getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ false)
                .first;

        Store = IRB.CreateMemCpy(ArgShadowBase, Alignment, AShadowPtr,
                                 Alignment, Size);
        // TODO(glider): need to copy origins.
      } else {
        Size = DL.getTypeAllocSize(A->getType());
        if (ArgOffset + Size > kParamTLSSize) break;
        Store = IRB.CreateAlignedStore(ArgShadow, ArgShadowBase,
                                       kShadowTLSAlignment);
        Constant *Cst = dyn_cast<Constant>(ArgShadow);
        if (Cst && Cst->isNullValue()) ArgIsInitialized = true;
      }
      if (MS.TrackOrigins && !ArgIsInitialized)
        IRB.CreateStore(getOrigin(A),
                        getOriginPtrForArgument(A, IRB, ArgOffset));
      assert(Size != 0 && Store != nullptr);
      LLVM_DEBUG(dbgs() << "  Param:" << *Store << "\n");
      ArgOffset += alignTo(Size, 8);
    }
    LLVM_DEBUG(dbgs() << "  done with call args\n");

    FunctionType *FT = CS.getFunctionType();
    if (FT->isVarArg()) {
      VAHelper->visitCallSite(CS, IRB);
    }

    // Now, get the shadow for the RetVal.
    if (!I.getType()->isSized()) return;
    // Don't emit the epilogue for musttail call returns.
    if (CS.isCall() && cast<CallInst>(&I)->isMustTailCall()) return;
    IRBuilder<> IRBBefore(&I);
    // Until we have full dynamic coverage, make sure the retval shadow is 0.
    Value *Base = getShadowPtrForRetval(&I, IRBBefore);
    IRBBefore.CreateAlignedStore(getCleanShadow(&I), Base, kShadowTLSAlignment);
    BasicBlock::iterator NextInsn;
    if (CS.isCall()) {
      NextInsn = ++I.getIterator();
      assert(NextInsn != I.getParent()->end());
    } else {
      BasicBlock *NormalDest = cast<InvokeInst>(&I)->getNormalDest();
      if (!NormalDest->getSinglePredecessor()) {
        // FIXME: this case is tricky, so we are just conservative here.
        // Perhaps we need to split the edge between this BB and NormalDest,
        // but a naive attempt to use SplitEdge leads to a crash.
        setShadow(&I, getCleanShadow(&I));
        setOrigin(&I, getCleanOrigin());
        return;
      }
      // FIXME: NextInsn is likely in a basic block that has not been visited
      // yet. Anything inserted there will be instrumented by MSan later!
      NextInsn = NormalDest->getFirstInsertionPt();
      assert(NextInsn != NormalDest->end() &&
             "Could not find insertion point for retval shadow load");
    }
    IRBuilder<> IRBAfter(&*NextInsn);
    Value *RetvalShadow = IRBAfter.CreateAlignedLoad(
        getShadowTy(&I), getShadowPtrForRetval(&I, IRBAfter),
        kShadowTLSAlignment, "_msret");
    setShadow(&I, RetvalShadow);
    if (MS.TrackOrigins)
      setOrigin(&I, IRBAfter.CreateLoad(MS.OriginTy,
                                        getOriginPtrForRetval(IRBAfter)));
  }
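
  // Illustrative example (hypothetical call, not taken from this file): for
  //   call void @foo(i32 %x, double %y)
  // the loop above lays argument shadow out in __msan_param_tls roughly as
  //   offset 0: shadow of %x (4 bytes, slot rounded up to 8 by alignTo(Size, 8))
  //   offset 8: shadow of %y (8 bytes)
  // and, when origin tracking is enabled and an argument may be poisoned, the
  // matching origin ids go to the slots returned by getOriginPtrForArgument().
  // The loop stops early (break) once a slot would run past kParamTLSSize.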
  bool isAMustTailRetVal(Value *RetVal) {
    if (auto *I = dyn_cast<BitCastInst>(RetVal)) {
      RetVal = I->getOperand(0);
    }
    if (auto *I = dyn_cast<CallInst>(RetVal)) {
      return I->isMustTailCall();
    }
    return false;
  }
  void visitReturnInst(ReturnInst &I) {
    IRBuilder<> IRB(&I);
    Value *RetVal = I.getReturnValue();
    if (!RetVal) return;
    // Don't emit the epilogue for musttail call returns.
    if (isAMustTailRetVal(RetVal)) return;
    Value *ShadowPtr = getShadowPtrForRetval(RetVal, IRB);
    if (CheckReturnValue) {
      insertShadowCheck(RetVal, &I);
      Value *Shadow = getCleanShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
    } else {
      Value *Shadow = getShadow(RetVal);
      IRB.CreateAlignedStore(Shadow, ShadowPtr, kShadowTLSAlignment);
      if (MS.TrackOrigins)
        IRB.CreateStore(getOrigin(RetVal), getOriginPtrForRetval(IRB));
    }
  }
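
  // Illustrative example (hypothetical function, not taken from this file):
  //   int f() { int x; return x; }
  // Here visitReturnInst stores the (poisoned) shadow of x into the retval
  // TLS slot via getShadowPtrForRetval(), and the instrumented caller reloads
  // it right after the call (see visitCallSite above). When CheckReturnValue
  // is set, the check happens at this return site instead and a clean shadow
  // is stored.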
  void visitPHINode(PHINode &I) {
    IRBuilder<> IRB(&I);
    if (!PropagateShadow) {
      setShadow(&I, getCleanShadow(&I));
      setOrigin(&I, getCleanOrigin());
      return;
    }

    ShadowPHINodes.push_back(&I);
    setShadow(&I, IRB.CreatePHI(getShadowTy(&I), I.getNumIncomingValues(),
                                "_msphi_s"));
    if (MS.TrackOrigins)
      setOrigin(&I, IRB.CreatePHI(MS.OriginTy, I.getNumIncomingValues(),
                                  "_msphi_o"));
  }
  Value *getLocalVarDescription(AllocaInst &I) {
    SmallString<2048> StackDescriptionStorage;
    raw_svector_ostream StackDescription(StackDescriptionStorage);
    // We create a string with a description of the stack allocation and
    // pass it into __msan_set_alloca_origin.
    // It will be printed by the run-time if stack-originated UMR is found.
    // The first 4 bytes of the string are set to '----' and will be replaced
    // by __msan_va_arg_overflow_size_tls at the first call.
    StackDescription << "----" << I.getName() << "@" << F.getName();
    return createPrivateNonConstGlobalForString(*F.getParent(),
                                                StackDescription.str());
  }
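
  // Illustrative example: for an alloca named %buf in a function main(), the
  // string built above is "----buf@main"; as the comment notes, the leading
  // four placeholder bytes are replaced at the first call.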
  void poisonAllocaUserspace(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    if (PoisonStack && ClPoisonStackWithCall) {
      IRB.CreateCall(MS.MsanPoisonStackFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    } else {
      Value *ShadowBase, *OriginBase;
      std::tie(ShadowBase, OriginBase) =
          getShadowOriginPtr(&I, IRB, IRB.getInt8Ty(), 1, /*isStore*/ true);

      Value *PoisonValue = IRB.getInt8(PoisonStack ? ClPoisonStackPattern : 0);
      IRB.CreateMemSet(ShadowBase, PoisonValue, Len, I.getAlignment());
    }

    if (PoisonStack && MS.TrackOrigins) {
      Value *Descr = getLocalVarDescription(I);
      IRB.CreateCall(MS.MsanSetAllocaOrigin4Fn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy()),
                      IRB.CreatePointerCast(&F, MS.IntptrTy)});
    }
  }
  void poisonAllocaKmsan(AllocaInst &I, IRBuilder<> &IRB, Value *Len) {
    Value *Descr = getLocalVarDescription(I);
    if (PoisonStack) {
      IRB.CreateCall(MS.MsanPoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len,
                      IRB.CreatePointerCast(Descr, IRB.getInt8PtrTy())});
    } else {
      IRB.CreateCall(MS.MsanUnpoisonAllocaFn,
                     {IRB.CreatePointerCast(&I, IRB.getInt8PtrTy()), Len});
    }
  }
  void instrumentAlloca(AllocaInst &I, Instruction *InsPoint = nullptr) {
    if (!InsPoint)
      InsPoint = &I;
    IRBuilder<> IRB(InsPoint->getNextNode());
    const DataLayout &DL = F.getParent()->getDataLayout();
    uint64_t TypeSize = DL.getTypeAllocSize(I.getAllocatedType());
    Value *Len = ConstantInt::get(MS.IntptrTy, TypeSize);
    if (I.isArrayAllocation())
      Len = IRB.CreateMul(Len, I.getArraySize());

    if (MS.CompileKernel)
      poisonAllocaKmsan(I, IRB, Len);
    else
      poisonAllocaUserspace(I, IRB, Len);
  }
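
  // Illustrative examples (hypothetical allocas, not taken from this file):
  //   %a = alloca [100 x i8]     ; Len = 100
  //   %b = alloca i32, i32 %n    ; Len = 4 * %n, computed at run time
  // Userspace instrumentation then poisons Len shadow bytes inline (or calls
  // MS.MsanPoisonStackFn when ClPoisonStackWithCall is set), while KMSAN
  // always delegates to the runtime through poisonAllocaKmsan().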
  void visitAllocaInst(AllocaInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
    // We'll get to this alloca later unless it's poisoned at the corresponding
    // llvm.lifetime.start.
    AllocaSet.insert(&I);
  }
  void visitSelectInst(SelectInst &I) {
    IRBuilder<> IRB(&I);
    // a = select b, c, d
    Value *B = I.getCondition();
    Value *C = I.getTrueValue();
    Value *D = I.getFalseValue();
    Value *Sb = getShadow(B);
    Value *Sc = getShadow(C);
    Value *Sd = getShadow(D);

    // Result shadow if condition shadow is 0.
    Value *Sa0 = IRB.CreateSelect(B, Sc, Sd);
    Value *Sa1;
    if (I.getType()->isAggregateType()) {
      // To avoid "sign extending" i1 to an arbitrary aggregate type, we just do
      // an extra "select". This results in much more compact IR.
      // Sa = select Sb, poisoned, (select b, Sc, Sd)
      Sa1 = getPoisonedShadow(getShadowTy(I.getType()));
    } else {
      // Sa = select Sb, [ (c^d) | Sc | Sd ], [ b ? Sc : Sd ]
      // If Sb (condition is poisoned), look for bits in c and d that are equal
      // and both unpoisoned.
      // If !Sb (condition is unpoisoned), simply pick one of Sc and Sd.

      // Cast arguments to shadow-compatible type.
      C = CreateAppToShadowCast(IRB, C);
      D = CreateAppToShadowCast(IRB, D);

      // Result shadow if condition shadow is 1.
      Sa1 = IRB.CreateOr({IRB.CreateXor(C, D), Sc, Sd});
    }
    Value *Sa = IRB.CreateSelect(Sb, Sa1, Sa0, "_msprop_select");
    setShadow(&I, Sa);
    if (MS.TrackOrigins) {
      // Origins are always i32, so any vector conditions must be flattened.
      // FIXME: consider tracking vector origins for app vectors?
      if (B->getType()->isVectorTy()) {
        Type *FlatTy = getShadowTyNoVec(B->getType());
        B = IRB.CreateICmpNE(IRB.CreateBitCast(B, FlatTy),
                             ConstantInt::getNullValue(FlatTy));
        Sb = IRB.CreateICmpNE(IRB.CreateBitCast(Sb, FlatTy),
                              ConstantInt::getNullValue(FlatTy));
      }
      // a = select b, c, d
      // Oa = Sb ? Ob : (b ? Oc : Od)
      setOrigin(
          &I, IRB.CreateSelect(Sb, getOrigin(I.getCondition()),
                               IRB.CreateSelect(B, getOrigin(I.getTrueValue()),
                                                getOrigin(I.getFalseValue()))));
    }
  }
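
  // Worked example for the non-aggregate case above (illustrative bit values,
  // not taken from this file):
  //   c = 0b1100, Sc = 0 (clean);  d = 0b1010, Sd = 0 (clean);  Sb = 1
  // gives Sa1 = (c ^ d) | Sc | Sd = 0b0110, so with a poisoned condition only
  // the bits where c and d disagree are reported as uninitialized; bits where
  // both choices agree and are clean stay clean.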
  void visitLandingPadInst(LandingPadInst &I) {
    // Do nothing.
    // See https://github.com/google/sanitizers/issues/504
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitCatchSwitchInst(CatchSwitchInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitFuncletPadInst(FuncletPadInst &I) {
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }

  void visitGetElementPtrInst(GetElementPtrInst &I) {
    handleShadowOr(I);
  }
  void visitExtractValueInst(ExtractValueInst &I) {
    IRBuilder<> IRB(&I);
    Value *Agg = I.getAggregateOperand();
    LLVM_DEBUG(dbgs() << "ExtractValue: " << I << "\n");
    Value *AggShadow = getShadow(Agg);
    LLVM_DEBUG(dbgs() << "   AggShadow:  " << *AggShadow << "\n");
    Value *ResShadow = IRB.CreateExtractValue(AggShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   ResShadow:  " << *ResShadow << "\n");
    setShadow(&I, ResShadow);
    setOriginForNaryOp(I);
  }
  void visitInsertValueInst(InsertValueInst &I) {
    IRBuilder<> IRB(&I);
    LLVM_DEBUG(dbgs() << "InsertValue: " << I << "\n");
    Value *AggShadow = getShadow(I.getAggregateOperand());
    Value *InsShadow = getShadow(I.getInsertedValueOperand());
    LLVM_DEBUG(dbgs() << "   AggShadow:  " << *AggShadow << "\n");
    LLVM_DEBUG(dbgs() << "   InsShadow:  " << *InsShadow << "\n");
    Value *Res = IRB.CreateInsertValue(AggShadow, InsShadow, I.getIndices());
    LLVM_DEBUG(dbgs() << "   Res:        " << *Res << "\n");
    setShadow(&I, Res);
    setOriginForNaryOp(I);
  }
  void dumpInst(Instruction &I) {
    if (CallInst *CI = dyn_cast<CallInst>(&I)) {
      errs() << "ZZZ call " << CI->getCalledFunction()->getName() << "\n";
    } else {
      errs() << "ZZZ " << I.getOpcodeName() << "\n";
    }
    errs() << "QQQ " << I << "\n";
  }

  void visitResumeInst(ResumeInst &I) {
    LLVM_DEBUG(dbgs() << "Resume: " << I << "\n");
    // Nothing to do here.
  }

  void visitCleanupReturnInst(CleanupReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CleanupReturn: " << CRI << "\n");
    // Nothing to do here.
  }

  void visitCatchReturnInst(CatchReturnInst &CRI) {
    LLVM_DEBUG(dbgs() << "CatchReturn: " << CRI << "\n");
    // Nothing to do here.
  }
  void instrumentAsmArgument(Value *Operand, Instruction &I, IRBuilder<> &IRB,
                             const DataLayout &DL, bool isOutput) {
    // For each assembly argument, we check its value for being initialized.
    // If the argument is a pointer, we assume it points to a single element
    // of the corresponding type (or to an 8-byte word, if the type is
    // unsized). Each such pointer is instrumented with a call to the runtime
    // library.
    Type *OpType = Operand->getType();
    // Check the operand value itself.
    insertShadowCheck(Operand, &I);
    if (!OpType->isPointerTy() || !isOutput) {
      assert(!isOutput);
      return;
    }
    Type *ElType = OpType->getPointerElementType();
    if (!ElType->isSized())
      return;
    int Size = DL.getTypeStoreSize(ElType);
    Value *Ptr = IRB.CreatePointerCast(Operand, IRB.getInt8PtrTy());
    Value *SizeVal = ConstantInt::get(MS.IntptrTy, Size);
    IRB.CreateCall(MS.MsanInstrumentAsmStoreFn, {Ptr, SizeVal});
  }
  /// Get the number of output arguments returned by pointers.
  int getNumOutputArgs(InlineAsm *IA, CallBase *CB) {
    int NumRetOutputs = 0;
    int NumOutputs = 0;
    Type *RetTy = cast<Value>(CB)->getType();
    if (!RetTy->isVoidTy()) {
      // Register outputs are returned via the CallInst return value.
      auto *ST = dyn_cast<StructType>(RetTy);
      if (ST)
        NumRetOutputs = ST->getNumElements();
      else
        NumRetOutputs = 1;
    }
    InlineAsm::ConstraintInfoVector Constraints = IA->ParseConstraints();
    for (size_t i = 0, n = Constraints.size(); i < n; i++) {
      InlineAsm::ConstraintInfo Info = Constraints[i];
      switch (Info.Type) {
      case InlineAsm::isOutput:
        NumOutputs++;
        break;
      default:
        break;
      }
    }
    return NumOutputs - NumRetOutputs;
  }
  void visitAsmInstruction(Instruction &I) {
    // Conservative inline assembly handling: check for poisoned shadow of
    // asm() arguments, then unpoison the result and all the memory locations
    // pointed to by those arguments.
    // An inline asm() statement in C++ contains lists of input and output
    // arguments used by the assembly code. These are mapped to operands of the
    // CallInst as follows:
    //  - nR register outputs ("=r") are returned by value in a single
    //    structure (SSA value of the CallInst);
    //  - nO other outputs ("=m" and others) are returned by pointer as first
    //    nO operands of the CallInst;
    //  - nI inputs ("r", "m" and others) are passed to CallInst as the
    //    remaining nI operands.
    // The total number of asm() arguments in the source is nR+nO+nI, and the
    // corresponding CallInst has nO+nI+1 operands (the last operand is the
    // function to be called).
    const DataLayout &DL = F.getParent()->getDataLayout();
    CallBase *CB = cast<CallBase>(&I);
    IRBuilder<> IRB(&I);
    InlineAsm *IA = cast<InlineAsm>(CB->getCalledValue());
    int OutputArgs = getNumOutputArgs(IA, CB);
    // The last operand of a CallInst is the function itself.
    int NumOperands = CB->getNumOperands() - 1;

    // Check input arguments. Doing so before unpoisoning output arguments, so
    // that we won't overwrite uninit values before checking them.
    for (int i = OutputArgs; i < NumOperands; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ false);
    }
    // Unpoison output arguments. This must happen before the actual InlineAsm
    // call, so that the shadow for memory published in the asm() statement
    // remains valid.
    for (int i = 0; i < OutputArgs; i++) {
      Value *Operand = CB->getOperand(i);
      instrumentAsmArgument(Operand, I, IRB, DL, /*isOutput*/ true);
    }

    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
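
  // Illustrative example (hypothetical asm statement, not taken from this
  // file):
  //   asm("..." : "=r"(a), "=m"(b) : "r"(c));
  // has nR = 1 (the "=r" output becomes the call's SSA result), nO = 1 (the
  // "=m" output is passed by pointer) and nI = 1 input, so the CallInst has
  // nO + nI + 1 = 3 operands: &b, c, and the InlineAsm callee. The loops
  // above then check c (operand 1) and unpoison the memory behind &b
  // (operand 0).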
  void visitInstruction(Instruction &I) {
    // Everything else: stop propagating and check for poisoned shadow.
    if (ClDumpStrictInstructions)
      dumpInst(I);
    LLVM_DEBUG(dbgs() << "DEFAULT: " << I << "\n");
    for (size_t i = 0, n = I.getNumOperands(); i < n; i++) {
      Value *Operand = I.getOperand(i);
      if (Operand->getType()->isSized())
        insertShadowCheck(Operand, &I);
    }
    setShadow(&I, getCleanShadow(&I));
    setOrigin(&I, getCleanOrigin());
  }
};
/// AMD64-specific implementation of VarArgHelper.
struct VarArgAMD64Helper : public VarArgHelper {
  // An unfortunate workaround for asymmetric lowering of va_arg stuff.
  // See a comment in visitCallSite for more details.
  static const unsigned AMD64GpEndOffset = 48; // AMD64 ABI Draft 0.99.6 p3.5.7
  static const unsigned AMD64FpEndOffsetSSE = 176;
  // If SSE is disabled, fp_offset in va_list is zero.
  static const unsigned AMD64FpEndOffsetNoSSE = AMD64GpEndOffset;

  unsigned AMD64FpEndOffset;
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgTLSOriginCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAMD64Helper(Function &F, MemorySanitizer &MS,
                    MemorySanitizerVisitor &MSV)
      : F(F), MS(MS), MSV(MSV) {
    AMD64FpEndOffset = AMD64FpEndOffsetSSE;
    for (const auto &Attr : F.getAttributes().getFnAttributes()) {
      if (Attr.isStringAttribute() &&
          (Attr.getKindAsString() == "target-features")) {
        if (Attr.getValueAsString().contains("-sse"))
          AMD64FpEndOffset = AMD64FpEndOffsetNoSSE;
        break;
      }
    }
  }

  ArgKind classifyArgument(Value *arg) {
    // A very rough approximation of X86_64 argument classification rules.
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy() || T->isX86_MMXTy())
      return AK_FloatingPoint;
    if (T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
      return AK_GeneralPurpose;
    if (T->isPointerTy())
      return AK_GeneralPurpose;
    return AK_Memory;
  }
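
  // Illustrative classification (hypothetical arguments): an i32 or an i8*
  // maps to AK_GeneralPurpose, a double or a <4 x float> vector to
  // AK_FloatingPoint, and anything else (e.g. an aggregate passed as a
  // literal struct value) falls back to AK_Memory and gets overflow-area
  // treatment in visitCallSite below.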
  // For VarArg functions, store the argument shadow in an ABI-specific format
  // that corresponds to va_list layout.
  // We do this because Clang lowers va_arg in the frontend, and this pass
  // only sees the low level code that deals with va_list internals.
  // A much easier alternative (provided that Clang emits va_arg instructions)
  // would have been to associate each live instance of va_list with a copy of
  // MSanParamTLS, and extract shadow on va_arg() call in the argument list
  // order.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GpOffset = 0;
    unsigned FpOffset = AMD64GpEndOffset;
    unsigned OverflowOffset = AMD64FpEndOffset;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        // ByVal arguments always go to the overflow area.
        // Fixed arguments passed through the overflow area will be stepped
        // over by va_start, so don't count them towards the offset.
        if (IsFixed)
          continue;
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        Value *ShadowBase = getShadowPtrForVAArgument(
            RealTy, IRB, OverflowOffset, alignTo(ArgSize, 8));
        Value *OriginBase = nullptr;
        if (MS.TrackOrigins)
          OriginBase = getOriginPtrForVAArgument(RealTy, IRB, OverflowOffset);
        OverflowOffset += alignTo(ArgSize, 8);
        if (!ShadowBase)
          continue;
        Value *ShadowPtr, *OriginPtr;
        std::tie(ShadowPtr, OriginPtr) =
            MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(), kShadowTLSAlignment,
                                   /*isStore*/ false);

        IRB.CreateMemCpy(ShadowBase, kShadowTLSAlignment, ShadowPtr,
                         kShadowTLSAlignment, ArgSize);
        if (MS.TrackOrigins)
          IRB.CreateMemCpy(OriginBase, kShadowTLSAlignment, OriginPtr,
                           kShadowTLSAlignment, ArgSize);
        // TODO(glider): need to copy origins.
      } else {
        ArgKind AK = classifyArgument(A);
        if (AK == AK_GeneralPurpose && GpOffset >= AMD64GpEndOffset)
          AK = AK_Memory;
        if (AK == AK_FloatingPoint && FpOffset >= AMD64FpEndOffset)
          AK = AK_Memory;
        Value *ShadowBase, *OriginBase = nullptr;
        switch (AK) {
          case AK_GeneralPurpose:
            ShadowBase =
                getShadowPtrForVAArgument(A->getType(), IRB, GpOffset, 8);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(A->getType(), IRB, GpOffset);
            GpOffset += 8;
            break;
          case AK_FloatingPoint:
            ShadowBase =
                getShadowPtrForVAArgument(A->getType(), IRB, FpOffset, 16);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(A->getType(), IRB, FpOffset);
            FpOffset += 16;
            break;
          case AK_Memory:
            if (IsFixed)
              continue;
            uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
            ShadowBase =
                getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset, 8);
            if (MS.TrackOrigins)
              OriginBase =
                  getOriginPtrForVAArgument(A->getType(), IRB, OverflowOffset);
            OverflowOffset += alignTo(ArgSize, 8);
        }
        // Take fixed arguments into account for GpOffset and FpOffset,
        // but don't actually store shadows for them.
        // TODO(glider): don't call get*PtrForVAArgument() for them.
        if (IsFixed)
          continue;
        if (!ShadowBase)
          continue;
        Value *Shadow = MSV.getShadow(A);
        IRB.CreateAlignedStore(Shadow, ShadowBase, kShadowTLSAlignment);
        if (MS.TrackOrigins) {
          Value *Origin = MSV.getOrigin(A);
          unsigned StoreSize = DL.getTypeStoreSize(Shadow->getType());
          MSV.paintOrigin(IRB, Origin, OriginBase, StoreSize,
                          std::max(kShadowTLSAlignment, kMinOriginAlignment));
        }
      }
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AMD64FpEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
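
  // Illustrative example (hypothetical printf-style call, not taken from this
  // file): for printf(fmt, i, d) with an int i and a double d, fmt and i
  // classify as AK_GeneralPurpose (GpOffset 0 and 8) and d as
  // AK_FloatingPoint (FpOffset starts at AMD64GpEndOffset == 48). Shadow is
  // stored only for the variadic i and d; fmt is fixed, so it merely advances
  // GpOffset. The final store publishes OverflowOffset - AMD64FpEndOffset
  // (0 here, since nothing spilled to the overflow area) through
  // MS.VAArgOverflowSizeTLS.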
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg_va_s");
  }
  /// Compute the origin address for a given va_arg.
  Value *getOriginPtrForVAArgument(Type *Ty, IRBuilder<> &IRB, int ArgOffset) {
    Value *Base = IRB.CreatePointerCast(MS.VAArgOriginTLS, MS.IntptrTy);
    // getOriginPtrForVAArgument() is always called after
    // getShadowPtrForVAArgument(), so __msan_va_arg_origin_tls can never
    // overflow.
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MS.OriginTy, 0),
                              "_msarg_va_o");
  }
  void unpoisonVAListTagForInst(IntrinsicInst &I) {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) =
        MSV.getShadowOriginPtr(VAListTag, IRB, IRB.getInt8Ty(), Alignment,
                               /*isStore*/ true);

    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 24, Alignment, false);
    // We shouldn't need to zero out the origins, as they're only checked for
    // nonzero shadow.
  }

  void visitVAStartInst(VAStartInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64)
      return;
    VAStartInstrumentationList.push_back(&I);
    unpoisonVAListTagForInst(I);
  }

  void visitVACopyInst(VACopyInst &I) override {
    if (F.getCallingConv() == CallingConv::Win64) return;
    unpoisonVAListTagForInst(I);
  }
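
  // For reference (System V x86_64 ABI layout, assumed here rather than
  // defined in this file), the 24 bytes unpoisoned above span one va_list
  // entry:
  //   struct __va_list_tag {
  //     unsigned gp_offset;       // byte 0
  //     unsigned fp_offset;       // byte 4
  //     void *overflow_arg_area;  // byte 8, read in finalizeInstrumentation
  //     void *reg_save_area;      // byte 16, read in finalizeInstrumentation
  //   };                          // 24 bytes total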
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AMD64FpEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
      if (MS.TrackOrigins) {
        VAArgTLSOriginCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
        IRB.CreateMemCpy(VAArgTLSOriginCopy, 8, MS.VAArgOriginTLS, 8, CopySize);
      }
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);

      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 16)),
          PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 16;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       AMD64FpEndOffset);
      if (MS.TrackOrigins)
        IRB.CreateMemCpy(RegSaveAreaOriginPtr, Alignment, VAArgTLSOriginCopy,
                         Alignment, AMD64FpEndOffset);
      Type *OverflowArgAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *OverflowArgAreaPtrPtr = IRB.CreateIntToPtr(
          IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                        ConstantInt::get(MS.IntptrTy, 8)),
          PointerType::get(OverflowArgAreaPtrTy, 0));
      Value *OverflowArgAreaPtr =
          IRB.CreateLoad(OverflowArgAreaPtrTy, OverflowArgAreaPtrPtr);
      Value *OverflowArgAreaShadowPtr, *OverflowArgAreaOriginPtr;
      std::tie(OverflowArgAreaShadowPtr, OverflowArgAreaOriginPtr) =
          MSV.getShadowOriginPtr(OverflowArgAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      Value *SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSCopy,
                                             AMD64FpEndOffset);
      IRB.CreateMemCpy(OverflowArgAreaShadowPtr, Alignment, SrcPtr, Alignment,
                       VAArgOverflowSize);
      if (MS.TrackOrigins) {
        SrcPtr = IRB.CreateConstGEP1_32(IRB.getInt8Ty(), VAArgTLSOriginCopy,
                                        AMD64FpEndOffset);
        IRB.CreateMemCpy(OverflowArgAreaOriginPtr, Alignment, SrcPtr, Alignment,
                         VAArgOverflowSize);
      }
    }
  }
};
/// MIPS64-specific implementation of VarArgHelper.
struct VarArgMIPS64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgMIPS64Helper(Function &F, MemorySanitizer &MS,
                     MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned VAArgOffset = 0;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin() +
         CS.getFunctionType()->getNumParams(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Triple TargetTriple(F.getParent()->getTargetTriple());
      Value *A = *ArgIt;
      Value *Base;
      uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
      if (TargetTriple.getArch() == Triple::mips64) {
        // Adjust the shadow for arguments with size < 8 to match the placement
        // of bits in a big endian system.
        if (ArgSize < 8)
          VAArgOffset += (8 - ArgSize);
      }
      Base = getShadowPtrForVAArgument(A->getType(), IRB, VAArgOffset, ArgSize);
      VAArgOffset += ArgSize;
      VAArgOffset = alignTo(VAArgOffset, 8);
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(), VAArgOffset);
    // Here using VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creation of
    // a new class member i.e. it is the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};
/// AArch64-specific implementation of VarArgHelper.
struct VarArgAArch64Helper : public VarArgHelper {
  static const unsigned kAArch64GrArgSize = 64;
  static const unsigned kAArch64VrArgSize = 128;

  static const unsigned AArch64GrBegOffset = 0;
  static const unsigned AArch64GrEndOffset = kAArch64GrArgSize;
  // Make VR space aligned to 16 bytes.
  static const unsigned AArch64VrBegOffset = AArch64GrEndOffset;
  static const unsigned AArch64VrEndOffset = AArch64VrBegOffset
                                             + kAArch64VrArgSize;
  static const unsigned AArch64VAEndOffset = AArch64VrEndOffset;

  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgOverflowSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  enum ArgKind { AK_GeneralPurpose, AK_FloatingPoint, AK_Memory };

  VarArgAArch64Helper(Function &F, MemorySanitizer &MS,
                      MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  ArgKind classifyArgument(Value *arg) {
    Type *T = arg->getType();
    if (T->isFPOrFPVectorTy())
      return AK_FloatingPoint;
    if ((T->isIntegerTy() && T->getPrimitiveSizeInBits() <= 64)
        || (T->isPointerTy()))
      return AK_GeneralPurpose;
    return AK_Memory;
  }
  // The instrumentation stores the argument shadow in a non ABI-specific
  // format because it does not know which argument is named (since Clang,
  // as in the x86_64 case, lowers the va_args in the frontend and this pass
  // only sees the low level code that deals with va_list internals).
  // The first 8 GR registers are saved in the first 64 bytes of the va_arg
  // TLS array, followed by the first 8 FP/SIMD registers, and then the
  // remaining arguments.
  // Using constant offsets within the va_arg TLS array allows fast copy
  // in the finalize instrumentation.
  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    unsigned GrOffset = AArch64GrBegOffset;
    unsigned VrOffset = AArch64VrBegOffset;
    unsigned OverflowOffset = AArch64VAEndOffset;

    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      ArgKind AK = classifyArgument(A);
      if (AK == AK_GeneralPurpose && GrOffset >= AArch64GrEndOffset)
        AK = AK_Memory;
      if (AK == AK_FloatingPoint && VrOffset >= AArch64VrEndOffset)
        AK = AK_Memory;
      Value *Base;
      switch (AK) {
        case AK_GeneralPurpose:
          Base = getShadowPtrForVAArgument(A->getType(), IRB, GrOffset, 8);
          GrOffset += 8;
          break;
        case AK_FloatingPoint:
          Base = getShadowPtrForVAArgument(A->getType(), IRB, VrOffset, 8);
          VrOffset += 16;
          break;
        case AK_Memory:
          // Don't count fixed arguments in the overflow area - va_start will
          // skip right over them.
          if (IsFixed)
            continue;
          uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
          Base = getShadowPtrForVAArgument(A->getType(), IRB, OverflowOffset,
                                           alignTo(ArgSize, 8));
          OverflowOffset += alignTo(ArgSize, 8);
          break;
      }
      // Count Gp/Vr fixed arguments to their respective offsets, but don't
      // bother to actually store a shadow.
      if (IsFixed)
        continue;
      if (!Base)
        continue;
      IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
    }
    Constant *OverflowSize =
        ConstantInt::get(IRB.getInt64Ty(), OverflowOffset - AArch64VAEndOffset);
    IRB.CreateStore(OverflowSize, MS.VAArgOverflowSizeTLS);
  }
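
  // Illustrative layout implied by the constants above: the va_arg TLS copy
  // holds
  //   [0, 64)    shadow of GR-register arguments (x0..x7, 8 bytes each)
  //   [64, 192)  shadow of FP/SIMD-register arguments (q0..q7, 16 bytes each)
  //   [192, ...) shadow of arguments that overflow to the stack
  // which is why OverflowOffset starts at AArch64VAEndOffset.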
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 32, Alignment, false);
  }
  // Retrieve a va_list field of 'void*' size.
  Value *getVAField64(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtrPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt64PtrTy(*MS.C));
    return IRB.CreateLoad(Type::getInt64Ty(*MS.C), SaveAreaPtrPtr);
  }

  // Retrieve a va_list field of 'int' size.
  Value *getVAField32(IRBuilder<> &IRB, Value *VAListTag, int offset) {
    Value *SaveAreaPtr =
        IRB.CreateIntToPtr(
            IRB.CreateAdd(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                          ConstantInt::get(MS.IntptrTy, offset)),
            Type::getInt32PtrTy(*MS.C));
    Value *SaveArea32 = IRB.CreateLoad(IRB.getInt32Ty(), SaveAreaPtr);
    return IRB.CreateSExt(SaveArea32, MS.IntptrTy);
  }
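
  // For reference (AAPCS64 va_list layout, assumed here rather than defined
  // in this file), the field offsets used by finalizeInstrumentation below
  // are:
  //   0: void *__stack    8: void *__gr_top    16: void *__vr_top
  //   24: int __gr_offs   28: int __vr_offs
  // getVAField64/getVAField32 simply load those fields through the va_list
  // tag pointer.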
  void finalizeInstrumentation() override {
    assert(!VAArgOverflowSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
      VAArgOverflowSize =
          IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
      Value *CopySize =
          IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, AArch64VAEndOffset),
                        VAArgOverflowSize);
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    Value *GrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64GrArgSize);
    Value *VrArgSize = ConstantInt::get(MS.IntptrTy, kAArch64VrArgSize);

    // Instrument va_start, copy va_list shadow from the backup copy of
    // the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());

      Value *VAListTag = OrigInst->getArgOperand(0);

      // The variadic ABI for AArch64 creates two areas to save the incoming
      // argument registers (one for 64-bit general registers x0-x7 and another
      // for 128-bit FP/SIMD registers v0-v7).
      // We need then to propagate the shadow arguments on both regions
      // 'va::__gr_top + va::__gr_offs' and 'va::__vr_top + va::__vr_offs'.
      // The remaining arguments are saved on shadow for 'va::stack'.
      // One caveat is that only the non-named arguments need to be
      // propagated, however at the call site instrumentation 'all' the
      // arguments are saved. So to copy the shadow values from the va_arg TLS
      // array we need to adjust the offset for both GR and VR fields based on
      // the __{gr,vr}_offs value (since they are stores based on incoming
      // named arguments).

      // Read the stack pointer from the va_list.
      Value *StackSaveAreaPtr = getVAField64(IRB, VAListTag, 0);

      // Read both the __gr_top and __gr_off and add them up.
      Value *GrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 8);
      Value *GrOffSaveArea = getVAField32(IRB, VAListTag, 24);

      Value *GrRegSaveAreaPtr = IRB.CreateAdd(GrTopSaveAreaPtr, GrOffSaveArea);

      // Read both the __vr_top and __vr_off and add them up.
      Value *VrTopSaveAreaPtr = getVAField64(IRB, VAListTag, 16);
      Value *VrOffSaveArea = getVAField32(IRB, VAListTag, 28);

      Value *VrRegSaveAreaPtr = IRB.CreateAdd(VrTopSaveAreaPtr, VrOffSaveArea);

      // It does not know how many named arguments are being used and, at the
      // call site, all the arguments were saved. Since __gr_off is defined as
      // '0 - ((8 - named_gr) * 8)', the idea is to just propagate the variadic
      // arguments by ignoring the bytes of shadow from named arguments.
      Value *GrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(GrArgSize, GrOffSaveArea);

      Value *GrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(GrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *GrSrcPtr = IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                              GrRegSaveAreaShadowPtrOff);
      Value *GrCopySize = IRB.CreateSub(GrArgSize, GrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(GrRegSaveAreaShadowPtr, 8, GrSrcPtr, 8, GrCopySize);

      // Again, but for FP/SIMD values.
      Value *VrRegSaveAreaShadowPtrOff =
          IRB.CreateAdd(VrArgSize, VrOffSaveArea);

      Value *VrRegSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(VrRegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 8, /*isStore*/ true)
              .first;

      Value *VrSrcPtr = IRB.CreateInBoundsGEP(
          IRB.getInt8Ty(),
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VrBegOffset)),
          VrRegSaveAreaShadowPtrOff);
      Value *VrCopySize = IRB.CreateSub(VrArgSize, VrRegSaveAreaShadowPtrOff);

      IRB.CreateMemCpy(VrRegSaveAreaShadowPtr, 8, VrSrcPtr, 8, VrCopySize);

      // And finally for remaining arguments.
      Value *StackSaveAreaShadowPtr =
          MSV.getShadowOriginPtr(StackSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 /*Alignment*/ 16, /*isStore*/ true)
              .first;

      Value *StackSrcPtr =
          IRB.CreateInBoundsGEP(IRB.getInt8Ty(), VAArgTLSCopy,
                                IRB.getInt32(AArch64VAEndOffset));

      IRB.CreateMemCpy(StackSaveAreaShadowPtr, 16, StackSrcPtr, 16,
                       VAArgOverflowSize);
    }
  }
};
/// PowerPC64-specific implementation of VarArgHelper.
struct VarArgPowerPC64Helper : public VarArgHelper {
  Function &F;
  MemorySanitizer &MS;
  MemorySanitizerVisitor &MSV;
  Value *VAArgTLSCopy = nullptr;
  Value *VAArgSize = nullptr;

  SmallVector<CallInst *, 16> VAStartInstrumentationList;

  VarArgPowerPC64Helper(Function &F, MemorySanitizer &MS,
                        MemorySanitizerVisitor &MSV) : F(F), MS(MS), MSV(MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {
    // For PowerPC, we need to deal with alignment of stack arguments -
    // they are mostly aligned to 8 bytes, but vectors and i128 arrays
    // are aligned to 16 bytes, byvals can be aligned to 8 or 16 bytes,
    // and QPX vectors are aligned to 32 bytes. For that reason, we
    // compute the current offset from the stack pointer (which is always
    // properly aligned), and the offset for the first vararg, then subtract
    // them.
    unsigned VAArgBase;
    Triple TargetTriple(F.getParent()->getTargetTriple());
    // Parameter save area starts at 48 bytes from frame pointer for ABIv1,
    // and 32 bytes for ABIv2. This is usually determined by target
    // endianness, but in theory could be overridden by function attribute.
    // For simplicity, we ignore it here (it'd only matter for QPX vectors).
    if (TargetTriple.getArch() == Triple::ppc64)
      VAArgBase = 48;
    else
      VAArgBase = 32;
    unsigned VAArgOffset = VAArgBase;
    const DataLayout &DL = F.getParent()->getDataLayout();
    for (CallSite::arg_iterator ArgIt = CS.arg_begin(), End = CS.arg_end();
         ArgIt != End; ++ArgIt) {
      Value *A = *ArgIt;
      unsigned ArgNo = CS.getArgumentNo(ArgIt);
      bool IsFixed = ArgNo < CS.getFunctionType()->getNumParams();
      bool IsByVal = CS.paramHasAttr(ArgNo, Attribute::ByVal);
      if (IsByVal) {
        assert(A->getType()->isPointerTy());
        Type *RealTy = A->getType()->getPointerElementType();
        uint64_t ArgSize = DL.getTypeAllocSize(RealTy);
        uint64_t ArgAlign = CS.getParamAlignment(ArgNo);
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (!IsFixed) {
          Value *Base = getShadowPtrForVAArgument(
              RealTy, IRB, VAArgOffset - VAArgBase, ArgSize);
          if (Base) {
            Value *AShadowPtr, *AOriginPtr;
            std::tie(AShadowPtr, AOriginPtr) =
                MSV.getShadowOriginPtr(A, IRB, IRB.getInt8Ty(),
                                       kShadowTLSAlignment, /*isStore*/ false);

            IRB.CreateMemCpy(Base, kShadowTLSAlignment, AShadowPtr,
                             kShadowTLSAlignment, ArgSize);
          }
        }
        VAArgOffset += alignTo(ArgSize, 8);
      } else {
        Value *Base;
        uint64_t ArgSize = DL.getTypeAllocSize(A->getType());
        uint64_t ArgAlign = 8;
        if (A->getType()->isArrayTy()) {
          // Arrays are aligned to element size, except for long double
          // arrays, which are aligned to 8 bytes.
          Type *ElementTy = A->getType()->getArrayElementType();
          if (!ElementTy->isPPC_FP128Ty())
            ArgAlign = DL.getTypeAllocSize(ElementTy);
        } else if (A->getType()->isVectorTy()) {
          // Vectors are naturally aligned.
          ArgAlign = DL.getTypeAllocSize(A->getType());
        }
        if (ArgAlign < 8)
          ArgAlign = 8;
        VAArgOffset = alignTo(VAArgOffset, ArgAlign);
        if (DL.isBigEndian()) {
          // Adjust the shadow for arguments with size < 8 to match the
          // placement of bits in a big endian system.
          if (ArgSize < 8)
            VAArgOffset += (8 - ArgSize);
        }
        if (!IsFixed) {
          Base = getShadowPtrForVAArgument(A->getType(), IRB,
                                           VAArgOffset - VAArgBase, ArgSize);
          if (Base)
            IRB.CreateAlignedStore(MSV.getShadow(A), Base, kShadowTLSAlignment);
        }
        VAArgOffset += ArgSize;
        VAArgOffset = alignTo(VAArgOffset, 8);
      }
      if (IsFixed)
        VAArgBase = VAArgOffset;
    }

    Constant *TotalVAArgSize = ConstantInt::get(IRB.getInt64Ty(),
                                                VAArgOffset - VAArgBase);
    // Here using VAArgOverflowSizeTLS as VAArgSizeTLS to avoid creation of
    // a new class member i.e. it is the total size of all VarArgs.
    IRB.CreateStore(TotalVAArgSize, MS.VAArgOverflowSizeTLS);
  }
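
  // Illustrative effect of the alignment handling above (hypothetical
  // varargs): an i64 advances VAArgOffset by 8; a <4 x i32> first aligns
  // VAArgOffset up to 16; and on a big-endian target an i32 is nudged
  // forward by 4 bytes so its shadow lines up with where the value's bytes
  // actually sit in the parameter save area.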
  /// Compute the shadow address for a given va_arg.
  Value *getShadowPtrForVAArgument(Type *Ty, IRBuilder<> &IRB,
                                   unsigned ArgOffset, unsigned ArgSize) {
    // Make sure we don't overflow __msan_va_arg_tls.
    if (ArgOffset + ArgSize > kParamTLSSize)
      return nullptr;
    Value *Base = IRB.CreatePointerCast(MS.VAArgTLS, MS.IntptrTy);
    Base = IRB.CreateAdd(Base, ConstantInt::get(MS.IntptrTy, ArgOffset));
    return IRB.CreateIntToPtr(Base, PointerType::get(MSV.getShadowTy(Ty), 0),
                              "_msarg");
  }

  void visitVAStartInst(VAStartInst &I) override {
    IRBuilder<> IRB(&I);
    VAStartInstrumentationList.push_back(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }

  void visitVACopyInst(VACopyInst &I) override {
    IRBuilder<> IRB(&I);
    Value *VAListTag = I.getArgOperand(0);
    Value *ShadowPtr, *OriginPtr;
    unsigned Alignment = 8;
    std::tie(ShadowPtr, OriginPtr) = MSV.getShadowOriginPtr(
        VAListTag, IRB, IRB.getInt8Ty(), Alignment, /*isStore*/ true);
    // Unpoison the whole __va_list_tag.
    // FIXME: magic ABI constants.
    IRB.CreateMemSet(ShadowPtr, Constant::getNullValue(IRB.getInt8Ty()),
                     /* size */ 8, Alignment, false);
  }
  void finalizeInstrumentation() override {
    assert(!VAArgSize && !VAArgTLSCopy &&
           "finalizeInstrumentation called twice");
    IRBuilder<> IRB(MSV.ActualFnStart->getFirstNonPHI());
    VAArgSize = IRB.CreateLoad(IRB.getInt64Ty(), MS.VAArgOverflowSizeTLS);
    Value *CopySize = IRB.CreateAdd(ConstantInt::get(MS.IntptrTy, 0),
                                    VAArgSize);

    if (!VAStartInstrumentationList.empty()) {
      // If there is a va_start in this function, make a backup copy of
      // va_arg_tls somewhere in the function entry block.
      VAArgTLSCopy = IRB.CreateAlloca(Type::getInt8Ty(*MS.C), CopySize);
      IRB.CreateMemCpy(VAArgTLSCopy, 8, MS.VAArgTLS, 8, CopySize);
    }

    // Instrument va_start.
    // Copy va_list shadow from the backup copy of the TLS contents.
    for (size_t i = 0, n = VAStartInstrumentationList.size(); i < n; i++) {
      CallInst *OrigInst = VAStartInstrumentationList[i];
      IRBuilder<> IRB(OrigInst->getNextNode());
      Value *VAListTag = OrigInst->getArgOperand(0);
      Type *RegSaveAreaPtrTy = Type::getInt64PtrTy(*MS.C);
      Value *RegSaveAreaPtrPtr =
          IRB.CreateIntToPtr(IRB.CreatePtrToInt(VAListTag, MS.IntptrTy),
                             PointerType::get(RegSaveAreaPtrTy, 0));
      Value *RegSaveAreaPtr =
          IRB.CreateLoad(RegSaveAreaPtrTy, RegSaveAreaPtrPtr);
      Value *RegSaveAreaShadowPtr, *RegSaveAreaOriginPtr;
      unsigned Alignment = 8;
      std::tie(RegSaveAreaShadowPtr, RegSaveAreaOriginPtr) =
          MSV.getShadowOriginPtr(RegSaveAreaPtr, IRB, IRB.getInt8Ty(),
                                 Alignment, /*isStore*/ true);
      IRB.CreateMemCpy(RegSaveAreaShadowPtr, Alignment, VAArgTLSCopy, Alignment,
                       CopySize);
    }
  }
};
/// A no-op implementation of VarArgHelper.
struct VarArgNoOpHelper : public VarArgHelper {
  VarArgNoOpHelper(Function &F, MemorySanitizer &MS,
                   MemorySanitizerVisitor &MSV) {}

  void visitCallSite(CallSite &CS, IRBuilder<> &IRB) override {}

  void visitVAStartInst(VAStartInst &I) override {}

  void visitVACopyInst(VACopyInst &I) override {}

  void finalizeInstrumentation() override {}
};
} // end anonymous namespace

static VarArgHelper *CreateVarArgHelper(Function &Func, MemorySanitizer &Msan,
                                        MemorySanitizerVisitor &Visitor) {
  // VarArg handling is only implemented on AMD64. False positives are possible
  // on other platforms.
  Triple TargetTriple(Func.getParent()->getTargetTriple());
  if (TargetTriple.getArch() == Triple::x86_64)
    return new VarArgAMD64Helper(Func, Msan, Visitor);
  else if (TargetTriple.isMIPS64())
    return new VarArgMIPS64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::aarch64)
    return new VarArgAArch64Helper(Func, Msan, Visitor);
  else if (TargetTriple.getArch() == Triple::ppc64 ||
           TargetTriple.getArch() == Triple::ppc64le)
    return new VarArgPowerPC64Helper(Func, Msan, Visitor);
  else
    return new VarArgNoOpHelper(Func, Msan, Visitor);
}
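
// Illustrative dispatch: a module targeting x86_64-unknown-linux-gnu gets
// VarArgAMD64Helper, while a target with no helper here (e.g. 32-bit x86)
// falls back to VarArgNoOpHelper, so shadow for its variadic arguments is
// simply not tracked.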
bool MemorySanitizer::sanitizeFunction(Function &F, TargetLibraryInfo &TLI) {
  if (!CompileKernel && F.getName() == kMsanModuleCtorName)
    return false;

  MemorySanitizerVisitor Visitor(F, *this, TLI);

  // Clear out readonly/readnone attributes.
  AttrBuilder B;
  B.addAttribute(Attribute::ReadOnly)
      .addAttribute(Attribute::ReadNone);
  F.removeAttributes(AttributeList::FunctionIndex, B);

  return Visitor.runOnFunction();
}