<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">

<html>
<head>
  <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <meta name="author" content="Chris Lattner">
  <link rel="stylesheet" href="../llvm.css" type="text/css">
</head>

<body>

<h1>Kaleidoscope: Adding JIT and Optimizer Support</h1>

<ul>
<li><a href="index.html">Up to Tutorial Index</a></li>
<li>Chapter 4
  <ol>
    <li><a href="#intro">Chapter 4 Introduction</a></li>
    <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
    <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
    <li><a href="#jit">Adding a JIT Compiler</a></li>
    <li><a href="#code">Full Code Listing</a></li>
  </ol>
</li>
<li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
Flow</li>
</ul>

<div class="doc_author">
  <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
</div>

<!-- *********************************************************************** -->
<h2><a name="intro">Chapter 4 Introduction</a></h2>
<!-- *********************************************************************** -->

<div>
<p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
language and added support for generating LLVM IR. This chapter describes
two new techniques: adding optimizer support to your language, and adding JIT
compiler support. These additions demonstrate how to get nice, efficient code
for the Kaleidoscope language.</p>

</div>

<!-- *********************************************************************** -->
<h2><a name="trivialconstfold">Trivial Constant Folding</a></h2>
<!-- *********************************************************************** -->

<div>
<p>
Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
it does not produce wonderful code. The IRBuilder, however, does give us
obvious optimizations when compiling simple code:</p>

<div class="doc_code">
<pre>
ready&gt; <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 3.000000e+00, %x
        ret double %addtmp
}
</pre>
</div>

<p>This code is not a literal transcription of the AST built by parsing the
input. That would be:</p>

<div class="doc_code">
<pre>
ready&gt; <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 2.000000e+00, 1.000000e+00
        %addtmp1 = fadd double %addtmp, %x
        ret double %addtmp1
}
</pre>
</div>
<p>Constant folding, as seen above, is a very common and very
important optimization: so much so that many language implementors implement
constant folding support in their AST representation.</p>

<p>With LLVM, you don't need this support in the AST. Since all calls to build
LLVM IR go through the LLVM IR builder, the builder itself checks for a
constant folding opportunity whenever you call it. If there is one, it simply
does the constant fold and returns the constant instead of creating an
instruction.</p>

<p>Well, that was easy :). In practice, we recommend always using
<tt>IRBuilder</tt> when generating code like this. It has no
"syntactic overhead" for its use (you don't have to uglify your compiler with
constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a macro
preprocessor or that use a lot of constants).</p>
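<p>To make the tradeoff concrete, here is a minimal sketch of the kind of check you
would otherwise have to sprinkle through your own codegen if you folded constants by
hand. It is not part of the tutorial's code; it just shows, for the '+' case of
<tt>BinaryExprAST::Codegen</tt>, roughly what <tt>IRBuilder</tt> already does for
you:</p>

<div class="doc_code">
<pre>
// Hypothetical manual folding, assuming L and R are the operand Values already
// generated for a '+' node.  IRBuilder::CreateFAdd performs this check itself,
// so the tutorial never needs it.
if (ConstantFP *LC = dyn_cast&lt;ConstantFP&gt;(L))
  if (ConstantFP *RC = dyn_cast&lt;ConstantFP&gt;(R))
    return ConstantFP::get(getGlobalContext(),
                           APFloat(LC-&gt;getValueAPF().convertToDouble() +
                                   RC-&gt;getValueAPF().convertToDouble()));
</pre>
</div>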
<p>On the other hand, the <tt>IRBuilder</tt> is limited by the fact
that it does all of its analysis inline with the code as it is built. If you
take a slightly more complex example:</p>

<div class="doc_code">
<pre>
ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready&gt; Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 3.000000e+00, %x
        %addtmp1 = fadd double %x, 3.000000e+00
        %multmp = fmul double %addtmp, %addtmp1
        ret double %multmp
}
</pre>
</div>
<p>In this case, the LHS and RHS of the multiplication are the same value. We'd
really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
of computing "<tt>x+3</tt>" twice.</p>

<p>Unfortunately, no amount of local analysis will be able to detect and correct
this. It requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
to delete the redundant add instruction. Fortunately, LLVM provides a broad
range of optimizations that you can use, in the form of "passes".</p>
</div>

<!-- *********************************************************************** -->
<h2><a name="optimizerpasses">LLVM Optimization Passes</a></h2>
<!-- *********************************************************************** -->

<div>

<p>LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
to the mistaken notion that one set of optimizations is right for all languages
and for all situations. LLVM allows a compiler implementor to make complete
decisions about what optimizations to use, in which order, and in what
situation.</p>
<p>As a concrete example, LLVM supports "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program). It also
supports and includes "per-function" passes, which operate on a single
function at a time without looking at other functions. For more information
on passes and how they are run, see the <a href="../WritingAnLLVMPass.html">How
to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
Passes</a>.</p>
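<p>To make the distinction concrete, here is a hedged sketch of what a whole-module
pipeline would look like; the tutorial itself only uses per-function passes, so none
of this appears in the code below. It assumes the <tt>TheModule</tt> global from the
listing at the end of this chapter, and <tt>createFunctionInliningPass()</tt> comes
from "llvm/Transforms/IPO.h":</p>

<div class="doc_code">
<pre>
// A module-level pipeline uses PassManager rather than FunctionPassManager.
PassManager OurMPM;
// Inlining is a classic whole-module transformation: it has to see both the
// caller and the callee.
OurMPM.add(createFunctionInliningPass());
// Run every module-level pass over the entire module at once.
OurMPM.run(*TheModule);
</pre>
</div>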
<p>For Kaleidoscope, we are currently generating functions on the fly, one at
a time, as the user types them in. We aren't shooting for the ultimate
optimization experience in this setting, but we also want to catch the easy and
quick stuff where possible. As such, we will choose to run a few per-function
optimizations as the user types the function in. If we wanted to make a "static
Kaleidoscope compiler", we would use exactly the code we have now, except that
we would defer running the optimizer until the entire file has been parsed.</p>
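<p>For completeness, here is a hedged sketch of what that deferred variant might look
like. It assumes the <tt>TheModule</tt> and <tt>TheFPM</tt> globals from the listing
below, and simply sweeps the whole module once parsing is finished instead of
optimizing each function as it is defined:</p>

<div class="doc_code">
<pre>
// Hypothetical "static compiler" variant: parse and codegen the whole file
// first (MainLoop), then optimize every function in one sweep.
MainLoop();

TheFPM-&gt;doInitialization();
for (Module::iterator F = TheModule-&gt;begin(), E = TheModule-&gt;end(); F != E; ++F)
  TheFPM-&gt;run(*F);
TheFPM-&gt;doFinalization();
</pre>
</div>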
<p>In order to get per-function optimizations going, we need to set up a
<a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
organize the LLVM optimizations that we want to run. Once we have that, we can
add a set of optimizations to run. The code looks like this:</p>

<div class="doc_code">
<pre>
  FunctionPassManager OurFPM(TheModule);

  // Set up the optimizer pipeline.  Start with registering info about how the
  // target lays out data structures.
  OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
  // Provide basic AliasAnalysis support for GVN.
  OurFPM.add(createBasicAliasAnalysisPass());
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  OurFPM.add(createInstructionCombiningPass());
  // Reassociate expressions.
  OurFPM.add(createReassociatePass());
  // Eliminate Common SubExpressions.
  OurFPM.add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  OurFPM.add(createCFGSimplificationPass());

  OurFPM.doInitialization();

  // Set the global so the code gen can use this.
  TheFPM = &amp;OurFPM;

  // Run the main "interpreter loop" now.
  MainLoop();
</pre>
</div>
<p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>". It
requires a pointer to the <tt>Module</tt> to construct itself. Once it is set
up, we use a series of "add" calls to add a bunch of LLVM passes. The first
pass is basically boilerplate: it adds a pass so that later optimizations know
how the data structures in the program are laid out. The
"<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
to in the next section.</p>

<p>In this case, we choose to add four optimization passes. The passes we chose
here are a pretty standard set of "cleanup" optimizations that are useful for
a wide variety of code. I won't delve into what they do, but believe me,
they are a good starting place :).</p>
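<p>The nice thing about this setup is that the pipeline is entirely yours to tune:
adding another pass is a single <tt>add</tt> call. As a hedged example (not used in
this chapter; it becomes relevant once mutable variables are introduced later in the
tutorial), the classic "mem2reg" pass could be appended like this:</p>

<div class="doc_code">
<pre>
  // Promote stack slots (allocas) to SSA registers; declared in
  // "llvm/Transforms/Scalar.h", which the listing already includes.
  OurFPM.add(createPromoteMemoryToRegisterPass());
</pre>
</div>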
<p>Once the <tt>FunctionPassManager</tt> is set up, we need to make use of it.
We do this by running it after our newly created function is constructed (in
<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>

<div class="doc_code">
<pre>
if (Value *RetVal = Body-&gt;Codegen()) {
  // Finish off the function.
  Builder.CreateRet(RetVal);

  // Validate the generated code, checking for consistency.
  verifyFunction(*TheFunction);

  <b>// Optimize the function.
  TheFPM-&gt;run(*TheFunction);</b>

  return TheFunction;
}
</pre>
</div>

<p>As you can see, this is pretty straightforward. The
<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
improving (hopefully) its body. With this in place, we can try our test above
again:</p>
<div class="doc_code">
<pre>
ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready&gt; Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double %x, 3.000000e+00
        %multmp = fmul double %addtmp, %addtmp
        ret double %multmp
}
</pre>
</div>
<p>As expected, we now get our nicely optimized code, saving a floating point
add instruction from every execution of this function.</p>

<p>LLVM provides a wide variety of optimizations that can be used in certain
circumstances. Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete. Another good source of
ideas is to look at the passes that <tt>llvm-gcc</tt> or
<tt>llvm-ld</tt> run. The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.</p>

<p>Now that we have reasonable code coming out of our front-end, let's talk about
executing it!</p>

</div>
<!-- *********************************************************************** -->
<h2><a name="jit">Adding a JIT Compiler</a></h2>
<!-- *********************************************************************** -->

<div>

<p>Code that is available in LLVM IR can have a wide variety of tools
applied to it. For example, you can run optimizations on it (as we did above),
you can dump it out in textual or binary forms, you can compile the code to an
assembly file (.s) for some target, or you can JIT compile it. The nice thing
about the LLVM IR representation is that it is the "common currency" between
many different parts of the compiler.
</p>
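<p>As a small illustration of that "common currency" idea, here is a hedged sketch of
dumping our module to a bitcode file. It is not something this tutorial does, and it
assumes two extra headers ("llvm/Bitcode/ReaderWriter.h" and
"llvm/Support/raw_ostream.h") beyond what the listing below includes:</p>

<div class="doc_code">
<pre>
// Write TheModule out as LLVM bitcode, which other tools (opt, llc, lli)
// can then consume.
std::string ErrInfo;
raw_fd_ostream Out("kaleidoscope.bc", ErrInfo, raw_fd_ostream::F_Binary);
if (ErrInfo.empty())
  WriteBitcodeToFile(TheModule, Out);
</pre>
</div>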
<p>In this section, we'll add JIT compiler support to our interpreter. The
basic idea that we want for Kaleidoscope is to have the user enter function
bodies as they do now, but immediately evaluate the top-level expressions they
type in. For example, if they type in "1 + 2;", we should evaluate and print
out 3. If they define a function, they should be able to call it from the
command line.</p>

<p>In order to do this, we first declare and initialize the JIT. This is done
by adding a global variable and a call in <tt>main</tt>:</p>

<div class="doc_code">
<pre>
<b>static ExecutionEngine *TheExecutionEngine;</b>
...
int main() {
  ...
  <b>// Create the JIT.  This takes ownership of the module.
  TheExecutionEngine = EngineBuilder(TheModule).create();</b>
  ...
}
</pre>
</div>
<p>This creates an abstract "Execution Engine", which can be either a JIT
compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
for you if one is available for your platform; otherwise, it will fall back to
the interpreter.</p>
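<p>If you ever want to take that choice out of LLVM's hands, <tt>EngineBuilder</tt>
lets you ask for a specific kind of engine. A hedged sketch, not needed for the
tutorial, which is happy with the default:</p>

<div class="doc_code">
<pre>
// Explicitly request the JIT (or EngineKind::Interpreter), and capture any
// error message if engine creation fails.
std::string ErrStr;
TheExecutionEngine = EngineBuilder(TheModule)
                         .setEngineKind(EngineKind::JIT)
                         .setErrorStr(&amp;ErrStr)
                         .create();
if (!TheExecutionEngine)
  fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
</pre>
</div>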
<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
There are a variety of APIs that are useful, but the simplest one is the
"<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
specified LLVM Function and returns a function pointer to the generated machine
code. In our case, this means that we can change the code that parses a
top-level expression to look like this:</p>

<div class="doc_code">
<pre>
static void HandleTopLevelExpression() {
  // Evaluate a top-level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F-&gt;Codegen()) {
      LF-&gt;dump();  // Dump the function for exposition purposes.

      <b>// JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())(intptr_t)FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());</b>
</pre>
</div>
<p>Recall that we compile top-level expressions into a self-contained LLVM
function that takes no arguments and returns the computed double. Because the
LLVM JIT compiler matches the native platform ABI, this means that you can just
cast the result pointer to a function pointer of that type and call it directly.
As a result, there is no difference between JIT compiled code and native machine
code that is statically linked into your application.</p>
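<p>The same trick works for any Kaleidoscope function once the user has defined it;
only the function pointer type changes. A hedged sketch, assuming the user has
already entered a hypothetical one-argument function named <tt>square</tt>:</p>

<div class="doc_code">
<pre>
// Look the function up in the module, JIT it, and call it with a double
// argument, exactly as we do for the anonymous zero-argument functions.
if (Function *F = TheModule-&gt;getFunction("square")) {
  void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(F);
  double (*Square)(double) = (double (*)(double))(intptr_t)FPtr;
  fprintf(stderr, "square(3.0) = %f\n", Square(3.0));
}
</pre>
</div>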
<p>With just these two changes, let's see how Kaleidoscope works now!</p>

<div class="doc_code">
<pre>
ready&gt; <b>4+5;</b>
define double @""() {
entry:
        ret double 9.000000e+00
}

<em>Evaluated to 9.000000</em>
</pre>
</div>
<p>Well, this looks like it is basically working. The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top-level expression that is typed in. This demonstrates very basic
functionality, but can we do more?</p>
<div class="doc_code">
<pre>
ready&gt; <b>def testfunc(x y) x + y*2; </b>
Read function definition:
define double @testfunc(double %x, double %y) {
entry:
        %multmp = fmul double %y, 2.000000e+00
        %addtmp = fadd double %multmp, %x
        ret double %addtmp
}

ready&gt; <b>testfunc(4, 10);</b>
define double @""() {
entry:
        %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
        ret double %calltmp
}

<em>Evaluated to 24.000000</em>
</pre>
</div>
<p>This illustrates that we can now call user code, but there is something a bit
subtle going on here. Note that we only invoke the JIT on the anonymous
functions that <em>call testfunc</em>, but we never invoke it
on <em>testfunc</em> itself. What actually happened here is that the JIT
scanned for all non-JIT'd functions transitively called from the anonymous
function and compiled all of them before returning
from <tt>getPointerToFunction()</tt>.</p>

<p>The JIT provides a number of other more advanced interfaces for things like
freeing allocated machine code, re-JIT'ing functions to update them, etc.
However, even with this simple code, we get some surprisingly powerful
capabilities - check this out (I removed the dump of the anonymous functions;
you should get the idea by now :) :</p>
<div class="doc_code">
<pre>
ready&gt; <b>extern sin(x);</b>
Read extern:
declare double @sin(double)

ready&gt; <b>extern cos(x);</b>
Read extern:
declare double @cos(double)

ready&gt; <b>sin(1.0);</b>
<em>Evaluated to 0.841471</em>

ready&gt; <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
Read function definition:
define double @foo(double %x) {
entry:
        %calltmp = call double @sin(double %x)
        %multmp = fmul double %calltmp, %calltmp
        %calltmp2 = call double @cos(double %x)
        %multmp4 = fmul double %calltmp2, %calltmp2
        %addtmp = fadd double %multmp, %multmp4
        ret double %addtmp
}

ready&gt; <b>foo(4.0);</b>
<em>Evaluated to 1.000000</em>
</pre>
</div>
<p>Whoa, how does the JIT know about sin and cos? The answer is surprisingly
simple: in this example, the JIT started execution of a function and got to a
function call. It realized that the function was not yet JIT compiled and
invoked the standard set of routines to resolve the function. In this case,
there is no body defined for the function, so the JIT ended up calling
"<tt>dlsym("sin")</tt>" on the Kaleidoscope process itself.
Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
patches up calls in the module to call the libm version of <tt>sin</tt>
directly.</p>

<p>The LLVM JIT provides a number of interfaces (look in the
<tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
resolved. It allows you to establish explicit mappings between IR objects and
addresses (useful for LLVM global variables that you want to map to static
tables, for example), allows you to dynamically decide on the fly based on the
function name, and even allows you to have the JIT compile functions lazily the
first time they're called.</p>
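<p>As a hedged sketch of two of those knobs (neither is used by the tutorial), here is
what an explicit mapping and lazy compilation look like, assuming the
<tt>TheModule</tt> and <tt>TheExecutionEngine</tt> globals and the <tt>putchard</tt>
helper from the listing below; consult <tt>ExecutionEngine.h</tt> for the exact
contracts:</p>

<div class="doc_code">
<pre>
// Map the IR declaration of putchard straight to its address in this process,
// so resolution never has to go through dlsym at all.
if (Function *F = TheModule-&gt;getFunction("putchard"))
  TheExecutionEngine-&gt;addGlobalMapping(F, (void*)(intptr_t)putchard);

// Ask the JIT to generate machine code for a function only when it is first
// called, rather than eagerly when getPointerToFunction() is invoked.
TheExecutionEngine-&gt;DisableLazyCompilation(false);
</pre>
</div>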
<p>One interesting application of this is that we can now extend the language
by writing arbitrary C++ code to implement operations. For example, if we add:
</p>

<div class="doc_code">
<pre>
/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
</pre>
</div>

<p>Now we can produce simple output to the console by using things like:
"<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
the console (120 is the ASCII code for 'x'). Similar code could be used to
implement file I/O, console input, and many other capabilities in
Kaleidoscope.</p>
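<p>Any helper written in the same style works. For instance, a hedged sketch of a
hypothetical <tt>printd</tt> helper for printing full values rather than single
characters (it is not part of this chapter's listing):</p>

<div class="doc_code">
<pre>
/// printd - prints a double followed by a newline, returns 0.  Callable from
/// Kaleidoscope after "extern printd(x);".
extern "C"
double printd(double X) {
  printf("%f\n", X);
  return 0;
}
</pre>
</div>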
<p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
this point, we can compile a non-Turing-complete programming language, and
optimize and JIT compile it in a user-driven way. Next up we'll look into <a
href="LangImpl5.html">extending the language with control flow constructs</a>,
tackling some interesting LLVM IR issues along the way.</p>

</div>

<!-- *********************************************************************** -->
<h2><a name="code">Full Code Listing</a></h2>
<!-- *********************************************************************** -->

<div>

<p>
Here is the complete code listing for our running example, enhanced with the
LLVM JIT and optimizer. To build this example, use:
</p>

<div class="doc_code">
<pre>
# Compile
g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
# Run
./toy
</pre>
</div>

<p>
If you are compiling this on Linux, make sure to add the "-rdynamic" option
as well. This makes sure that the external functions are resolved properly
at runtime.</p>

<p>Here is the code:</p>
501 <div class="doc_code">
502 <pre>
503 #include "llvm/DerivedTypes.h"
504 #include "llvm/ExecutionEngine/ExecutionEngine.h"
505 #include "llvm/ExecutionEngine/JIT.h"
506 #include "llvm/LLVMContext.h"
507 #include "llvm/Module.h"
508 #include "llvm/PassManager.h"
509 #include "llvm/Analysis/Verifier.h"
510 #include "llvm/Analysis/Passes.h"
511 #include "llvm/Target/TargetData.h"
512 #include "llvm/Target/TargetSelect.h"
513 #include "llvm/Transforms/Scalar.h"
514 #include "llvm/Support/IRBuilder.h"
515 #include &lt;cstdio&gt;
516 #include &lt;string&gt;
517 #include &lt;map&gt;
518 #include &lt;vector&gt;
519 using namespace llvm;
521 //===----------------------------------------------------------------------===//
522 // Lexer
523 //===----------------------------------------------------------------------===//
525 // The lexer returns tokens [0-255] if it is an unknown character, otherwise one
526 // of these for known things.
527 enum Token {
528 tok_eof = -1,
530 // commands
531 tok_def = -2, tok_extern = -3,
533 // primary
534 tok_identifier = -4, tok_number = -5
537 static std::string IdentifierStr; // Filled in if tok_identifier
538 static double NumVal; // Filled in if tok_number
540 /// gettok - Return the next token from standard input.
541 static int gettok() {
542 static int LastChar = ' ';
544 // Skip any whitespace.
545 while (isspace(LastChar))
546 LastChar = getchar();
548 if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
549 IdentifierStr = LastChar;
550 while (isalnum((LastChar = getchar())))
551 IdentifierStr += LastChar;
553 if (IdentifierStr == "def") return tok_def;
554 if (IdentifierStr == "extern") return tok_extern;
555 return tok_identifier;
558 if (isdigit(LastChar) || LastChar == '.') { // Number: [0-9.]+
559 std::string NumStr;
560 do {
561 NumStr += LastChar;
562 LastChar = getchar();
563 } while (isdigit(LastChar) || LastChar == '.');
565 NumVal = strtod(NumStr.c_str(), 0);
566 return tok_number;
569 if (LastChar == '#') {
570 // Comment until end of line.
571 do LastChar = getchar();
572 while (LastChar != EOF &amp;&amp; LastChar != '\n' &amp;&amp; LastChar != '\r');
574 if (LastChar != EOF)
575 return gettok();
578 // Check for end of file. Don't eat the EOF.
579 if (LastChar == EOF)
580 return tok_eof;
582 // Otherwise, just return the character as its ascii value.
583 int ThisChar = LastChar;
584 LastChar = getchar();
585 return ThisChar;
588 //===----------------------------------------------------------------------===//
589 // Abstract Syntax Tree (aka Parse Tree)
590 //===----------------------------------------------------------------------===//
592 /// ExprAST - Base class for all expression nodes.
593 class ExprAST {
594 public:
595 virtual ~ExprAST() {}
596 virtual Value *Codegen() = 0;
599 /// NumberExprAST - Expression class for numeric literals like "1.0".
600 class NumberExprAST : public ExprAST {
601 double Val;
602 public:
603 NumberExprAST(double val) : Val(val) {}
604 virtual Value *Codegen();
607 /// VariableExprAST - Expression class for referencing a variable, like "a".
608 class VariableExprAST : public ExprAST {
609 std::string Name;
610 public:
611 VariableExprAST(const std::string &amp;name) : Name(name) {}
612 virtual Value *Codegen();
615 /// BinaryExprAST - Expression class for a binary operator.
616 class BinaryExprAST : public ExprAST {
617 char Op;
618 ExprAST *LHS, *RHS;
619 public:
620 BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
621 : Op(op), LHS(lhs), RHS(rhs) {}
622 virtual Value *Codegen();
625 /// CallExprAST - Expression class for function calls.
626 class CallExprAST : public ExprAST {
627 std::string Callee;
628 std::vector&lt;ExprAST*&gt; Args;
629 public:
630 CallExprAST(const std::string &amp;callee, std::vector&lt;ExprAST*&gt; &amp;args)
631 : Callee(callee), Args(args) {}
632 virtual Value *Codegen();
635 /// PrototypeAST - This class represents the "prototype" for a function,
636 /// which captures its name, and its argument names (thus implicitly the number
637 /// of arguments the function takes).
638 class PrototypeAST {
639 std::string Name;
640 std::vector&lt;std::string&gt; Args;
641 public:
642 PrototypeAST(const std::string &amp;name, const std::vector&lt;std::string&gt; &amp;args)
643 : Name(name), Args(args) {}
645 Function *Codegen();
648 /// FunctionAST - This class represents a function definition itself.
649 class FunctionAST {
650 PrototypeAST *Proto;
651 ExprAST *Body;
652 public:
653 FunctionAST(PrototypeAST *proto, ExprAST *body)
654 : Proto(proto), Body(body) {}
656 Function *Codegen();
659 //===----------------------------------------------------------------------===//
660 // Parser
661 //===----------------------------------------------------------------------===//
663 /// CurTok/getNextToken - Provide a simple token buffer. CurTok is the current
664 /// token the parser is looking at. getNextToken reads another token from the
665 /// lexer and updates CurTok with its results.
666 static int CurTok;
667 static int getNextToken() {
668 return CurTok = gettok();
671 /// BinopPrecedence - This holds the precedence for each binary operator that is
672 /// defined.
673 static std::map&lt;char, int&gt; BinopPrecedence;
675 /// GetTokPrecedence - Get the precedence of the pending binary operator token.
676 static int GetTokPrecedence() {
677 if (!isascii(CurTok))
678 return -1;
680 // Make sure it's a declared binop.
681 int TokPrec = BinopPrecedence[CurTok];
682 if (TokPrec &lt;= 0) return -1;
683 return TokPrec;
686 /// Error* - These are little helper functions for error handling.
687 ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
688 PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
689 FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
691 static ExprAST *ParseExpression();
693 /// identifierexpr
694 /// ::= identifier
695 /// ::= identifier '(' expression* ')'
696 static ExprAST *ParseIdentifierExpr() {
697 std::string IdName = IdentifierStr;
699 getNextToken(); // eat identifier.
701 if (CurTok != '(') // Simple variable ref.
702 return new VariableExprAST(IdName);
704 // Call.
705 getNextToken(); // eat (
706 std::vector&lt;ExprAST*&gt; Args;
707 if (CurTok != ')') {
708 while (1) {
709 ExprAST *Arg = ParseExpression();
710 if (!Arg) return 0;
711 Args.push_back(Arg);
713 if (CurTok == ')') break;
715 if (CurTok != ',')
716 return Error("Expected ')' or ',' in argument list");
717 getNextToken();
721 // Eat the ')'.
722 getNextToken();
724 return new CallExprAST(IdName, Args);
727 /// numberexpr ::= number
728 static ExprAST *ParseNumberExpr() {
729 ExprAST *Result = new NumberExprAST(NumVal);
730 getNextToken(); // consume the number
731 return Result;
734 /// parenexpr ::= '(' expression ')'
735 static ExprAST *ParseParenExpr() {
736 getNextToken(); // eat (.
737 ExprAST *V = ParseExpression();
738 if (!V) return 0;
740 if (CurTok != ')')
741 return Error("expected ')'");
742 getNextToken(); // eat ).
743 return V;
746 /// primary
747 /// ::= identifierexpr
748 /// ::= numberexpr
749 /// ::= parenexpr
750 static ExprAST *ParsePrimary() {
751 switch (CurTok) {
752 default: return Error("unknown token when expecting an expression");
753 case tok_identifier: return ParseIdentifierExpr();
754 case tok_number: return ParseNumberExpr();
755 case '(': return ParseParenExpr();
759 /// binoprhs
760 /// ::= ('+' primary)*
761 static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
762 // If this is a binop, find its precedence.
763 while (1) {
764 int TokPrec = GetTokPrecedence();
766 // If this is a binop that binds at least as tightly as the current binop,
767 // consume it, otherwise we are done.
768 if (TokPrec &lt; ExprPrec)
769 return LHS;
771 // Okay, we know this is a binop.
772 int BinOp = CurTok;
773 getNextToken(); // eat binop
775 // Parse the primary expression after the binary operator.
776 ExprAST *RHS = ParsePrimary();
777 if (!RHS) return 0;
779 // If BinOp binds less tightly with RHS than the operator after RHS, let
780 // the pending operator take RHS as its LHS.
781 int NextPrec = GetTokPrecedence();
782 if (TokPrec &lt; NextPrec) {
783 RHS = ParseBinOpRHS(TokPrec+1, RHS);
784 if (RHS == 0) return 0;
787 // Merge LHS/RHS.
788 LHS = new BinaryExprAST(BinOp, LHS, RHS);
792 /// expression
793 /// ::= primary binoprhs
795 static ExprAST *ParseExpression() {
796 ExprAST *LHS = ParsePrimary();
797 if (!LHS) return 0;
799 return ParseBinOpRHS(0, LHS);
802 /// prototype
803 /// ::= id '(' id* ')'
804 static PrototypeAST *ParsePrototype() {
805 if (CurTok != tok_identifier)
806 return ErrorP("Expected function name in prototype");
808 std::string FnName = IdentifierStr;
809 getNextToken();
811 if (CurTok != '(')
812 return ErrorP("Expected '(' in prototype");
814 std::vector&lt;std::string&gt; ArgNames;
815 while (getNextToken() == tok_identifier)
816 ArgNames.push_back(IdentifierStr);
817 if (CurTok != ')')
818 return ErrorP("Expected ')' in prototype");
820 // success.
821 getNextToken(); // eat ')'.
823 return new PrototypeAST(FnName, ArgNames);
826 /// definition ::= 'def' prototype expression
827 static FunctionAST *ParseDefinition() {
828 getNextToken(); // eat def.
829 PrototypeAST *Proto = ParsePrototype();
830 if (Proto == 0) return 0;
832 if (ExprAST *E = ParseExpression())
833 return new FunctionAST(Proto, E);
834 return 0;
837 /// toplevelexpr ::= expression
838 static FunctionAST *ParseTopLevelExpr() {
839 if (ExprAST *E = ParseExpression()) {
840 // Make an anonymous proto.
841 PrototypeAST *Proto = new PrototypeAST("", std::vector&lt;std::string&gt;());
842 return new FunctionAST(Proto, E);
844 return 0;
847 /// external ::= 'extern' prototype
848 static PrototypeAST *ParseExtern() {
849 getNextToken(); // eat extern.
850 return ParsePrototype();
853 //===----------------------------------------------------------------------===//
854 // Code Generation
855 //===----------------------------------------------------------------------===//
857 static Module *TheModule;
858 static IRBuilder&lt;&gt; Builder(getGlobalContext());
859 static std::map&lt;std::string, Value*&gt; NamedValues;
860 static FunctionPassManager *TheFPM;
862 Value *ErrorV(const char *Str) { Error(Str); return 0; }
864 Value *NumberExprAST::Codegen() {
865 return ConstantFP::get(getGlobalContext(), APFloat(Val));
868 Value *VariableExprAST::Codegen() {
869 // Look this variable up in the function.
870 Value *V = NamedValues[Name];
871 return V ? V : ErrorV("Unknown variable name");
874 Value *BinaryExprAST::Codegen() {
875 Value *L = LHS-&gt;Codegen();
876 Value *R = RHS-&gt;Codegen();
877 if (L == 0 || R == 0) return 0;
879 switch (Op) {
880 case '+': return Builder.CreateFAdd(L, R, "addtmp");
881 case '-': return Builder.CreateFSub(L, R, "subtmp");
882 case '*': return Builder.CreateFMul(L, R, "multmp");
883 case '&lt;':
884 L = Builder.CreateFCmpULT(L, R, "cmptmp");
885 // Convert bool 0/1 to double 0.0 or 1.0
886 return Builder.CreateUIToFP(L, Type::getDoubleTy(getGlobalContext()),
887 "booltmp");
888 default: return ErrorV("invalid binary operator");
892 Value *CallExprAST::Codegen() {
893 // Look up the name in the global module table.
894 Function *CalleeF = TheModule-&gt;getFunction(Callee);
895 if (CalleeF == 0)
896 return ErrorV("Unknown function referenced");
898 // If argument mismatch error.
899 if (CalleeF-&gt;arg_size() != Args.size())
900 return ErrorV("Incorrect # arguments passed");
902 std::vector&lt;Value*&gt; ArgsV;
903 for (unsigned i = 0, e = Args.size(); i != e; ++i) {
904 ArgsV.push_back(Args[i]-&gt;Codegen());
905 if (ArgsV.back() == 0) return 0;
908 return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
911 Function *PrototypeAST::Codegen() {
912 // Make the function type: double(double,double) etc.
913 std::vector&lt;const Type*&gt; Doubles(Args.size(),
914 Type::getDoubleTy(getGlobalContext()));
915 FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()),
916 Doubles, false);
918 Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);
920 // If F conflicted, there was already something named 'Name'. If it has a
921 // body, don't allow redefinition or reextern.
922 if (F-&gt;getName() != Name) {
923 // Delete the one we just made and get the existing one.
924 F-&gt;eraseFromParent();
925 F = TheModule-&gt;getFunction(Name);
927 // If F already has a body, reject this.
928 if (!F-&gt;empty()) {
929 ErrorF("redefinition of function");
930 return 0;
933 // If F took a different number of args, reject.
934 if (F-&gt;arg_size() != Args.size()) {
935 ErrorF("redefinition of function with different # args");
936 return 0;
940 // Set names for all arguments.
941 unsigned Idx = 0;
942 for (Function::arg_iterator AI = F-&gt;arg_begin(); Idx != Args.size();
943 ++AI, ++Idx) {
944 AI-&gt;setName(Args[Idx]);
946 // Add arguments to variable symbol table.
947 NamedValues[Args[Idx]] = AI;
950 return F;
953 Function *FunctionAST::Codegen() {
954 NamedValues.clear();
956 Function *TheFunction = Proto-&gt;Codegen();
957 if (TheFunction == 0)
958 return 0;
960 // Create a new basic block to start insertion into.
961 BasicBlock *BB = BasicBlock::Create(getGlobalContext(), "entry", TheFunction);
962 Builder.SetInsertPoint(BB);
964 if (Value *RetVal = Body-&gt;Codegen()) {
965 // Finish off the function.
966 Builder.CreateRet(RetVal);
968 // Validate the generated code, checking for consistency.
969 verifyFunction(*TheFunction);
971 // Optimize the function.
972 TheFPM-&gt;run(*TheFunction);
974 return TheFunction;
977 // Error reading body, remove function.
978 TheFunction-&gt;eraseFromParent();
979 return 0;
982 //===----------------------------------------------------------------------===//
983 // Top-Level parsing and JIT Driver
984 //===----------------------------------------------------------------------===//
986 static ExecutionEngine *TheExecutionEngine;
988 static void HandleDefinition() {
989 if (FunctionAST *F = ParseDefinition()) {
990 if (Function *LF = F-&gt;Codegen()) {
991 fprintf(stderr, "Read function definition:");
992 LF-&gt;dump();
994 } else {
995 // Skip token for error recovery.
996 getNextToken();
1000 static void HandleExtern() {
1001 if (PrototypeAST *P = ParseExtern()) {
1002 if (Function *F = P-&gt;Codegen()) {
1003 fprintf(stderr, "Read extern: ");
1004 F-&gt;dump();
1006 } else {
1007 // Skip token for error recovery.
1008 getNextToken();
1012 static void HandleTopLevelExpression() {
1013 // Evaluate a top-level expression into an anonymous function.
1014 if (FunctionAST *F = ParseTopLevelExpr()) {
1015 if (Function *LF = F-&gt;Codegen()) {
1016 // JIT the function, returning a function pointer.
1017 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
1019 // Cast it to the right type (takes no arguments, returns a double) so we
1020 // can call it as a native function.
1021 double (*FP)() = (double (*)())(intptr_t)FPtr;
1022 fprintf(stderr, "Evaluated to %f\n", FP());
1024 } else {
1025 // Skip token for error recovery.
1026 getNextToken();
1030 /// top ::= definition | external | expression | ';'
1031 static void MainLoop() {
1032 while (1) {
1033 fprintf(stderr, "ready&gt; ");
1034 switch (CurTok) {
1035 case tok_eof: return;
1036 case ';': getNextToken(); break; // ignore top-level semicolons.
1037 case tok_def: HandleDefinition(); break;
1038 case tok_extern: HandleExtern(); break;
1039 default: HandleTopLevelExpression(); break;
1044 //===----------------------------------------------------------------------===//
1045 // "Library" functions that can be "extern'd" from user code.
1046 //===----------------------------------------------------------------------===//
1048 /// putchard - putchar that takes a double and returns 0.
1049 extern "C"
1050 double putchard(double X) {
1051 putchar((char)X);
1052 return 0;
1055 //===----------------------------------------------------------------------===//
1056 // Main driver code.
1057 //===----------------------------------------------------------------------===//
1059 int main() {
1060 InitializeNativeTarget();
1061 LLVMContext &amp;Context = getGlobalContext();
1063 // Install standard binary operators.
1064 // 1 is lowest precedence.
1065 BinopPrecedence['&lt;'] = 10;
1066 BinopPrecedence['+'] = 20;
1067 BinopPrecedence['-'] = 20;
1068 BinopPrecedence['*'] = 40; // highest.
1070 // Prime the first token.
1071 fprintf(stderr, "ready&gt; ");
1072 getNextToken();
1074 // Make the module, which holds all the code.
1075 TheModule = new Module("my cool jit", Context);
1077 // Create the JIT. This takes ownership of the module.
1078 std::string ErrStr;
1079 TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&amp;ErrStr).create();
1080 if (!TheExecutionEngine) {
1081 fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
1082 exit(1);
1085 FunctionPassManager OurFPM(TheModule);
1087 // Set up the optimizer pipeline. Start with registering info about how the
1088 // target lays out data structures.
1089 OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
1090 // Provide basic AliasAnalysis support for GVN.
1091 OurFPM.add(createBasicAliasAnalysisPass());
1092 // Do simple "peephole" optimizations and bit-twiddling optzns.
1093 OurFPM.add(createInstructionCombiningPass());
1094 // Reassociate expressions.
1095 OurFPM.add(createReassociatePass());
1096 // Eliminate Common SubExpressions.
1097 OurFPM.add(createGVNPass());
1098 // Simplify the control flow graph (deleting unreachable blocks, etc).
1099 OurFPM.add(createCFGSimplificationPass());
1101 OurFPM.doInitialization();
1103 // Set the global so the code gen can use this.
1104 TheFPM = &amp;OurFPM;
1106 // Run the main "interpreter loop" now.
1107 MainLoop();
1109 TheFPM = 0;
1111 // Print out all of the generated code.
1112 TheModule-&gt;dump();
1114 return 0;
1116 </pre>
1117 </div>
<a href="LangImpl5.html">Next: Extending the language: control flow</a>
</div>

<!-- *********************************************************************** -->
<hr>
<address>
  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
  <a href="http://validator.w3.org/check/referer"><img
  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>

  <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
  <a href="http://llvm.org/">The LLVM Compiler Infrastructure</a><br>
  Last modified: $Date$
</address>
</body>
</html>