1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2 "http://www.w3.org/TR/html4/strict.dtd">
4 <html>
5 <head>
6 <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
7 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
8 <meta name="author" content="Chris Lattner">
9 <link rel="stylesheet" href="../llvm.css" type="text/css">
10 </head>
12 <body>
14 <div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>
16 <ul>
17 <li><a href="index.html">Up to Tutorial Index</a></li>
18 <li>Chapter 4
19 <ol>
20 <li><a href="#intro">Chapter 4 Introduction</a></li>
21 <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
22 <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
23 <li><a href="#jit">Adding a JIT Compiler</a></li>
24 <li><a href="#code">Full Code Listing</a></li>
25 </ol>
26 </li>
27 <li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
28 Flow</li>
29 </ul>
31 <div class="doc_author">
32 <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
33 </div>
35 <!-- *********************************************************************** -->
36 <div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
37 <!-- *********************************************************************** -->
39 <div class="doc_text">
41 <p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
42 with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
43 language and added support for generating LLVM IR. This chapter describes
44 two new techniques: adding optimizer support to your language, and adding JIT
45 compiler support. These additions will demonstrate how to get nice, efficient code
46 for the Kaleidoscope language.</p>
48 </div>
50 <!-- *********************************************************************** -->
51 <div class="doc_section"><a name="trivialconstfold">Trivial Constant
52 Folding</a></div>
53 <!-- *********************************************************************** -->
55 <div class="doc_text">
57 <p>
58 Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
59 it does not produce wonderful code. The IRBuilder, however, does give us
60 obvious optimizations when compiling simple code:</p>
62 <div class="doc_code">
63 <pre>
64 ready&gt; <b>def test(x) 1+2+x;</b>
65 Read function definition:
66 define double @test(double %x) {
67 entry:
68 %addtmp = add double 3.000000e+00, %x
ret double %addtmp
}
71 </pre>
72 </div>
74 <p>This code is not a literal transcription of the AST built by parsing the
75 input. That would be:
77 <div class="doc_code">
78 <pre>
79 ready&gt; <b>def test(x) 1+2+x;</b>
80 Read function definition:
81 define double @test(double %x) {
82 entry:
83 %addtmp = add double 2.000000e+00, 1.000000e+00
84 %addtmp1 = add double %addtmp, %x
ret double %addtmp1
}
87 </pre>
88 </div>
<p>Constant folding, as seen above, is a very common and very
important optimization: so much so that many language implementors implement
constant folding support in their AST representation.</p>
<p>With LLVM, you don't need this support in the AST. Since all calls to build
LLVM IR go through the LLVM IR builder, the builder itself checks whether
there is a constant folding opportunity when you call it. If so, it just does
the constant fold and returns the constant instead of creating an instruction.</p>
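<p>To see what this means in practice, here is a minimal sketch (the variable
names are illustrative) that feeds two constants through the same
<tt>IRBuilder</tt> we use in this tutorial:</p>

<div class="doc_code">
<pre>
Value *L = ConstantFP::get(APFloat(1.0));
Value *R = ConstantFP::get(APFloat(2.0));
// Both operands are constants, so the builder folds the add itself:
Value *Sum = Builder.CreateAdd(L, R, "addtmp");
// No 'add' instruction is emitted; Sum is simply the constant 3.0.
if (isa&lt;ConstantFP&gt;(Sum))
  fprintf(stderr, "folded to a constant, no instruction created\n");
</pre>
</div>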
99 <p>Well, that was easy :). In practice, we recommend always using
100 <tt>IRBuilder</tt> when generating code like this. It has no
101 "syntactic overhead" for its use (you don't have to uglify your compiler with
102 constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a macro
104 preprocessor or that use a lot of constants).</p>
106 <p>On the other hand, the <tt>IRBuilder</tt> is limited by the fact
107 that it does all of its analysis inline with the code as it is built. If you
108 take a slightly more complex example:</p>
110 <div class="doc_code">
111 <pre>
112 ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
113 ready> Read function definition:
114 define double @test(double %x) {
115 entry:
116 %addtmp = add double 3.000000e+00, %x
117 %addtmp1 = add double %x, 3.000000e+00
118 %multmp = mul double %addtmp, %addtmp1
ret double %multmp
}
121 </pre>
122 </div>
124 <p>In this case, the LHS and RHS of the multiplication are the same value. We'd
125 really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
126 of computing "<tt>x+3</tt>" twice.</p>
128 <p>Unfortunately, no amount of local analysis will be able to detect and correct
129 this. This requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
131 to delete the redundant add instruction. Fortunately, LLVM provides a broad
132 range of optimizations that you can use, in the form of "passes".</p>
134 </div>
136 <!-- *********************************************************************** -->
137 <div class="doc_section"><a name="optimizerpasses">LLVM Optimization
138 Passes</a></div>
139 <!-- *********************************************************************** -->
141 <div class="doc_text">
143 <p>LLVM provides many optimization passes, which do many different sorts of
144 things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
145 to the mistaken notion that one set of optimizations is right for all languages
146 and for all situations. LLVM allows a compiler implementor to make complete
147 decisions about what optimizations to use, in which order, and in what
148 situation.</p>
<p>As a concrete example, LLVM supports both "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program), and
"per-function" passes, which operate on a single
function at a time without looking at other functions. For more information
155 on passes and how they are run, see the <a href="../WritingAnLLVMPass.html">How
156 to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
157 Passes</a>.</p>
159 <p>For Kaleidoscope, we are currently generating functions on the fly, one at
160 a time, as the user types them in. We aren't shooting for the ultimate
161 optimization experience in this setting, but we also want to catch the easy and
162 quick stuff where possible. As such, we will choose to run a few per-function
163 optimizations as the user types the function in. If we wanted to make a "static
164 Kaleidoscope compiler", we would use exactly the code we have now, except that
165 we would defer running the optimizer until the entire file has been parsed.</p>
167 <p>In order to get per-function optimizations going, we need to set up a
168 <a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
169 organize the LLVM optimizations that we want to run. Once we have that, we can
170 add a set of optimizations to run. The code looks like this:</p>
172 <div class="doc_code">
173 <pre>
174 ExistingModuleProvider OurModuleProvider(TheModule);
175 FunctionPassManager OurFPM(&amp;OurModuleProvider);
177 // Set up the optimizer pipeline. Start with registering info about how the
178 // target lays out data structures.
179 OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
180 // Do simple "peephole" optimizations and bit-twiddling optzns.
181 OurFPM.add(createInstructionCombiningPass());
182 // Reassociate expressions.
183 OurFPM.add(createReassociatePass());
184 // Eliminate Common SubExpressions.
185 OurFPM.add(createGVNPass());
186 // Simplify the control flow graph (deleting unreachable blocks, etc).
187 OurFPM.add(createCFGSimplificationPass());
189 // Set the global so the code gen can use this.
190 TheFPM = &amp;OurFPM;
192 // Run the main "interpreter loop" now.
193 MainLoop();
194 </pre>
195 </div>
197 <p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
198 <tt>FunctionPassManager</tt>. The former is basically a wrapper around our
199 <tt>Module</tt> that the PassManager requires. It provides certain flexibility
200 that we're not going to take advantage of here, so I won't dive into any details
201 about it.</p>
<p>The meat of the matter here is the definition of "<tt>OurFPM</tt>". It
requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
to construct itself. Once it is set up, we use a series of "add" calls to add
a bunch of LLVM passes. The first pass is basically boilerplate: it adds a pass
so that later optimizations know how the data structures in the program are
laid out. The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
which we will get to in the next section.</p>
211 <p>In this case, we choose to add 4 optimization passes. The passes we chose
212 here are a pretty standard set of "cleanup" optimizations that are useful for
213 a wide variety of code. I won't delve into what they do but, believe me,
214 they are a good starting place :).</p>
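<p>If you later want a heavier pipeline, additional passes are appended in
exactly the same way. As a purely illustrative sketch (this chapter's pipeline
does not need it), the mem2reg pass could be tacked on like this:</p>

<div class="doc_code">
<pre>
// Promote stack slots to SSA registers (illustrative; we don't emit allocas yet).
OurFPM.add(createPromoteMemoryToRegisterPass());
</pre>
</div>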
216 <p>Once the PassManager is set up, we need to make use of it. We do this by
217 running it after our newly created function is constructed (in
218 <tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
220 <div class="doc_code">
221 <pre>
222 if (Value *RetVal = Body->Codegen()) {
223 // Finish off the function.
224 Builder.CreateRet(RetVal);
226 // Validate the generated code, checking for consistency.
227 verifyFunction(*TheFunction);
229 <b>// Optimize the function.
230 TheFPM-&gt;run(*TheFunction);</b>
  return TheFunction;
}
234 </pre>
235 </div>
237 <p>As you can see, this is pretty straightforward. The
238 <tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
239 improving (hopefully) its body. With this in place, we can try our test above
240 again:</p>
242 <div class="doc_code">
243 <pre>
244 ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
245 ready> Read function definition:
246 define double @test(double %x) {
247 entry:
248 %addtmp = add double %x, 3.000000e+00
249 %multmp = mul double %addtmp, %addtmp
ret double %multmp
}
252 </pre>
253 </div>
255 <p>As expected, we now get our nicely optimized code, saving a floating point
256 add instruction from every execution of this function.</p>
258 <p>LLVM provides a wide variety of optimizations that can be used in certain
259 circumstances. Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete. Another good source of
ideas is to look at the passes that <tt>llvm-gcc</tt> or
<tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
263 experiment with passes from the command line, so you can see if they do
264 anything.</p>
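<p>For example, a quick (hypothetical) session that runs the same four passes we
added above over an IR file from the command line might look like this:</p>

<div class="doc_code">
<pre>
# test.ll is a made-up file name; adjust to taste.
llvm-as &lt; test.ll | opt -instcombine -reassociate -gvn -simplifycfg | llvm-dis
</pre>
</div>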
<p>Now that we have reasonable code coming out of our front-end, let's talk about
267 executing it!</p>
269 </div>
271 <!-- *********************************************************************** -->
272 <div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
273 <!-- *********************************************************************** -->
275 <div class="doc_text">
277 <p>Code that is available in LLVM IR can have a wide variety of tools
278 applied to it. For example, you can run optimizations on it (as we did above),
279 you can dump it out in textual or binary forms, you can compile the code to an
280 assembly file (.s) for some target, or you can JIT compile it. The nice thing
281 about the LLVM IR representation is that it is the "common currency" between
282 many different parts of the compiler.
283 </p>
285 <p>In this section, we'll add JIT compiler support to our interpreter. The
286 basic idea that we want for Kaleidoscope is to have the user enter function
287 bodies as they do now, but immediately evaluate the top-level expressions they
288 type in. For example, if they type in "1 + 2;", we should evaluate and print
289 out 3. If they define a function, they should be able to call it from the
290 command line.</p>
292 <p>In order to do this, we first declare and initialize the JIT. This is done
293 by adding a global variable and a call in <tt>main</tt>:</p>
295 <div class="doc_code">
296 <pre>
297 <b>static ExecutionEngine *TheExecutionEngine;</b>
299 int main() {
301 <b>// Create the JIT.
302 TheExecutionEngine = ExecutionEngine::create(TheModule);</b>
305 </pre>
306 </div>
308 <p>This creates an abstract "Execution Engine" which can be either a JIT
309 compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
310 for you if one is available for your platform, otherwise it will fall back to
311 the interpreter.</p>
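<p>If you ever need to force a particular engine, <tt>ExecutionEngine::create</tt>
also has an overload that takes a <tt>ModuleProvider</tt> and a flag. The exact
signature varies between LLVM releases, so treat this as a sketch and check
<tt>ExecutionEngine.h</tt>:</p>

<div class="doc_code">
<pre>
// Sketch: explicitly request the interpreter instead of the JIT.
ExecutionEngine *EE =
  ExecutionEngine::create(&amp;OurModuleProvider, /*ForceInterpreter=*/true);
</pre>
</div>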
313 <p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
314 There are a variety of APIs that are useful, but the simplest one is the
315 "<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
316 specified LLVM Function and returns a function pointer to the generated machine
317 code. In our case, this means that we can change the code that parses a
318 top-level expression to look like this:</p>
320 <div class="doc_code">
321 <pre>
322 static void HandleTopLevelExpression() {
323 // Evaluate a top level expression into an anonymous function.
324 if (FunctionAST *F = ParseTopLevelExpr()) {
325 if (Function *LF = F-&gt;Codegen()) {
326 LF->dump(); // Dump the function for exposition purposes.
328 <b>// JIT the function, returning a function pointer.
329 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
331 // Cast it to the right type (takes no arguments, returns a double) so we
332 // can call it as a native function.
333 double (*FP)() = (double (*)())FPtr;
334 fprintf(stderr, "Evaluated to %f\n", FP());</b>
336 </pre>
337 </div>
339 <p>Recall that we compile top-level expressions into a self-contained LLVM
340 function that takes no arguments and returns the computed double. Because the
341 LLVM JIT compiler matches the native platform ABI, this means that you can just
342 cast the result pointer to a function pointer of that type and call it directly.
This means that there is no difference between JIT compiled code and native machine
344 code that is statically linked into your application.</p>
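<p>The same trick works for other signatures too. As a sketch (the
<tt>SomeUnaryFn</tt> variable below is made up for illustration), a JIT'd
function of type <tt>double(double)</tt> would be called like this:</p>

<div class="doc_code">
<pre>
// SomeUnaryFn is assumed to be a Function* whose type is double(double).
void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(SomeUnaryFn);
double (*FP1)(double) = (double (*)(double))FPtr;
fprintf(stderr, "f(4.0) = %f\n", FP1(4.0));
</pre>
</div>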
<p>With just these two changes, let's see how Kaleidoscope works now!</p>
348 <div class="doc_code">
349 <pre>
350 ready&gt; <b>4+5;</b>
351 define double @""() {
352 entry:
ret double 9.000000e+00
}
356 <em>Evaluated to 9.000000</em>
357 </pre>
358 </div>
<p>Well, this looks like it is basically working. The dump of the function
361 shows the "no argument function that always returns double" that we synthesize
362 for each top level expression that is typed in. This demonstrates very basic
363 functionality, but can we do more?</p>
365 <div class="doc_code">
366 <pre>
367 ready&gt; <b>def testfunc(x y) x + y*2; </b>
368 Read function definition:
369 define double @testfunc(double %x, double %y) {
370 entry:
371 %multmp = mul double %y, 2.000000e+00
372 %addtmp = add double %multmp, %x
ret double %addtmp
}
376 ready&gt; <b>testfunc(4, 10);</b>
377 define double @""() {
378 entry:
379 %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
ret double %calltmp
}
383 <em>Evaluated to 24.000000</em>
384 </pre>
385 </div>
<p>This illustrates that we can now call user code, but there is something a bit
subtle going on here. Note that we only invoked the JIT on the anonymous functions
that <em>call testfunc</em>, but we never invoked it on
<em>testfunc</em> itself.</p>
392 <p>What actually happened here is that the anonymous function was
393 JIT'd when requested. When the Kaleidoscope app calls through the function
394 pointer that is returned, the anonymous function starts executing. It ends up
395 making the call to the "testfunc" function, and ends up in a stub that invokes
396 the JIT, lazily, on testfunc. Once the JIT finishes lazily compiling testfunc,
397 it returns and the code re-executes the call.</p>
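<p>One consequence of this design is that the machine code for a function can be
managed separately from its IR. As a hedged sketch (check
<tt>ExecutionEngine.h</tt> for the exact interfaces in your LLVM version):</p>

<div class="doc_code">
<pre>
// Throw away the machine code the JIT produced for LF; the IR stays around.
TheExecutionEngine-&gt;freeMachineCodeForFunction(LF);
// Re-JIT LF (e.g. after its IR was updated) and patch up existing callers.
void *NewPtr = TheExecutionEngine-&gt;recompileAndRelinkFunction(LF);
</pre>
</div>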
399 <p>In summary, the JIT will lazily JIT code, on the fly, as it is needed. The
400 JIT provides a number of other more advanced interfaces for things like freeing
401 allocated machine code, rejit'ing functions to update them, etc. However, even
402 with this simple code, we get some surprisingly powerful capabilities - check
this out (I removed the dump of the anonymous functions; you should get the idea
404 by now :) :</p>
406 <div class="doc_code">
407 <pre>
408 ready&gt; <b>extern sin(x);</b>
409 Read extern:
410 declare double @sin(double)
412 ready&gt; <b>extern cos(x);</b>
413 Read extern:
414 declare double @cos(double)
416 ready&gt; <b>sin(1.0);</b>
417 <em>Evaluated to 0.841471</em>
419 ready&gt; <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
420 Read function definition:
421 define double @foo(double %x) {
422 entry:
423 %calltmp = call double @sin( double %x )
424 %multmp = mul double %calltmp, %calltmp
425 %calltmp2 = call double @cos( double %x )
426 %multmp4 = mul double %calltmp2, %calltmp2
427 %addtmp = add double %multmp, %multmp4
ret double %addtmp
}
431 ready&gt; <b>foo(4.0);</b>
432 <em>Evaluated to 1.000000</em>
433 </pre>
434 </div>
436 <p>Whoa, how does the JIT know about sin and cos? The answer is surprisingly
437 simple: in this
438 example, the JIT started execution of a function and got to a function call. It
439 realized that the function was not yet JIT compiled and invoked the standard set
440 of routines to resolve the function. In this case, there is no body defined
441 for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on the
442 Kaleidoscope process itself.
443 Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
444 patches up calls in the module to call the libm version of <tt>sin</tt>
445 directly.</p>
447 <p>The LLVM JIT provides a number of interfaces (look in the
448 <tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
449 resolved. It allows you to establish explicit mappings between IR objects and
450 addresses (useful for LLVM global variables that you want to map to static
451 tables, for example), allows you to dynamically decide on the fly based on the
452 function name, and even allows you to have the JIT abort itself if any lazy
453 compilation is attempted.</p>
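<p>As a concrete sketch of the first of these interfaces (the <tt>mysin</tt>
function and the <tt>SinF</tt> variable are made up for illustration), mapping an
IR declaration to an address you control looks roughly like this:</p>

<div class="doc_code">
<pre>
// A replacement for sin that we want the JIT to call instead of libm's.
extern "C" double mysin(double X) { return 0.0; /* ... */ }

// SinF is assumed to be the Function* for "declare double @sin(double)".
TheExecutionEngine-&gt;addGlobalMapping(SinF, (void*)mysin);
</pre>
</div>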
455 <p>One interesting application of this is that we can now extend the language
456 by writing arbitrary C++ code to implement operations. For example, if we add:
457 </p>
459 <div class="doc_code">
460 <pre>
461 /// putchard - putchar that takes a double and returns 0.
462 extern "C"
463 double putchard(double X) {
464 putchar((char)X);
  return 0;
}
467 </pre>
468 </div>
470 <p>Now we can produce simple output to the console by using things like:
471 "<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
472 the console (120 is the ASCII code for 'x'). Similar code could be used to
473 implement file I/O, console input, and many other capabilities in
474 Kaleidoscope.</p>
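<p>As one more illustration of the same idea (not part of this chapter's code
listing), a helper that prints an entire double is just as easy to expose:</p>

<div class="doc_code">
<pre>
/// printd - printf that takes a double, prints it with a newline, returns 0.
/// Use it as "extern printd(x); printd(123);".
extern "C"
double printd(double X) {
  printf("%f\n", X);
  return 0;
}
</pre>
</div>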
476 <p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
477 this point, we can compile a non-Turing-complete programming language, optimize
478 and JIT compile it in a user-driven way. Next up we'll look into <a
479 href="LangImpl5.html">extending the language with control flow constructs</a>,
480 tackling some interesting LLVM IR issues along the way.</p>
482 </div>
484 <!-- *********************************************************************** -->
485 <div class="doc_section"><a name="code">Full Code Listing</a></div>
486 <!-- *********************************************************************** -->
488 <div class="doc_text">
<p>Here is the complete code listing for our running example, enhanced with the
492 LLVM JIT and optimizer. To build this example, use:
493 </p>
495 <div class="doc_code">
496 <pre>
497 # Compile
498 g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
499 # Run
500 ./toy
501 </pre>
502 </div>
<p>If you are compiling this on Linux, make sure to add the "-rdynamic" option
506 as well. This makes sure that the external functions are resolved properly
507 at runtime.</p>
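<p>For example, on Linux the compile line above becomes (adjust flags for your
setup):</p>

<div class="doc_code">
<pre>
g++ -g -rdynamic toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
</pre>
</div>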
509 <p>Here is the code:</p>
511 <div class="doc_code">
512 <pre>
513 #include "llvm/DerivedTypes.h"
514 #include "llvm/ExecutionEngine/ExecutionEngine.h"
515 #include "llvm/Module.h"
516 #include "llvm/ModuleProvider.h"
517 #include "llvm/PassManager.h"
518 #include "llvm/Analysis/Verifier.h"
519 #include "llvm/Target/TargetData.h"
520 #include "llvm/Transforms/Scalar.h"
521 #include "llvm/Support/IRBuilder.h"
522 #include &lt;cstdio&gt;
523 #include &lt;string&gt;
524 #include &lt;map&gt;
525 #include &lt;vector&gt;
526 using namespace llvm;
528 //===----------------------------------------------------------------------===//
529 // Lexer
530 //===----------------------------------------------------------------------===//
532 // The lexer returns tokens [0-255] if it is an unknown character, otherwise one
533 // of these for known things.
534 enum Token {
535 tok_eof = -1,
537 // commands
538 tok_def = -2, tok_extern = -3,
540 // primary
541 tok_identifier = -4, tok_number = -5,
544 static std::string IdentifierStr; // Filled in if tok_identifier
545 static double NumVal; // Filled in if tok_number
547 /// gettok - Return the next token from standard input.
548 static int gettok() {
549 static int LastChar = ' ';
551 // Skip any whitespace.
552 while (isspace(LastChar))
553 LastChar = getchar();
555 if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
556 IdentifierStr = LastChar;
557 while (isalnum((LastChar = getchar())))
558 IdentifierStr += LastChar;
560 if (IdentifierStr == "def") return tok_def;
561 if (IdentifierStr == "extern") return tok_extern;
562 return tok_identifier;
565 if (isdigit(LastChar) || LastChar == '.') { // Number: [0-9.]+
566 std::string NumStr;
567 do {
568 NumStr += LastChar;
569 LastChar = getchar();
570 } while (isdigit(LastChar) || LastChar == '.');
572 NumVal = strtod(NumStr.c_str(), 0);
573 return tok_number;
576 if (LastChar == '#') {
577 // Comment until end of line.
578 do LastChar = getchar();
579 while (LastChar != EOF &amp;&amp; LastChar != '\n' &amp;&amp; LastChar != '\r');
581 if (LastChar != EOF)
582 return gettok();
585 // Check for end of file. Don't eat the EOF.
586 if (LastChar == EOF)
587 return tok_eof;
589 // Otherwise, just return the character as its ascii value.
590 int ThisChar = LastChar;
591 LastChar = getchar();
592 return ThisChar;
595 //===----------------------------------------------------------------------===//
596 // Abstract Syntax Tree (aka Parse Tree)
597 //===----------------------------------------------------------------------===//
599 /// ExprAST - Base class for all expression nodes.
600 class ExprAST {
601 public:
602 virtual ~ExprAST() {}
603 virtual Value *Codegen() = 0;
606 /// NumberExprAST - Expression class for numeric literals like "1.0".
607 class NumberExprAST : public ExprAST {
608 double Val;
609 public:
610 NumberExprAST(double val) : Val(val) {}
611 virtual Value *Codegen();
614 /// VariableExprAST - Expression class for referencing a variable, like "a".
615 class VariableExprAST : public ExprAST {
616 std::string Name;
617 public:
618 VariableExprAST(const std::string &amp;name) : Name(name) {}
619 virtual Value *Codegen();
622 /// BinaryExprAST - Expression class for a binary operator.
623 class BinaryExprAST : public ExprAST {
624 char Op;
625 ExprAST *LHS, *RHS;
626 public:
627 BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
628 : Op(op), LHS(lhs), RHS(rhs) {}
629 virtual Value *Codegen();
632 /// CallExprAST - Expression class for function calls.
633 class CallExprAST : public ExprAST {
634 std::string Callee;
635 std::vector&lt;ExprAST*&gt; Args;
636 public:
637 CallExprAST(const std::string &amp;callee, std::vector&lt;ExprAST*&gt; &amp;args)
638 : Callee(callee), Args(args) {}
639 virtual Value *Codegen();
642 /// PrototypeAST - This class represents the "prototype" for a function,
643 /// which captures its argument names as well as if it is an operator.
644 class PrototypeAST {
645 std::string Name;
646 std::vector&lt;std::string&gt; Args;
647 public:
648 PrototypeAST(const std::string &amp;name, const std::vector&lt;std::string&gt; &amp;args)
649 : Name(name), Args(args) {}
651 Function *Codegen();
654 /// FunctionAST - This class represents a function definition itself.
655 class FunctionAST {
656 PrototypeAST *Proto;
657 ExprAST *Body;
658 public:
659 FunctionAST(PrototypeAST *proto, ExprAST *body)
660 : Proto(proto), Body(body) {}
662 Function *Codegen();
665 //===----------------------------------------------------------------------===//
666 // Parser
667 //===----------------------------------------------------------------------===//
669 /// CurTok/getNextToken - Provide a simple token buffer. CurTok is the current
670 /// token the parser it looking at. getNextToken reads another token from the
671 /// lexer and updates CurTok with its results.
672 static int CurTok;
673 static int getNextToken() {
674 return CurTok = gettok();
677 /// BinopPrecedence - This holds the precedence for each binary operator that is
678 /// defined.
679 static std::map&lt;char, int&gt; BinopPrecedence;
681 /// GetTokPrecedence - Get the precedence of the pending binary operator token.
682 static int GetTokPrecedence() {
683 if (!isascii(CurTok))
684 return -1;
686 // Make sure it's a declared binop.
687 int TokPrec = BinopPrecedence[CurTok];
688 if (TokPrec &lt;= 0) return -1;
689 return TokPrec;
692 /// Error* - These are little helper functions for error handling.
693 ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
694 PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
695 FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
697 static ExprAST *ParseExpression();
699 /// identifierexpr
700 /// ::= identifier
701 /// ::= identifier '(' expression* ')'
702 static ExprAST *ParseIdentifierExpr() {
703 std::string IdName = IdentifierStr;
705 getNextToken(); // eat identifier.
707 if (CurTok != '(') // Simple variable ref.
708 return new VariableExprAST(IdName);
710 // Call.
711 getNextToken(); // eat (
712 std::vector&lt;ExprAST*&gt; Args;
713 if (CurTok != ')') {
714 while (1) {
715 ExprAST *Arg = ParseExpression();
716 if (!Arg) return 0;
717 Args.push_back(Arg);
719 if (CurTok == ')') break;
721 if (CurTok != ',')
722 return Error("Expected ')' or ',' in argument list");
723 getNextToken();
727 // Eat the ')'.
728 getNextToken();
730 return new CallExprAST(IdName, Args);
733 /// numberexpr ::= number
734 static ExprAST *ParseNumberExpr() {
735 ExprAST *Result = new NumberExprAST(NumVal);
736 getNextToken(); // consume the number
737 return Result;
740 /// parenexpr ::= '(' expression ')'
741 static ExprAST *ParseParenExpr() {
742 getNextToken(); // eat (.
743 ExprAST *V = ParseExpression();
744 if (!V) return 0;
746 if (CurTok != ')')
747 return Error("expected ')'");
748 getNextToken(); // eat ).
749 return V;
752 /// primary
753 /// ::= identifierexpr
754 /// ::= numberexpr
755 /// ::= parenexpr
756 static ExprAST *ParsePrimary() {
757 switch (CurTok) {
758 default: return Error("unknown token when expecting an expression");
759 case tok_identifier: return ParseIdentifierExpr();
760 case tok_number: return ParseNumberExpr();
761 case '(': return ParseParenExpr();
765 /// binoprhs
766 /// ::= ('+' primary)*
767 static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
768 // If this is a binop, find its precedence.
769 while (1) {
770 int TokPrec = GetTokPrecedence();
772 // If this is a binop that binds at least as tightly as the current binop,
773 // consume it, otherwise we are done.
774 if (TokPrec &lt; ExprPrec)
775 return LHS;
777 // Okay, we know this is a binop.
778 int BinOp = CurTok;
779 getNextToken(); // eat binop
781 // Parse the primary expression after the binary operator.
782 ExprAST *RHS = ParsePrimary();
783 if (!RHS) return 0;
785 // If BinOp binds less tightly with RHS than the operator after RHS, let
786 // the pending operator take RHS as its LHS.
787 int NextPrec = GetTokPrecedence();
788 if (TokPrec &lt; NextPrec) {
789 RHS = ParseBinOpRHS(TokPrec+1, RHS);
790 if (RHS == 0) return 0;
793 // Merge LHS/RHS.
794 LHS = new BinaryExprAST(BinOp, LHS, RHS);
798 /// expression
799 /// ::= primary binoprhs
801 static ExprAST *ParseExpression() {
802 ExprAST *LHS = ParsePrimary();
803 if (!LHS) return 0;
805 return ParseBinOpRHS(0, LHS);
808 /// prototype
809 /// ::= id '(' id* ')'
810 static PrototypeAST *ParsePrototype() {
811 if (CurTok != tok_identifier)
812 return ErrorP("Expected function name in prototype");
814 std::string FnName = IdentifierStr;
815 getNextToken();
817 if (CurTok != '(')
818 return ErrorP("Expected '(' in prototype");
820 std::vector&lt;std::string&gt; ArgNames;
821 while (getNextToken() == tok_identifier)
822 ArgNames.push_back(IdentifierStr);
823 if (CurTok != ')')
824 return ErrorP("Expected ')' in prototype");
826 // success.
827 getNextToken(); // eat ')'.
829 return new PrototypeAST(FnName, ArgNames);
832 /// definition ::= 'def' prototype expression
833 static FunctionAST *ParseDefinition() {
834 getNextToken(); // eat def.
835 PrototypeAST *Proto = ParsePrototype();
836 if (Proto == 0) return 0;
838 if (ExprAST *E = ParseExpression())
839 return new FunctionAST(Proto, E);
840 return 0;
843 /// toplevelexpr ::= expression
844 static FunctionAST *ParseTopLevelExpr() {
845 if (ExprAST *E = ParseExpression()) {
846 // Make an anonymous proto.
847 PrototypeAST *Proto = new PrototypeAST("", std::vector&lt;std::string&gt;());
848 return new FunctionAST(Proto, E);
850 return 0;
853 /// external ::= 'extern' prototype
854 static PrototypeAST *ParseExtern() {
855 getNextToken(); // eat extern.
856 return ParsePrototype();
859 //===----------------------------------------------------------------------===//
860 // Code Generation
861 //===----------------------------------------------------------------------===//
863 static Module *TheModule;
864 static IRBuilder&lt;&gt; Builder;
865 static std::map&lt;std::string, Value*&gt; NamedValues;
866 static FunctionPassManager *TheFPM;
868 Value *ErrorV(const char *Str) { Error(Str); return 0; }
870 Value *NumberExprAST::Codegen() {
871 return ConstantFP::get(APFloat(Val));
874 Value *VariableExprAST::Codegen() {
875 // Look this variable up in the function.
876 Value *V = NamedValues[Name];
877 return V ? V : ErrorV("Unknown variable name");
880 Value *BinaryExprAST::Codegen() {
881 Value *L = LHS-&gt;Codegen();
882 Value *R = RHS-&gt;Codegen();
883 if (L == 0 || R == 0) return 0;
885 switch (Op) {
886 case '+': return Builder.CreateAdd(L, R, "addtmp");
887 case '-': return Builder.CreateSub(L, R, "subtmp");
888 case '*': return Builder.CreateMul(L, R, "multmp");
889 case '&lt;':
890 L = Builder.CreateFCmpULT(L, R, "cmptmp");
891 // Convert bool 0/1 to double 0.0 or 1.0
892 return Builder.CreateUIToFP(L, Type::DoubleTy, "booltmp");
893 default: return ErrorV("invalid binary operator");
897 Value *CallExprAST::Codegen() {
898 // Look up the name in the global module table.
899 Function *CalleeF = TheModule-&gt;getFunction(Callee);
900 if (CalleeF == 0)
901 return ErrorV("Unknown function referenced");
903 // If argument mismatch error.
904 if (CalleeF-&gt;arg_size() != Args.size())
905 return ErrorV("Incorrect # arguments passed");
907 std::vector&lt;Value*&gt; ArgsV;
908 for (unsigned i = 0, e = Args.size(); i != e; ++i) {
909 ArgsV.push_back(Args[i]-&gt;Codegen());
910 if (ArgsV.back() == 0) return 0;
913 return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
916 Function *PrototypeAST::Codegen() {
917 // Make the function type: double(double,double) etc.
918 std::vector&lt;const Type*&gt; Doubles(Args.size(), Type::DoubleTy);
919 FunctionType *FT = FunctionType::get(Type::DoubleTy, Doubles, false);
921 Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);
923 // If F conflicted, there was already something named 'Name'. If it has a
924 // body, don't allow redefinition or reextern.
925 if (F-&gt;getName() != Name) {
926 // Delete the one we just made and get the existing one.
927 F-&gt;eraseFromParent();
928 F = TheModule-&gt;getFunction(Name);
930 // If F already has a body, reject this.
931 if (!F-&gt;empty()) {
932 ErrorF("redefinition of function");
933 return 0;
936 // If F took a different number of args, reject.
937 if (F-&gt;arg_size() != Args.size()) {
938 ErrorF("redefinition of function with different # args");
939 return 0;
943 // Set names for all arguments.
944 unsigned Idx = 0;
945 for (Function::arg_iterator AI = F-&gt;arg_begin(); Idx != Args.size();
946 ++AI, ++Idx) {
947 AI-&gt;setName(Args[Idx]);
949 // Add arguments to variable symbol table.
950 NamedValues[Args[Idx]] = AI;
953 return F;
956 Function *FunctionAST::Codegen() {
957 NamedValues.clear();
959 Function *TheFunction = Proto-&gt;Codegen();
960 if (TheFunction == 0)
961 return 0;
963 // Create a new basic block to start insertion into.
964 BasicBlock *BB = BasicBlock::Create("entry", TheFunction);
965 Builder.SetInsertPoint(BB);
967 if (Value *RetVal = Body-&gt;Codegen()) {
968 // Finish off the function.
969 Builder.CreateRet(RetVal);
971 // Validate the generated code, checking for consistency.
972 verifyFunction(*TheFunction);
974 // Optimize the function.
975 TheFPM-&gt;run(*TheFunction);
977 return TheFunction;
980 // Error reading body, remove function.
981 TheFunction-&gt;eraseFromParent();
982 return 0;
985 //===----------------------------------------------------------------------===//
986 // Top-Level parsing and JIT Driver
987 //===----------------------------------------------------------------------===//
989 static ExecutionEngine *TheExecutionEngine;
991 static void HandleDefinition() {
992 if (FunctionAST *F = ParseDefinition()) {
993 if (Function *LF = F-&gt;Codegen()) {
994 fprintf(stderr, "Read function definition:");
995 LF-&gt;dump();
997 } else {
998 // Skip token for error recovery.
999 getNextToken();
1003 static void HandleExtern() {
1004 if (PrototypeAST *P = ParseExtern()) {
1005 if (Function *F = P-&gt;Codegen()) {
1006 fprintf(stderr, "Read extern: ");
1007 F-&gt;dump();
1009 } else {
1010 // Skip token for error recovery.
1011 getNextToken();
1015 static void HandleTopLevelExpression() {
1016 // Evaluate a top level expression into an anonymous function.
1017 if (FunctionAST *F = ParseTopLevelExpr()) {
1018 if (Function *LF = F-&gt;Codegen()) {
1019 // JIT the function, returning a function pointer.
1020 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
1022 // Cast it to the right type (takes no arguments, returns a double) so we
1023 // can call it as a native function.
1024 double (*FP)() = (double (*)())FPtr;
1025 fprintf(stderr, "Evaluated to %f\n", FP());
1027 } else {
1028 // Skip token for error recovery.
1029 getNextToken();
1033 /// top ::= definition | external | expression | ';'
1034 static void MainLoop() {
1035 while (1) {
1036 fprintf(stderr, "ready&gt; ");
1037 switch (CurTok) {
1038 case tok_eof: return;
1039 case ';': getNextToken(); break; // ignore top level semicolons.
1040 case tok_def: HandleDefinition(); break;
1041 case tok_extern: HandleExtern(); break;
1042 default: HandleTopLevelExpression(); break;
1049 //===----------------------------------------------------------------------===//
1050 // "Library" functions that can be "extern'd" from user code.
1051 //===----------------------------------------------------------------------===//
1053 /// putchard - putchar that takes a double and returns 0.
1054 extern "C"
1055 double putchard(double X) {
1056 putchar((char)X);
1057 return 0;
1060 //===----------------------------------------------------------------------===//
1061 // Main driver code.
1062 //===----------------------------------------------------------------------===//
1064 int main() {
1065 // Install standard binary operators.
1066 // 1 is lowest precedence.
1067 BinopPrecedence['&lt;'] = 10;
1068 BinopPrecedence['+'] = 20;
1069 BinopPrecedence['-'] = 20;
1070 BinopPrecedence['*'] = 40; // highest.
1072 // Prime the first token.
1073 fprintf(stderr, "ready&gt; ");
1074 getNextToken();
1076 // Make the module, which holds all the code.
1077 TheModule = new Module("my cool jit");
1079 // Create the JIT.
1080 TheExecutionEngine = ExecutionEngine::create(TheModule);
1083 ExistingModuleProvider OurModuleProvider(TheModule);
1084 FunctionPassManager OurFPM(&amp;OurModuleProvider);
1086 // Set up the optimizer pipeline. Start with registering info about how the
1087 // target lays out data structures.
1088 OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
1089 // Do simple "peephole" optimizations and bit-twiddling optzns.
1090 OurFPM.add(createInstructionCombiningPass());
1091 // Reassociate expressions.
1092 OurFPM.add(createReassociatePass());
1093 // Eliminate Common SubExpressions.
1094 OurFPM.add(createGVNPass());
1095 // Simplify the control flow graph (deleting unreachable blocks, etc).
1096 OurFPM.add(createCFGSimplificationPass());
1098 // Set the global so the code gen can use this.
1099 TheFPM = &amp;OurFPM;
1101 // Run the main "interpreter loop" now.
1102 MainLoop();
1104 TheFPM = 0;
1106 // Print out all of the generated code.
1107 TheModule-&gt;dump();
1108 } // Free module provider (and thus the module) and pass manager.
1110 return 0;
1112 </pre>
1113 </div>
1115 <a href="LangImpl5.html">Next: Extending the language: control flow</a>
1116 </div>
1118 <!-- *********************************************************************** -->
1119 <hr>
1120 <address>
1121 <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
1122 src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
1123 <a href="http://validator.w3.org/check/referer"><img
1124 src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
1126 <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
1127 <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
1128 Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $
1129 </address>
1130 </body>
1131 </html>