1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2 "http://www.w3.org/TR/html4/strict.dtd">
4 <html>
5 <head>
6 <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
7 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
8 <meta name="author" content="Chris Lattner">
9 <link rel="stylesheet" href="../llvm.css" type="text/css">
10 </head>
12 <body>
14 <div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>
16 <ul>
17 <li><a href="index.html">Up to Tutorial Index</a></li>
18 <li>Chapter 4
19 <ol>
20 <li><a href="#intro">Chapter 4 Introduction</a></li>
21 <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
22 <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
23 <li><a href="#jit">Adding a JIT Compiler</a></li>
24 <li><a href="#code">Full Code Listing</a></li>
25 </ol>
26 </li>
27 <li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
28 Flow</li>
29 </ul>
31 <div class="doc_author">
32 <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
33 </div>
35 <!-- *********************************************************************** -->
36 <div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
37 <!-- *********************************************************************** -->
39 <div class="doc_text">
41 <p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
42 with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
43 language and added support for generating LLVM IR. This chapter describes
44 two new techniques: adding optimizer support to your language, and adding JIT
45 compiler support. These additions will demonstrate how to get nice, efficient code
46 for the Kaleidoscope language.</p>
48 </div>
50 <!-- *********************************************************************** -->
51 <div class="doc_section"><a name="trivialconstfold">Trivial Constant
52 Folding</a></div>
53 <!-- *********************************************************************** -->
55 <div class="doc_text">
57 <p>
58 Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
59 it does not produce wonderful code. The IRBuilder, however, does give us
60 obvious optimizations when compiling simple code:</p>
62 <div class="doc_code">
63 <pre>
64 ready&gt; <b>def test(x) 1+2+x;</b>
65 Read function definition:
66 define double @test(double %x) {
67 entry:
68 %addtmp = fadd double 3.000000e+00, %x
69 ret double %addtmp
70 }
71 </pre>
72 </div>
74 <p>This code is not a literal transcription of the AST built by parsing the
75 input. That would be:
77 <div class="doc_code">
78 <pre>
79 ready&gt; <b>def test(x) 1+2+x;</b>
80 Read function definition:
81 define double @test(double %x) {
82 entry:
83 %addtmp = fadd double 2.000000e+00, 1.000000e+00
84 %addtmp1 = fadd double %addtmp, %x
85 ret double %addtmp1
86 }
87 </pre>
88 </div>
90 <p>Constant folding, as seen above, is a very common and very important
91 optimization: so much so that many language implementors implement
92 constant folding support in their AST representation.</p>
94 <p>With LLVM, you don't need this support in the AST. Since all calls to build
95 LLVM IR go through the LLVM IR builder, the builder itself checks to see if
96 there is a constant folding opportunity when you call it. If so, it just does
97 the constant fold and returns the constant instead of creating an instruction.</p>
99 <p>Well, that was easy :). In practice, we recommend always using
100 <tt>IRBuilder</tt> when generating code like this. It has no
101 "syntactic overhead" for its use (you don't have to uglify your compiler with
102 constant checks everywhere) and it can dramatically reduce the amount of
103 LLVM IR that is generated in some cases (particularly for languages with a macro
104 preprocessor or that use a lot of constants).</p>
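<p>For instance, here is a minimal sketch of what the folding looks like from the
code generator's point of view (assuming the same <tt>Builder</tt> global used in
our <tt>Codegen</tt> methods; the snippet is purely illustrative and not part of
the tutorial code):</p>

<div class="doc_code">
<pre>
// Both operands are constants, so CreateFAdd folds them on the spot and
// returns a ConstantFP instead of emitting an fadd instruction.
Value *L = ConstantFP::get(getGlobalContext(), APFloat(1.0));
Value *R = ConstantFP::get(getGlobalContext(), APFloat(2.0));
Value *Sum = Builder.CreateFAdd(L, R, "addtmp");
// Sum is the constant 3.0: isa&lt;ConstantFP&gt;(Sum) is true, and nothing was
// appended to the current basic block.
</pre>
</div>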
106 <p>On the other hand, the <tt>IRBuilder</tt> is limited by the fact
107 that it does all of its analysis inline with the code as it is built. If you
108 take a slightly more complex example:</p>
110 <div class="doc_code">
111 <pre>
112 ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
113 ready&gt; Read function definition:
114 define double @test(double %x) {
115 entry:
116 %addtmp = fadd double 3.000000e+00, %x
117 %addtmp1 = fadd double %x, 3.000000e+00
118 %multmp = fmul double %addtmp, %addtmp1
119 ret double %multmp
120 }
121 </pre>
122 </div>
124 <p>In this case, the LHS and RHS of the multiplication are the same value. We'd
125 really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
126 of computing "<tt>x+3</tt>" twice.</p>
128 <p>Unfortunately, no amount of local analysis will be able to detect and correct
129 this. This requires two transformations: reassociation of expressions (to
130 make the adds lexically identical) and Common Subexpression Elimination (CSE)
131 to delete the redundant add instruction. Fortunately, LLVM provides a broad
132 range of optimizations that you can use, in the form of "passes".</p>
134 </div>
136 <!-- *********************************************************************** -->
137 <div class="doc_section"><a name="optimizerpasses">LLVM Optimization
138 Passes</a></div>
139 <!-- *********************************************************************** -->
141 <div class="doc_text">
143 <p>LLVM provides many optimization passes, which do many different sorts of
144 things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
145 to the mistaken notion that one set of optimizations is right for all languages
146 and for all situations. LLVM allows a compiler implementor to make complete
147 decisions about what optimizations to use, in which order, and in what
148 situation.</p>
150 <p>As a concrete example, LLVM supports both "whole module" passes, which look
151 across as large a body of code as they can (often a whole file, but if run
152 at link time, this can be a substantial portion of the whole program). It also
153 supports and includes "per-function" passes which just operate on a single
154 function at a time, without looking at other functions. For more information
155 on passes and how they are run, see the <a href="../WritingAnLLVMPass.html">How
156 to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
157 Passes</a>.</p>
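<p>To make the distinction concrete, here is a hypothetical sketch of a "whole
module" pipeline.  Kaleidoscope does not need this; it is shown only for contrast
with the per-function setup we use below:</p>

<div class="doc_code">
<pre>
// Interprocedural passes such as the inliner look at the entire Module at
// once, so they are added to a PassManager rather than a FunctionPassManager.
PassManager OurMPM;
OurMPM.add(createFunctionInliningPass());  // declared in "llvm/Transforms/IPO.h"
OurMPM.run(*TheModule);                    // runs over every function at once
</pre>
</div>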
159 <p>For Kaleidoscope, we are currently generating functions on the fly, one at
160 a time, as the user types them in. We aren't shooting for the ultimate
161 optimization experience in this setting, but we also want to catch the easy and
162 quick stuff where possible. As such, we will choose to run a few per-function
163 optimizations as the user types the function in. If we wanted to make a "static
164 Kaleidoscope compiler", we would use exactly the code we have now, except that
165 we would defer running the optimizer until the entire file has been parsed.</p>
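<p>In that hypothetical "static compiler" setup, the change is small: keep the
same <tt>FunctionPassManager</tt> described next, but instead of invoking it from
<tt>FunctionAST::Codegen</tt>, sweep over the module once parsing is finished.  A
rough sketch (assuming the <tt>OurFPM</tt> object defined below):</p>

<div class="doc_code">
<pre>
// After the whole file has been parsed and codegen'd, optimize every function
// that has a body in one pass over the module.
for (Module::iterator F = TheModule-&gt;begin(), E = TheModule-&gt;end(); F != E; ++F)
  if (!F-&gt;isDeclaration())
    OurFPM.run(*F);
</pre>
</div>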
167 <p>In order to get per-function optimizations going, we need to set up a
168 <a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
169 organize the LLVM optimizations that we want to run. Once we have that, we can
170 add a set of optimizations to run. The code looks like this:</p>
172 <div class="doc_code">
173 <pre>
174 FunctionPassManager OurFPM(TheModule);
176 // Set up the optimizer pipeline. Start with registering info about how the
177 // target lays out data structures.
178 OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
179 // Provide basic AliasAnalysis support for GVN.
180 OurFPM.add(createBasicAliasAnalysisPass());
181 // Do simple "peephole" optimizations and bit-twiddling optzns.
182 OurFPM.add(createInstructionCombiningPass());
183 // Reassociate expressions.
184 OurFPM.add(createReassociatePass());
185 // Eliminate Common SubExpressions.
186 OurFPM.add(createGVNPass());
187 // Simplify the control flow graph (deleting unreachable blocks, etc).
188 OurFPM.add(createCFGSimplificationPass());
190 OurFPM.doInitialization();
192 // Set the global so the code gen can use this.
193 TheFPM = &amp;OurFPM;
195 // Run the main "interpreter loop" now.
196 MainLoop();
197 </pre>
198 </div>
200 <p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>". It
201 requires a pointer to the <tt>Module</tt> to construct itself. Once it is set
202 up, we use a series of "add" calls to add a bunch of LLVM passes. The first
203 addition is basically boilerplate: it adds a pass so that later optimizations know
204 how the data structures in the program are laid out. The
205 "<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
206 to in the next section.</p>
208 <p>In this case, we choose to add 4 optimization passes. The passes we chose
209 here are a pretty standard set of "cleanup" optimizations that are useful for
210 a wide variety of code. I won't delve into what they do but, believe me,
211 they are a good starting place :).</p>
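<p>The pipeline is also easy to extend.  For example, if the front-end later
starts emitting stack allocations (as it will when we add mutable variables in a
later chapter), one more standard cleanup pass could simply be appended here.
This is a sketch of such an addition, not something this chapter requires:</p>

<div class="doc_code">
<pre>
// Promote allocas to SSA registers ("mem2reg"); declared in
// "llvm/Transforms/Scalar.h" like the other scalar passes used above.
OurFPM.add(createPromoteMemoryToRegisterPass());
</pre>
</div>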
213 <p>Once the PassManager is set up, we need to make use of it. We do this by
214 running it after our newly created function is constructed (in
215 <tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
217 <div class="doc_code">
218 <pre>
219 if (Value *RetVal = Body->Codegen()) {
220 // Finish off the function.
221 Builder.CreateRet(RetVal);
223 // Validate the generated code, checking for consistency.
224 verifyFunction(*TheFunction);
226 <b>// Optimize the function.
227 TheFPM-&gt;run(*TheFunction);</b>
229 return TheFunction;
231 </pre>
232 </div>
234 <p>As you can see, this is pretty straightforward. The
235 <tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
236 improving (hopefully) its body. With this in place, we can try our test above
237 again:</p>
239 <div class="doc_code">
240 <pre>
241 ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
242 ready&gt; Read function definition:
243 define double @test(double %x) {
244 entry:
245 %addtmp = fadd double %x, 3.000000e+00
246 %multmp = fmul double %addtmp, %addtmp
247 ret double %multmp
248 }
249 </pre>
250 </div>
252 <p>As expected, we now get our nicely optimized code, saving a floating point
253 add instruction from every execution of this function.</p>
255 <p>LLVM provides a wide variety of optimizations that can be used in certain
256 circumstances. Some <a href="../Passes.html">documentation about the various
257 passes</a> is available, but it isn't very complete. Another good source of
258 ideas can come from looking at the passes that <tt>llvm-gcc</tt> or
259 <tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
260 experiment with passes from the command line, so you can see if they do
261 anything.</p>
263 <p>Now that we have reasonable code coming out of our front-end, let's talk about
264 executing it!</p>
266 </div>
268 <!-- *********************************************************************** -->
269 <div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
270 <!-- *********************************************************************** -->
272 <div class="doc_text">
274 <p>Code that is available in LLVM IR can have a wide variety of tools
275 applied to it. For example, you can run optimizations on it (as we did above),
276 you can dump it out in textual or binary forms, you can compile the code to an
277 assembly file (.s) for some target, or you can JIT compile it. The nice thing
278 about the LLVM IR representation is that it is the "common currency" between
279 many different parts of the compiler.
280 </p>
282 <p>In this section, we'll add JIT compiler support to our interpreter. The
283 basic idea that we want for Kaleidoscope is to have the user enter function
284 bodies as they do now, but immediately evaluate the top-level expressions they
285 type in. For example, if they type in "1 + 2;", we should evaluate and print
286 out 3. If they define a function, they should be able to call it from the
287 command line.</p>
289 <p>In order to do this, we first declare and initialize the JIT. This is done
290 by adding a global variable and a call in <tt>main</tt>:</p>
292 <div class="doc_code">
293 <pre>
294 <b>static ExecutionEngine *TheExecutionEngine;</b>
296 int main() {
298 <b>// Create the JIT. This takes ownership of the module.
299 TheExecutionEngine = EngineBuilder(TheModule).create();</b>
302 </pre>
303 </div>
305 <p>This creates an abstract "Execution Engine" which can be either a JIT
306 compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
307 for you if one is available for your platform, otherwise it will fall back to
308 the interpreter.</p>
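<p>The <tt>EngineBuilder</tt> also accepts a few optional settings.  The full
listing at the end of this chapter uses <tt>setErrorStr</tt> so that a failure to
create the engine can be reported; here is a sketch that also shows the optional
<tt>setEngineKind</tt> call, which can force one engine or the other:</p>

<div class="doc_code">
<pre>
std::string ErrStr;
TheExecutionEngine = EngineBuilder(TheModule)
                       .setErrorStr(&amp;ErrStr)            // report creation failures
                       .setEngineKind(EngineKind::JIT)  // or EngineKind::Interpreter
                       .create();
if (!TheExecutionEngine) {
  fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
  exit(1);
}
</pre>
</div>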
310 <p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
311 There are a variety of APIs that are useful, but the simplest one is the
312 "<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
313 specified LLVM Function and returns a function pointer to the generated machine
314 code. In our case, this means that we can change the code that parses a
315 top-level expression to look like this:</p>
317 <div class="doc_code">
318 <pre>
319 static void HandleTopLevelExpression() {
320 // Evaluate a top-level expression into an anonymous function.
321 if (FunctionAST *F = ParseTopLevelExpr()) {
322 if (Function *LF = F-&gt;Codegen()) {
323 LF->dump(); // Dump the function for exposition purposes.
325 <b>// JIT the function, returning a function pointer.
326 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
328 // Cast it to the right type (takes no arguments, returns a double) so we
329 // can call it as a native function.
330 double (*FP)() = (double (*)())(intptr_t)FPtr;
331 fprintf(stderr, "Evaluated to %f\n", FP());</b>
333 </pre>
334 </div>
336 <p>Recall that we compile top-level expressions into a self-contained LLVM
337 function that takes no arguments and returns the computed double. Because the
338 LLVM JIT compiler matches the native platform ABI, this means that you can just
339 cast the result pointer to a function pointer of that type and call it directly.
340 This means there is no difference between JIT-compiled code and native machine
341 code that is statically linked into your application.</p>
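<p>Nothing restricts this trick to zero-argument functions, either.  Since every
Kaleidoscope function takes and returns doubles using the native ABI, a function
of two arguments can be called the same way, as in this sketch (it uses the
<tt>testfunc</tt> defined in the session below and is not part of the tutorial
code):</p>

<div class="doc_code">
<pre>
if (Function *F = TheModule-&gt;getFunction("testfunc")) {
  // JIT compile the function (if needed) and get the machine code address.
  void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(F);
  // Cast to the matching native signature: double(double, double).
  double (*FP)(double, double) = (double (*)(double, double))(intptr_t)FPtr;
  fprintf(stderr, "testfunc(4, 10) = %f\n", FP(4.0, 10.0));
}
</pre>
</div>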
343 <p>With just these two changes, let's see how Kaleidoscope works now!</p>
345 <div class="doc_code">
346 <pre>
347 ready&gt; <b>4+5;</b>
348 define double @""() {
349 entry:
350 ret double 9.000000e+00
351 }
353 <em>Evaluated to 9.000000</em>
354 </pre>
355 </div>
357 <p>Well this looks like it is basically working. The dump of the function
358 shows the "no argument function that always returns double" that we synthesize
359 for each top-level expression that is typed in. This demonstrates very basic
360 functionality, but can we do more?</p>
362 <div class="doc_code">
363 <pre>
364 ready&gt; <b>def testfunc(x y) x + y*2; </b>
365 Read function definition:
366 define double @testfunc(double %x, double %y) {
367 entry:
368 %multmp = fmul double %y, 2.000000e+00
369 %addtmp = fadd double %multmp, %x
370 ret double %addtmp
371 }
373 ready&gt; <b>testfunc(4, 10);</b>
374 define double @""() {
375 entry:
376 %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
377 ret double %calltmp
378 }
380 <em>Evaluated to 24.000000</em>
381 </pre>
382 </div>
384 <p>This illustrates that we can now call user code, but there is something a bit
385 subtle going on here. Note that we only invoke the JIT on the anonymous
386 functions that <em>call testfunc</em>, but we never invoked it
387 on <em>testfunc</em> itself. What actually happened here is that the JIT
388 scanned for all non-JIT'd functions transitively called from the anonymous
389 function and compiled all of them before returning
390 from <tt>getPointerToFunction()</tt>.</p>
392 <p>The JIT provides a number of other more advanced interfaces for things like
393 freeing allocated machine code, rejit'ing functions to update them, etc.
394 However, even with this simple code, we get some surprisingly powerful
395 capabilities - check this out (I removed the dump of the anonymous functions,
396 you should get the idea by now :) :</p>
398 <div class="doc_code">
399 <pre>
400 ready&gt; <b>extern sin(x);</b>
401 Read extern:
402 declare double @sin(double)
404 ready&gt; <b>extern cos(x);</b>
405 Read extern:
406 declare double @cos(double)
408 ready&gt; <b>sin(1.0);</b>
409 <em>Evaluated to 0.841471</em>
411 ready&gt; <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
412 Read function definition:
413 define double @foo(double %x) {
414 entry:
415 %calltmp = call double @sin(double %x)
416 %multmp = fmul double %calltmp, %calltmp
417 %calltmp2 = call double @cos(double %x)
418 %multmp4 = fmul double %calltmp2, %calltmp2
419 %addtmp = fadd double %multmp, %multmp4
420 ret double %addtmp
421 }
423 ready&gt; <b>foo(4.0);</b>
424 <em>Evaluated to 1.000000</em>
425 </pre>
426 </div>
428 <p>Whoa, how does the JIT know about sin and cos? The answer is surprisingly
429 simple: in this
430 example, the JIT started execution of a function and got to a function call. It
431 realized that the function was not yet JIT compiled and invoked the standard set
432 of routines to resolve the function. In this case, there is no body defined
433 for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on the
434 Kaleidoscope process itself.
435 Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
436 patches up calls in the module to call the libm version of <tt>sin</tt>
437 directly.</p>
439 <p>The LLVM JIT provides a number of interfaces (look in the
440 <tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
441 resolved. It allows you to establish explicit mappings between IR objects and
442 addresses (useful for LLVM global variables that you want to map to static
443 tables, for example), allows you to dynamically decide on the fly based on the
444 function name, and even allows you to have the JIT compile functions lazily the
445 first time they're called.</p>
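<p>Two of those hooks are worth sketching (both are declared in
<tt>ExecutionEngine.h</tt>).  <tt>addGlobalMapping</tt> pins an IR declaration to
an address we choose, and <tt>InstallLazyFunctionCreator</tt> lets us resolve any
name the JIT cannot find on its own.  <tt>MyLazyResolver</tt> and
<tt>PutchardProto</tt> below are hypothetical names used only for this sketch:</p>

<div class="doc_code">
<pre>
// Called by the JIT for any external function it could not otherwise resolve.
void *MyLazyResolver(const std::string &amp;Name) {
  if (Name == "putchard") return (void*)putchard;
  return 0;  // let the JIT report the usual "unknown function" error
}
...
// Map a specific declaration (PutchardProto stands for the Function* of the
// "extern putchard(x)" prototype) to a specific address...
TheExecutionEngine-&gt;addGlobalMapping(PutchardProto, (void*)putchard);
// ...or decide dynamically, by name, whenever a lookup fails.
TheExecutionEngine-&gt;InstallLazyFunctionCreator(MyLazyResolver);
</pre>
</div>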
447 <p>One interesting application of this is that we can now extend the language
448 by writing arbitrary C++ code to implement operations. For example, if we add:
449 </p>
451 <div class="doc_code">
452 <pre>
453 /// putchard - putchar that takes a double and returns 0.
454 extern "C"
455 double putchard(double X) {
456 putchar((char)X);
457 return 0;
458 }
459 </pre>
460 </div>
462 <p>Now we can produce simple output to the console by using things like:
463 "<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
464 the console (120 is the ASCII code for 'x'). Similar code could be used to
465 implement file I/O, console input, and many other capabilities in
466 Kaleidoscope.</p>
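<p>Following the same pattern, a hypothetical <tt>printd</tt> helper (not part of
this chapter's listing) would give Kaleidoscope a way to print full numeric
values instead of single characters:</p>

<div class="doc_code">
<pre>
/// printd - printf that takes a double, prints it followed by a newline,
/// and returns 0.
extern "C"
double printd(double X) {
  printf("%f\n", X);
  return 0;
}
</pre>
</div>

<p>After an "<tt>extern printd(x);</tt>" declaration, calling
"<tt>printd(123.456);</tt>" from the Kaleidoscope prompt would print the value
and evaluate to 0.</p>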
468 <p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
469 this point, we can compile a non-Turing-complete programming language, optimize
470 and JIT compile it in a user-driven way. Next up we'll look into <a
471 href="LangImpl5.html">extending the language with control flow constructs</a>,
472 tackling some interesting LLVM IR issues along the way.</p>
474 </div>
476 <!-- *********************************************************************** -->
477 <div class="doc_section"><a name="code">Full Code Listing</a></div>
478 <!-- *********************************************************************** -->
480 <div class="doc_text">
483 <p>Here is the complete code listing for our running example, enhanced with the
484 LLVM JIT and optimizer. To build this example, use:
485 </p>
487 <div class="doc_code">
488 <pre>
489 # Compile
490 g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
491 # Run
492 ./toy
493 </pre>
494 </div>
497 <p>If you are compiling this on Linux, make sure to add the "-rdynamic" option
498 as well. This makes sure that the external functions are resolved properly
499 at runtime.</p>
501 <p>Here is the code:</p>
503 <div class="doc_code">
504 <pre>
505 #include "llvm/DerivedTypes.h"
506 #include "llvm/ExecutionEngine/ExecutionEngine.h"
507 #include "llvm/ExecutionEngine/JIT.h"
508 #include "llvm/LLVMContext.h"
509 #include "llvm/Module.h"
510 #include "llvm/PassManager.h"
511 #include "llvm/Analysis/Verifier.h"
512 #include "llvm/Analysis/Passes.h"
513 #include "llvm/Target/TargetData.h"
514 #include "llvm/Target/TargetSelect.h"
515 #include "llvm/Transforms/Scalar.h"
516 #include "llvm/Support/IRBuilder.h"
517 #include &lt;cstdio&gt;
518 #include &lt;string&gt;
519 #include &lt;map&gt;
520 #include &lt;vector&gt;
521 using namespace llvm;
523 //===----------------------------------------------------------------------===//
524 // Lexer
525 //===----------------------------------------------------------------------===//
527 // The lexer returns tokens [0-255] if it is an unknown character, otherwise one
528 // of these for known things.
529 enum Token {
530 tok_eof = -1,
532 // commands
533 tok_def = -2, tok_extern = -3,
535 // primary
536 tok_identifier = -4, tok_number = -5
539 static std::string IdentifierStr; // Filled in if tok_identifier
540 static double NumVal; // Filled in if tok_number
542 /// gettok - Return the next token from standard input.
543 static int gettok() {
544 static int LastChar = ' ';
546 // Skip any whitespace.
547 while (isspace(LastChar))
548 LastChar = getchar();
550 if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
551 IdentifierStr = LastChar;
552 while (isalnum((LastChar = getchar())))
553 IdentifierStr += LastChar;
555 if (IdentifierStr == "def") return tok_def;
556 if (IdentifierStr == "extern") return tok_extern;
557 return tok_identifier;
560 if (isdigit(LastChar) || LastChar == '.') { // Number: [0-9.]+
561 std::string NumStr;
562 do {
563 NumStr += LastChar;
564 LastChar = getchar();
565 } while (isdigit(LastChar) || LastChar == '.');
567 NumVal = strtod(NumStr.c_str(), 0);
568 return tok_number;
571 if (LastChar == '#') {
572 // Comment until end of line.
573 do LastChar = getchar();
574 while (LastChar != EOF &amp;&amp; LastChar != '\n' &amp;&amp; LastChar != '\r');
576 if (LastChar != EOF)
577 return gettok();
580 // Check for end of file. Don't eat the EOF.
581 if (LastChar == EOF)
582 return tok_eof;
584 // Otherwise, just return the character as its ascii value.
585 int ThisChar = LastChar;
586 LastChar = getchar();
587 return ThisChar;
590 //===----------------------------------------------------------------------===//
591 // Abstract Syntax Tree (aka Parse Tree)
592 //===----------------------------------------------------------------------===//
594 /// ExprAST - Base class for all expression nodes.
595 class ExprAST {
596 public:
597 virtual ~ExprAST() {}
598 virtual Value *Codegen() = 0;
601 /// NumberExprAST - Expression class for numeric literals like "1.0".
602 class NumberExprAST : public ExprAST {
603 double Val;
604 public:
605 NumberExprAST(double val) : Val(val) {}
606 virtual Value *Codegen();
609 /// VariableExprAST - Expression class for referencing a variable, like "a".
610 class VariableExprAST : public ExprAST {
611 std::string Name;
612 public:
613 VariableExprAST(const std::string &amp;name) : Name(name) {}
614 virtual Value *Codegen();
617 /// BinaryExprAST - Expression class for a binary operator.
618 class BinaryExprAST : public ExprAST {
619 char Op;
620 ExprAST *LHS, *RHS;
621 public:
622 BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
623 : Op(op), LHS(lhs), RHS(rhs) {}
624 virtual Value *Codegen();
627 /// CallExprAST - Expression class for function calls.
628 class CallExprAST : public ExprAST {
629 std::string Callee;
630 std::vector&lt;ExprAST*&gt; Args;
631 public:
632 CallExprAST(const std::string &amp;callee, std::vector&lt;ExprAST*&gt; &amp;args)
633 : Callee(callee), Args(args) {}
634 virtual Value *Codegen();
637 /// PrototypeAST - This class represents the "prototype" for a function,
638 /// which captures its name, and its argument names (thus implicitly the number
639 /// of arguments the function takes).
640 class PrototypeAST {
641 std::string Name;
642 std::vector&lt;std::string&gt; Args;
643 public:
644 PrototypeAST(const std::string &amp;name, const std::vector&lt;std::string&gt; &amp;args)
645 : Name(name), Args(args) {}
647 Function *Codegen();
650 /// FunctionAST - This class represents a function definition itself.
651 class FunctionAST {
652 PrototypeAST *Proto;
653 ExprAST *Body;
654 public:
655 FunctionAST(PrototypeAST *proto, ExprAST *body)
656 : Proto(proto), Body(body) {}
658 Function *Codegen();
661 //===----------------------------------------------------------------------===//
662 // Parser
663 //===----------------------------------------------------------------------===//
665 /// CurTok/getNextToken - Provide a simple token buffer. CurTok is the current
666 /// token the parser is looking at. getNextToken reads another token from the
667 /// lexer and updates CurTok with its results.
668 static int CurTok;
669 static int getNextToken() {
670 return CurTok = gettok();
673 /// BinopPrecedence - This holds the precedence for each binary operator that is
674 /// defined.
675 static std::map&lt;char, int&gt; BinopPrecedence;
677 /// GetTokPrecedence - Get the precedence of the pending binary operator token.
678 static int GetTokPrecedence() {
679 if (!isascii(CurTok))
680 return -1;
682 // Make sure it's a declared binop.
683 int TokPrec = BinopPrecedence[CurTok];
684 if (TokPrec &lt;= 0) return -1;
685 return TokPrec;
688 /// Error* - These are little helper functions for error handling.
689 ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
690 PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
691 FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
693 static ExprAST *ParseExpression();
695 /// identifierexpr
696 /// ::= identifier
697 /// ::= identifier '(' expression* ')'
698 static ExprAST *ParseIdentifierExpr() {
699 std::string IdName = IdentifierStr;
701 getNextToken(); // eat identifier.
703 if (CurTok != '(') // Simple variable ref.
704 return new VariableExprAST(IdName);
706 // Call.
707 getNextToken(); // eat (
708 std::vector&lt;ExprAST*&gt; Args;
709 if (CurTok != ')') {
710 while (1) {
711 ExprAST *Arg = ParseExpression();
712 if (!Arg) return 0;
713 Args.push_back(Arg);
715 if (CurTok == ')') break;
717 if (CurTok != ',')
718 return Error("Expected ')' or ',' in argument list");
719 getNextToken();
723 // Eat the ')'.
724 getNextToken();
726 return new CallExprAST(IdName, Args);
729 /// numberexpr ::= number
730 static ExprAST *ParseNumberExpr() {
731 ExprAST *Result = new NumberExprAST(NumVal);
732 getNextToken(); // consume the number
733 return Result;
736 /// parenexpr ::= '(' expression ')'
737 static ExprAST *ParseParenExpr() {
738 getNextToken(); // eat (.
739 ExprAST *V = ParseExpression();
740 if (!V) return 0;
742 if (CurTok != ')')
743 return Error("expected ')'");
744 getNextToken(); // eat ).
745 return V;
748 /// primary
749 /// ::= identifierexpr
750 /// ::= numberexpr
751 /// ::= parenexpr
752 static ExprAST *ParsePrimary() {
753 switch (CurTok) {
754 default: return Error("unknown token when expecting an expression");
755 case tok_identifier: return ParseIdentifierExpr();
756 case tok_number: return ParseNumberExpr();
757 case '(': return ParseParenExpr();
761 /// binoprhs
762 /// ::= ('+' primary)*
763 static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
764 // If this is a binop, find its precedence.
765 while (1) {
766 int TokPrec = GetTokPrecedence();
768 // If this is a binop that binds at least as tightly as the current binop,
769 // consume it, otherwise we are done.
770 if (TokPrec &lt; ExprPrec)
771 return LHS;
773 // Okay, we know this is a binop.
774 int BinOp = CurTok;
775 getNextToken(); // eat binop
777 // Parse the primary expression after the binary operator.
778 ExprAST *RHS = ParsePrimary();
779 if (!RHS) return 0;
781 // If BinOp binds less tightly with RHS than the operator after RHS, let
782 // the pending operator take RHS as its LHS.
783 int NextPrec = GetTokPrecedence();
784 if (TokPrec &lt; NextPrec) {
785 RHS = ParseBinOpRHS(TokPrec+1, RHS);
786 if (RHS == 0) return 0;
789 // Merge LHS/RHS.
790 LHS = new BinaryExprAST(BinOp, LHS, RHS);
794 /// expression
795 /// ::= primary binoprhs
797 static ExprAST *ParseExpression() {
798 ExprAST *LHS = ParsePrimary();
799 if (!LHS) return 0;
801 return ParseBinOpRHS(0, LHS);
804 /// prototype
805 /// ::= id '(' id* ')'
806 static PrototypeAST *ParsePrototype() {
807 if (CurTok != tok_identifier)
808 return ErrorP("Expected function name in prototype");
810 std::string FnName = IdentifierStr;
811 getNextToken();
813 if (CurTok != '(')
814 return ErrorP("Expected '(' in prototype");
816 std::vector&lt;std::string&gt; ArgNames;
817 while (getNextToken() == tok_identifier)
818 ArgNames.push_back(IdentifierStr);
819 if (CurTok != ')')
820 return ErrorP("Expected ')' in prototype");
822 // success.
823 getNextToken(); // eat ')'.
825 return new PrototypeAST(FnName, ArgNames);
828 /// definition ::= 'def' prototype expression
829 static FunctionAST *ParseDefinition() {
830 getNextToken(); // eat def.
831 PrototypeAST *Proto = ParsePrototype();
832 if (Proto == 0) return 0;
834 if (ExprAST *E = ParseExpression())
835 return new FunctionAST(Proto, E);
836 return 0;
839 /// toplevelexpr ::= expression
840 static FunctionAST *ParseTopLevelExpr() {
841 if (ExprAST *E = ParseExpression()) {
842 // Make an anonymous proto.
843 PrototypeAST *Proto = new PrototypeAST("", std::vector&lt;std::string&gt;());
844 return new FunctionAST(Proto, E);
846 return 0;
849 /// external ::= 'extern' prototype
850 static PrototypeAST *ParseExtern() {
851 getNextToken(); // eat extern.
852 return ParsePrototype();
855 //===----------------------------------------------------------------------===//
856 // Code Generation
857 //===----------------------------------------------------------------------===//
859 static Module *TheModule;
860 static IRBuilder&lt;&gt; Builder(getGlobalContext());
861 static std::map&lt;std::string, Value*&gt; NamedValues;
862 static FunctionPassManager *TheFPM;
864 Value *ErrorV(const char *Str) { Error(Str); return 0; }
866 Value *NumberExprAST::Codegen() {
867 return ConstantFP::get(getGlobalContext(), APFloat(Val));
870 Value *VariableExprAST::Codegen() {
871 // Look this variable up in the function.
872 Value *V = NamedValues[Name];
873 return V ? V : ErrorV("Unknown variable name");
876 Value *BinaryExprAST::Codegen() {
877 Value *L = LHS-&gt;Codegen();
878 Value *R = RHS-&gt;Codegen();
879 if (L == 0 || R == 0) return 0;
881 switch (Op) {
882 case '+': return Builder.CreateFAdd(L, R, "addtmp");
883 case '-': return Builder.CreateFSub(L, R, "subtmp");
884 case '*': return Builder.CreateFMul(L, R, "multmp");
885 case '&lt;':
886 L = Builder.CreateFCmpULT(L, R, "cmptmp");
887 // Convert bool 0/1 to double 0.0 or 1.0
888 return Builder.CreateUIToFP(L, Type::getDoubleTy(getGlobalContext()),
889 "booltmp");
890 default: return ErrorV("invalid binary operator");
894 Value *CallExprAST::Codegen() {
895 // Look up the name in the global module table.
896 Function *CalleeF = TheModule-&gt;getFunction(Callee);
897 if (CalleeF == 0)
898 return ErrorV("Unknown function referenced");
900 // If argument mismatch error.
901 if (CalleeF-&gt;arg_size() != Args.size())
902 return ErrorV("Incorrect # arguments passed");
904 std::vector&lt;Value*&gt; ArgsV;
905 for (unsigned i = 0, e = Args.size(); i != e; ++i) {
906 ArgsV.push_back(Args[i]-&gt;Codegen());
907 if (ArgsV.back() == 0) return 0;
910 return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
913 Function *PrototypeAST::Codegen() {
914 // Make the function type: double(double,double) etc.
915 std::vector&lt;const Type*&gt; Doubles(Args.size(),
916 Type::getDoubleTy(getGlobalContext()));
917 FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()),
918 Doubles, false);
920 Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);
922 // If F conflicted, there was already something named 'Name'. If it has a
923 // body, don't allow redefinition or reextern.
924 if (F-&gt;getName() != Name) {
925 // Delete the one we just made and get the existing one.
926 F-&gt;eraseFromParent();
927 F = TheModule-&gt;getFunction(Name);
929 // If F already has a body, reject this.
930 if (!F-&gt;empty()) {
931 ErrorF("redefinition of function");
932 return 0;
935 // If F took a different number of args, reject.
936 if (F-&gt;arg_size() != Args.size()) {
937 ErrorF("redefinition of function with different # args");
938 return 0;
942 // Set names for all arguments.
943 unsigned Idx = 0;
944 for (Function::arg_iterator AI = F-&gt;arg_begin(); Idx != Args.size();
945 ++AI, ++Idx) {
946 AI-&gt;setName(Args[Idx]);
948 // Add arguments to variable symbol table.
949 NamedValues[Args[Idx]] = AI;
952 return F;
955 Function *FunctionAST::Codegen() {
956 NamedValues.clear();
958 Function *TheFunction = Proto-&gt;Codegen();
959 if (TheFunction == 0)
960 return 0;
962 // Create a new basic block to start insertion into.
963 BasicBlock *BB = BasicBlock::Create(getGlobalContext(), "entry", TheFunction);
964 Builder.SetInsertPoint(BB);
966 if (Value *RetVal = Body-&gt;Codegen()) {
967 // Finish off the function.
968 Builder.CreateRet(RetVal);
970 // Validate the generated code, checking for consistency.
971 verifyFunction(*TheFunction);
973 // Optimize the function.
974 TheFPM-&gt;run(*TheFunction);
976 return TheFunction;
979 // Error reading body, remove function.
980 TheFunction-&gt;eraseFromParent();
981 return 0;
984 //===----------------------------------------------------------------------===//
985 // Top-Level parsing and JIT Driver
986 //===----------------------------------------------------------------------===//
988 static ExecutionEngine *TheExecutionEngine;
990 static void HandleDefinition() {
991 if (FunctionAST *F = ParseDefinition()) {
992 if (Function *LF = F-&gt;Codegen()) {
993 fprintf(stderr, "Read function definition:");
994 LF-&gt;dump();
996 } else {
997 // Skip token for error recovery.
998 getNextToken();
1002 static void HandleExtern() {
1003 if (PrototypeAST *P = ParseExtern()) {
1004 if (Function *F = P-&gt;Codegen()) {
1005 fprintf(stderr, "Read extern: ");
1006 F-&gt;dump();
1008 } else {
1009 // Skip token for error recovery.
1010 getNextToken();
1014 static void HandleTopLevelExpression() {
1015 // Evaluate a top-level expression into an anonymous function.
1016 if (FunctionAST *F = ParseTopLevelExpr()) {
1017 if (Function *LF = F-&gt;Codegen()) {
1018 // JIT the function, returning a function pointer.
1019 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
1021 // Cast it to the right type (takes no arguments, returns a double) so we
1022 // can call it as a native function.
1023 double (*FP)() = (double (*)())(intptr_t)FPtr;
1024 fprintf(stderr, "Evaluated to %f\n", FP());
1026 } else {
1027 // Skip token for error recovery.
1028 getNextToken();
1032 /// top ::= definition | external | expression | ';'
1033 static void MainLoop() {
1034 while (1) {
1035 fprintf(stderr, "ready&gt; ");
1036 switch (CurTok) {
1037 case tok_eof: return;
1038 case ';': getNextToken(); break; // ignore top-level semicolons.
1039 case tok_def: HandleDefinition(); break;
1040 case tok_extern: HandleExtern(); break;
1041 default: HandleTopLevelExpression(); break;
1046 //===----------------------------------------------------------------------===//
1047 // "Library" functions that can be "extern'd" from user code.
1048 //===----------------------------------------------------------------------===//
1050 /// putchard - putchar that takes a double and returns 0.
1051 extern "C"
1052 double putchard(double X) {
1053 putchar((char)X);
1054 return 0;
1057 //===----------------------------------------------------------------------===//
1058 // Main driver code.
1059 //===----------------------------------------------------------------------===//
1061 int main() {
1062 InitializeNativeTarget();
1063 LLVMContext &amp;Context = getGlobalContext();
1065 // Install standard binary operators.
1066 // 1 is lowest precedence.
1067 BinopPrecedence['&lt;'] = 10;
1068 BinopPrecedence['+'] = 20;
1069 BinopPrecedence['-'] = 20;
1070 BinopPrecedence['*'] = 40; // highest.
1072 // Prime the first token.
1073 fprintf(stderr, "ready&gt; ");
1074 getNextToken();
1076 // Make the module, which holds all the code.
1077 TheModule = new Module("my cool jit", Context);
1079 // Create the JIT. This takes ownership of the module.
1080 std::string ErrStr;
1081 TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&amp;ErrStr).create();
1082 if (!TheExecutionEngine) {
1083 fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
1084 exit(1);
1087 FunctionPassManager OurFPM(TheModule);
1089 // Set up the optimizer pipeline. Start with registering info about how the
1090 // target lays out data structures.
1091 OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
1092 // Provide basic AliasAnalysis support for GVN.
1093 OurFPM.add(createBasicAliasAnalysisPass());
1094 // Do simple "peephole" optimizations and bit-twiddling optzns.
1095 OurFPM.add(createInstructionCombiningPass());
1096 // Reassociate expressions.
1097 OurFPM.add(createReassociatePass());
1098 // Eliminate Common SubExpressions.
1099 OurFPM.add(createGVNPass());
1100 // Simplify the control flow graph (deleting unreachable blocks, etc).
1101 OurFPM.add(createCFGSimplificationPass());
1103 OurFPM.doInitialization();
1105 // Set the global so the code gen can use this.
1106 TheFPM = &amp;OurFPM;
1108 // Run the main "interpreter loop" now.
1109 MainLoop();
1111 TheFPM = 0;
1113 // Print out all of the generated code.
1114 TheModule-&gt;dump();
1116 return 0;
1118 </pre>
1119 </div>
1121 <a href="LangImpl5.html">Next: Extending the language: control flow</a>
1122 </div>
1124 <!-- *********************************************************************** -->
1125 <hr>
1126 <address>
1127 <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
1128 src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
1129 <a href="http://validator.w3.org/check/referer"><img
1130 src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
1132 <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
1133 <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
1134 Last modified: $Date$
1135 </address>
1136 </body>
1137 </html>