<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">

<html>
<head>
  <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <meta name="author" content="Chris Lattner">
  <link rel="stylesheet" href="../llvm.css" type="text/css">
</head>
<body>

<div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>
<ul>
<li><a href="index.html">Up to Tutorial Index</a></li>
<li>Chapter 4
  <ol>
    <li><a href="#intro">Chapter 4 Introduction</a></li>
    <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
    <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
    <li><a href="#jit">Adding a JIT Compiler</a></li>
    <li><a href="#code">Full Code Listing</a></li>
  </ol>
</li>
<li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control Flow</li>
</ul>

<div class="doc_author">
  <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
</div>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
language and added support for generating LLVM IR. This chapter describes
two new techniques: adding optimizer support to your language, and adding JIT
compiler support. These additions will demonstrate how to get nice, efficient code
for the Kaleidoscope language.</p>

</div>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="trivialconstfold">Trivial Constant Folding</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>
Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
it does not produce wonderful code. The IRBuilder, however, does give us
obvious optimizations when compiling simple code:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 3.000000e+00, %x
        ret double %addtmp
}
</pre>
</div>
<p>This code is not a literal transcription of the AST built by parsing the
input. That would be:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 2.000000e+00, 1.000000e+00
        %addtmp1 = fadd double %addtmp, %x
        ret double %addtmp1
}
</pre>
</div>
<p>Constant folding, as seen above, is a very common and very important
optimization: so much so that many language implementors implement constant
folding support in their AST representation.</p>
<p>With LLVM, you don't need this support in the AST. Since all calls to build
LLVM IR go through the LLVM IR builder, the builder itself checks to see if
there is a constant folding opportunity when you call it. If so, it just does
the constant fold and returns the constant instead of creating an instruction.</p>
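<p>To make this concrete, here is a minimal sketch (not part of the tutorial's
code) of what that folding looks like from the caller's side. It assumes the
same <tt>Builder</tt> global and LLVM APIs used throughout this chapter;
<tt>ArgX</tt> is a made-up placeholder for a function argument:</p>

<div class="doc_code">
<pre>
// Constant + constant: CreateFAdd folds immediately and returns a ConstantFP
// with the value 3.0 -- no 'fadd' instruction is emitted.
Value *L = ConstantFP::get(getGlobalContext(), APFloat(1.0));
Value *R = ConstantFP::get(getGlobalContext(), APFloat(2.0));
Value *Sum = Builder.CreateFAdd(L, R, "addtmp");   // Sum is a Constant, not an Instruction.

// Constant + non-constant: nothing to fold, so a real 'fadd' instruction is
// created and inserted at the Builder's current position.
Value *Mixed = Builder.CreateFAdd(Sum, ArgX, "addtmp");
</pre>
</div>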
<p>Well, that was easy :). In practice, we recommend always using
<tt>IRBuilder</tt> when generating code like this. It has no
"syntactic overhead" for its use (you don't have to uglify your compiler with
constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a macro
preprocessor or that use a lot of constants).</p>
<p>On the other hand, the <tt>IRBuilder</tt> is limited by the fact
that it does all of its analysis inline with the code as it is built. If you
take a slightly more complex example:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready> Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double 3.000000e+00, %x
        %addtmp1 = fadd double %x, 3.000000e+00
        %multmp = fmul double %addtmp, %addtmp1
        ret double %multmp
}
</pre>
</div>
<p>In this case, the LHS and RHS of the multiplication are the same value. We'd
really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
of computing "<tt>x+3</tt>" twice.</p>
<p>Unfortunately, no amount of local analysis will be able to detect and correct
this. This requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
to delete the redundant add instruction. Fortunately, LLVM provides a broad
range of optimizations that you can use, in the form of "passes".</p>

</div>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="optimizerpasses">LLVM Optimization Passes</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">
<p>LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
to the mistaken notion that one set of optimizations is right for all languages
and for all situations. LLVM allows a compiler implementor to make complete
decisions about what optimizations to use, in which order, and in what
situation.</p>
<p>As a concrete example, LLVM supports both "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program). It also
supports and includes "per-function" passes which just operate on a single
function at a time, without looking at other functions. For more information
on passes and how they are run, see the <a href="../WritingAnLLVMPass.html">How
to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
Passes</a>.</p>
<p>For Kaleidoscope, we are currently generating functions on the fly, one at
a time, as the user types them in. We aren't shooting for the ultimate
optimization experience in this setting, but we also want to catch the easy and
quick stuff where possible. As such, we will choose to run a few per-function
optimizations as the user types the function in. If we wanted to make a "static
Kaleidoscope compiler", we would use exactly the code we have now, except that
we would defer running the optimizer until the entire file has been parsed.</p>
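<p>For illustration only, such a "static compiler" pipeline might look roughly
like the sketch below: a whole-module <tt>PassManager</tt> run once after the
whole file is parsed, instead of the per-function pass manager we set up next.
This is not code from the tutorial, just a plausible variant built from the same
pass-creation functions:</p>

<div class="doc_code">
<pre>
// Hypothetical "optimize at the end" setup for a static compiler.
PassManager OurPM;
// (A static compiler would also add a TargetData for its target here.)
OurPM.add(createInstructionCombiningPass());
OurPM.add(createReassociatePass());
OurPM.add(createGVNPass());
OurPM.add(createCFGSimplificationPass());

// ... parse and codegen the entire file first, then:
OurPM.run(*TheModule);
</pre>
</div>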
<p>In order to get per-function optimizations going, we need to set up a
<a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
organize the LLVM optimizations that we want to run. Once we have that, we can
add a set of optimizations to run. The code looks like this:</p>
<div class="doc_code">
<pre>
  FunctionPassManager OurFPM(TheModule);

  // Set up the optimizer pipeline.  Start with registering info about how the
  // target lays out data structures.
  OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
  // Provide basic AliasAnalysis support for GVN.
  OurFPM.add(createBasicAliasAnalysisPass());
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  OurFPM.add(createInstructionCombiningPass());
  // Reassociate expressions.
  OurFPM.add(createReassociatePass());
  // Eliminate Common SubExpressions.
  OurFPM.add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  OurFPM.add(createCFGSimplificationPass());

  OurFPM.doInitialization();

  // Set the global so the code gen can use this.
  TheFPM = &OurFPM;

  // Run the main "interpreter loop" now.
  MainLoop();
</pre>
</div>
<p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>". It
requires a pointer to the <tt>Module</tt> to construct itself. Once it is set
up, we use a series of "add" calls to add a bunch of LLVM passes. The first
pass is basically boilerplate: it adds a pass so that later optimizations know
how the data structures in the program are laid out. The
"<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
to in the next section.</p>
<p>In this case, we choose to add 4 optimization passes. The passes we chose
here are a pretty standard set of "cleanup" optimizations that are useful for
a wide variety of code. I won't delve into what they do but, believe me,
they are a good starting place :).</p>
<p>Once the PassManager is set up, we need to make use of it. We do this by
running it after our newly created function is constructed (in
<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
<div class="doc_code">
<pre>
  if (Value *RetVal = Body->Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);

    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);

    <b>// Optimize the function.
    TheFPM->run(*TheFunction);</b>

    return TheFunction;
  }
</pre>
</div>
<p>As you can see, this is pretty straightforward. The
<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
improving (hopefully) its body. With this in place, we can try our test above
again:</p>
<div class="doc_code">
<pre>
ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready> Read function definition:
define double @test(double %x) {
entry:
        %addtmp = fadd double %x, 3.000000e+00
        %multmp = fmul double %addtmp, %addtmp
        ret double %multmp
}
</pre>
</div>
<p>As expected, we now get our nicely optimized code, saving a floating point
add instruction from every execution of this function.</p>
<p>LLVM provides a wide variety of optimizations that can be used in certain
circumstances. Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete. Another good source of
ideas can come from looking at the passes that <tt>llvm-gcc</tt> or
<tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.</p>

<p>Now that we have reasonable code coming out of our front-end, let's talk about
executing it!</p>

</div>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">
<p>Code that is available in LLVM IR can have a wide variety of tools
applied to it. For example, you can run optimizations on it (as we did above),
you can dump it out in textual or binary forms, you can compile the code to an
assembly file (.s) for some target, or you can JIT compile it. The nice thing
about the LLVM IR representation is that it is the "common currency" between
many different parts of the compiler.</p>
<p>In this section, we'll add JIT compiler support to our interpreter. The
basic idea that we want for Kaleidoscope is to have the user enter function
bodies as they do now, but immediately evaluate the top-level expressions they
type in. For example, if they type in "1 + 2;", we should evaluate and print
out 3. If they define a function, they should be able to call it from the
command line.</p>
<p>In order to do this, we first declare and initialize the JIT. This is done
by adding a global variable and a call in <tt>main</tt>:</p>
<div class="doc_code">
<pre>
<b>static ExecutionEngine *TheExecutionEngine;</b>
...
int main() {
  ...
  <b>// Create the JIT.  This takes ownership of the module.
  TheExecutionEngine = EngineBuilder(TheModule).create();</b>
  ...
}
</pre>
</div>
<p>This creates an abstract "Execution Engine" which can be either a JIT
compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
for you if one is available for your platform, otherwise it will fall back to
the interpreter.</p>
<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
There are a variety of APIs that are useful, but the simplest one is the
"<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
specified LLVM Function and returns a function pointer to the generated machine
code. In our case, this means that we can change the code that parses a
top-level expression to look like this:</p>
<div class="doc_code">
<pre>
static void HandleTopLevelExpression() {
  // Evaluate a top-level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F->Codegen()) {
      LF->dump();  // Dump the function for exposition purposes.

      <b>// JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())(intptr_t)FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());</b>
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}
</pre>
</div>
<p>Recall that we compile top-level expressions into a self-contained LLVM
function that takes no arguments and returns the computed double. Because the
LLVM JIT compiler matches the native platform ABI, this means that you can just
cast the result pointer to a function pointer of that type and call it directly.
As a result, there is no difference between JIT compiled code and native machine
code that is statically linked into your application.</p>
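<p>As a hypothetical variant (not something the tutorial needs), the same trick
works for JIT'd functions that do take arguments: cast the returned pointer to
the matching function-pointer type. Here <tt>SomeFn</tt> stands in for an LLVM
<tt>Function*</tt> with signature <tt>double(double)</tt>:</p>

<div class="doc_code">
<pre>
// Sketch: JIT a one-argument function and call it through a typed pointer.
void *FPtr = TheExecutionEngine->getPointerToFunction(SomeFn);
double (*FP1)(double) = (double (*)(double))(intptr_t)FPtr;
fprintf(stderr, "Result: %f\n", FP1(4.0));
</pre>
</div>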
<p>With just these two changes, let's see how Kaleidoscope works now!</p>
<div class="doc_code">
<pre>
ready> <b>4+5;</b>
define double @""() {
entry:
        ret double 9.000000e+00
}

<em>Evaluated to 9.000000</em>
</pre>
</div>
<p>Well this looks like it is basically working. The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top-level expression that is typed in. This demonstrates very basic
functionality, but can we do more?</p>
<div class="doc_code">
<pre>
ready> <b>def testfunc(x y) x + y*2;</b>
Read function definition:
define double @testfunc(double %x, double %y) {
entry:
        %multmp = fmul double %y, 2.000000e+00
        %addtmp = fadd double %multmp, %x
        ret double %addtmp
}

ready> <b>testfunc(4, 10);</b>
define double @""() {
entry:
        %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
        ret double %calltmp
}

<em>Evaluated to 24.000000</em>
</pre>
</div>
<p>This illustrates that we can now call user code, but there is something a bit
subtle going on here. Note that we only invoke the JIT on the anonymous
functions that <em>call testfunc</em>, but we never invoked it
on <em>testfunc</em> itself. What actually happened here is that the JIT
scanned for all non-JIT'd functions transitively called from the anonymous
function and compiled all of them before returning
from <tt>getPointerToFunction()</tt>.</p>
<p>The JIT provides a number of other more advanced interfaces for things like
freeing allocated machine code, re-JITing functions to update them, etc.
However, even with this simple code, we get some surprisingly powerful
capabilities - check this out (I removed the dump of the anonymous functions,
you should get the idea by now :) :</p>
<div class="doc_code">
<pre>
ready> <b>extern sin(x);</b>
Read extern:
declare double @sin(double)

ready> <b>extern cos(x);</b>
Read extern:
declare double @cos(double)

ready> <b>sin(1.0);</b>
<em>Evaluated to 0.841471</em>

ready> <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
Read function definition:
define double @foo(double %x) {
entry:
        %calltmp = call double @sin(double %x)
        %multmp = fmul double %calltmp, %calltmp
        %calltmp2 = call double @cos(double %x)
        %multmp4 = fmul double %calltmp2, %calltmp2
        %addtmp = fadd double %multmp, %multmp4
        ret double %addtmp
}

ready> <b>foo(4.0);</b>
<em>Evaluated to 1.000000</em>
</pre>
</div>
<p>Whoa, how does the JIT know about sin and cos? The answer is surprisingly
simple: in this example, the JIT started execution of a function and got to a
function call. It realized that the function was not yet JIT compiled and
invoked the standard set of routines to resolve the function. In this case,
there is no body defined for the function, so the JIT ended up calling
"<tt>dlsym("sin")</tt>" on the Kaleidoscope process itself. Since
"<tt>sin</tt>" is defined within the JIT's address space, it simply patches up
calls in the module to call the libm version of <tt>sin</tt> directly.</p>
<p>The LLVM JIT provides a number of interfaces (look in the
<tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
resolved. It allows you to establish explicit mappings between IR objects and
addresses (useful for LLVM global variables that you want to map to static
tables, for example), allows you to dynamically decide on the fly based on the
function name, and even allows you to have the JIT compile functions lazily the
first time they're called.</p>
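<p>As a small illustration of the explicit-mapping interface (this is not used
by the tutorial, and the names <tt>myop</tt>/<tt>myop_impl</tt> are made up),
you could resolve a Kaleidoscope extern directly to a C++ function in the host
process, bypassing the <tt>dlsym</tt> lookup entirely:</p>

<div class="doc_code">
<pre>
// Hypothetical: after "extern myop(x);" has been parsed and codegen'd, bind the
// resulting declaration to a host function with an explicit mapping.
extern "C" double myop_impl(double X) { return X + 1.0; }
...
if (Function *MyOp = TheModule->getFunction("myop"))
  TheExecutionEngine->addGlobalMapping(MyOp, (void*)(intptr_t)myop_impl);
</pre>
</div>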
<p>One interesting application of this is that we can now extend the language
by writing arbitrary C++ code to implement operations. For example, if we add:</p>

<div class="doc_code">
<pre>
/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
</pre>
</div>
<p>Now we can produce simple output to the console by using things like:
"<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
the console (120 is the ASCII code for 'x'). Similar code could be used to
implement file I/O, console input, and many other capabilities in
Kaleidoscope.</p>
<p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
this point, we can compile a non-Turing-complete programming language, optimize
and JIT compile it in a user-driven way. Next up we'll look into <a
href="LangImpl5.html">extending the language with control flow constructs</a>,
tackling some interesting LLVM IR issues along the way.</p>

</div>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="code">Full Code Listing</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">
<p>
Here is the complete code listing for our running example, enhanced with the
LLVM JIT and optimizer. To build this example, use:
</p>
<div class="doc_code">
<pre>
g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
</pre>
</div>
<p>
If you are compiling this on Linux, make sure to add the "-rdynamic" option
as well. This makes sure that the external functions are resolved properly
at runtime.</p>
<p>Here is the code:</p>
<div class="doc_code">
<pre>
#include "llvm/DerivedTypes.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"
#include "llvm/LLVMContext.h"
#include "llvm/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Analysis/Verifier.h"
#include "llvm/Analysis/Passes.h"
#include "llvm/Target/TargetData.h"
#include "llvm/Target/TargetSelect.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Support/IRBuilder.h"
#include <cstdio>
#include <string>
#include <map>
#include <vector>
using namespace llvm;
//===----------------------------------------------------------------------===//
// Lexer
//===----------------------------------------------------------------------===//

// The lexer returns tokens [0-255] if it is an unknown character, otherwise one
// of these for known things.
enum Token {
  tok_eof = -1,

  // commands
  tok_def = -2, tok_extern = -3,

  // primary
  tok_identifier = -4, tok_number = -5
};

static std::string IdentifierStr;  // Filled in if tok_identifier
static double NumVal;              // Filled in if tok_number

/// gettok - Return the next token from standard input.
static int gettok() {
  static int LastChar = ' ';

  // Skip any whitespace.
  while (isspace(LastChar))
    LastChar = getchar();

  if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
    IdentifierStr = LastChar;
    while (isalnum((LastChar = getchar())))
      IdentifierStr += LastChar;

    if (IdentifierStr == "def") return tok_def;
    if (IdentifierStr == "extern") return tok_extern;
    return tok_identifier;
  }

  if (isdigit(LastChar) || LastChar == '.') {   // Number: [0-9.]+
    std::string NumStr;
    do {
      NumStr += LastChar;
      LastChar = getchar();
    } while (isdigit(LastChar) || LastChar == '.');

    NumVal = strtod(NumStr.c_str(), 0);
    return tok_number;
  }

  if (LastChar == '#') {
    // Comment until end of line.
    do LastChar = getchar();
    while (LastChar != EOF && LastChar != '\n' && LastChar != '\r');

    if (LastChar != EOF)
      return gettok();
  }

  // Check for end of file.  Don't eat the EOF.
  if (LastChar == EOF)
    return tok_eof;

  // Otherwise, just return the character as its ascii value.
  int ThisChar = LastChar;
  LastChar = getchar();
  return ThisChar;
}
//===----------------------------------------------------------------------===//
// Abstract Syntax Tree (aka Parse Tree)
//===----------------------------------------------------------------------===//

/// ExprAST - Base class for all expression nodes.
class ExprAST {
public:
  virtual ~ExprAST() {}
  virtual Value *Codegen() = 0;
};

/// NumberExprAST - Expression class for numeric literals like "1.0".
class NumberExprAST : public ExprAST {
  double Val;
public:
  NumberExprAST(double val) : Val(val) {}
  virtual Value *Codegen();
};

/// VariableExprAST - Expression class for referencing a variable, like "a".
class VariableExprAST : public ExprAST {
  std::string Name;
public:
  VariableExprAST(const std::string &name) : Name(name) {}
  virtual Value *Codegen();
};

/// BinaryExprAST - Expression class for a binary operator.
class BinaryExprAST : public ExprAST {
  char Op;
  ExprAST *LHS, *RHS;
public:
  BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
    : Op(op), LHS(lhs), RHS(rhs) {}
  virtual Value *Codegen();
};

/// CallExprAST - Expression class for function calls.
class CallExprAST : public ExprAST {
  std::string Callee;
  std::vector<ExprAST*> Args;
public:
  CallExprAST(const std::string &callee, std::vector<ExprAST*> &args)
    : Callee(callee), Args(args) {}
  virtual Value *Codegen();
};

/// PrototypeAST - This class represents the "prototype" for a function,
/// which captures its name, and its argument names (thus implicitly the number
/// of arguments the function takes).
class PrototypeAST {
  std::string Name;
  std::vector<std::string> Args;
public:
  PrototypeAST(const std::string &name, const std::vector<std::string> &args)
    : Name(name), Args(args) {}

  Function *Codegen();
};

/// FunctionAST - This class represents a function definition itself.
class FunctionAST {
  PrototypeAST *Proto;
  ExprAST *Body;
public:
  FunctionAST(PrototypeAST *proto, ExprAST *body)
    : Proto(proto), Body(body) {}

  Function *Codegen();
};
//===----------------------------------------------------------------------===//
// Parser
//===----------------------------------------------------------------------===//

/// CurTok/getNextToken - Provide a simple token buffer.  CurTok is the current
/// token the parser is looking at.  getNextToken reads another token from the
/// lexer and updates CurTok with its results.
static int CurTok;
static int getNextToken() {
  return CurTok = gettok();
}

/// BinopPrecedence - This holds the precedence for each binary operator that is
/// defined.
static std::map<char, int> BinopPrecedence;

/// GetTokPrecedence - Get the precedence of the pending binary operator token.
static int GetTokPrecedence() {
  if (!isascii(CurTok))
    return -1;

  // Make sure it's a declared binop.
  int TokPrec = BinopPrecedence[CurTok];
  if (TokPrec <= 0) return -1;
  return TokPrec;
}

/// Error* - These are little helper functions for error handling.
ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str); return 0; }
PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }

static ExprAST *ParseExpression();

/// identifierexpr
///   ::= identifier
///   ::= identifier '(' expression* ')'
static ExprAST *ParseIdentifierExpr() {
  std::string IdName = IdentifierStr;

  getNextToken();  // eat identifier.

  if (CurTok != '(') // Simple variable ref.
    return new VariableExprAST(IdName);

  // Call.
  getNextToken();  // eat (
  std::vector<ExprAST*> Args;
  if (CurTok != ')') {
    while (1) {
      ExprAST *Arg = ParseExpression();
      if (!Arg) return 0;
      Args.push_back(Arg);

      if (CurTok == ')') break;

      if (CurTok != ',')
        return Error("Expected ')' or ',' in argument list");
      getNextToken();
    }
  }

  // Eat the ')'.
  getNextToken();

  return new CallExprAST(IdName, Args);
}

/// numberexpr ::= number
static ExprAST *ParseNumberExpr() {
  ExprAST *Result = new NumberExprAST(NumVal);
  getNextToken(); // consume the number
  return Result;
}

/// parenexpr ::= '(' expression ')'
static ExprAST *ParseParenExpr() {
  getNextToken();  // eat (.
  ExprAST *V = ParseExpression();
  if (!V) return 0;

  if (CurTok != ')')
    return Error("expected ')'");
  getNextToken();  // eat ).
  return V;
}

/// primary
///   ::= identifierexpr
///   ::= numberexpr
///   ::= parenexpr
static ExprAST *ParsePrimary() {
  switch (CurTok) {
  default: return Error("unknown token when expecting an expression");
  case tok_identifier: return ParseIdentifierExpr();
  case tok_number:     return ParseNumberExpr();
  case '(':            return ParseParenExpr();
  }
}

/// binoprhs
///   ::= ('+' primary)*
static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
  // If this is a binop, find its precedence.
  while (1) {
    int TokPrec = GetTokPrecedence();

    // If this is a binop that binds at least as tightly as the current binop,
    // consume it, otherwise we are done.
    if (TokPrec < ExprPrec)
      return LHS;

    // Okay, we know this is a binop.
    int BinOp = CurTok;
    getNextToken();  // eat binop

    // Parse the primary expression after the binary operator.
    ExprAST *RHS = ParsePrimary();
    if (!RHS) return 0;

    // If BinOp binds less tightly with RHS than the operator after RHS, let
    // the pending operator take RHS as its LHS.
    int NextPrec = GetTokPrecedence();
    if (TokPrec < NextPrec) {
      RHS = ParseBinOpRHS(TokPrec+1, RHS);
      if (RHS == 0) return 0;
    }

    // Merge LHS/RHS.
    LHS = new BinaryExprAST(BinOp, LHS, RHS);
  }
}

/// expression
///   ::= primary binoprhs
///
static ExprAST *ParseExpression() {
  ExprAST *LHS = ParsePrimary();
  if (!LHS) return 0;

  return ParseBinOpRHS(0, LHS);
}

/// prototype
///   ::= id '(' id* ')'
static PrototypeAST *ParsePrototype() {
  if (CurTok != tok_identifier)
    return ErrorP("Expected function name in prototype");

  std::string FnName = IdentifierStr;
  getNextToken();

  if (CurTok != '(')
    return ErrorP("Expected '(' in prototype");

  std::vector<std::string> ArgNames;
  while (getNextToken() == tok_identifier)
    ArgNames.push_back(IdentifierStr);
  if (CurTok != ')')
    return ErrorP("Expected ')' in prototype");

  // success.
  getNextToken();  // eat ')'.

  return new PrototypeAST(FnName, ArgNames);
}

/// definition ::= 'def' prototype expression
static FunctionAST *ParseDefinition() {
  getNextToken();  // eat def.
  PrototypeAST *Proto = ParsePrototype();
  if (Proto == 0) return 0;

  if (ExprAST *E = ParseExpression())
    return new FunctionAST(Proto, E);
  return 0;
}

/// toplevelexpr ::= expression
static FunctionAST *ParseTopLevelExpr() {
  if (ExprAST *E = ParseExpression()) {
    // Make an anonymous proto.
    PrototypeAST *Proto = new PrototypeAST("", std::vector<std::string>());
    return new FunctionAST(Proto, E);
  }
  return 0;
}

/// external ::= 'extern' prototype
static PrototypeAST *ParseExtern() {
  getNextToken();  // eat extern.
  return ParsePrototype();
}
//===----------------------------------------------------------------------===//
// Code Generation
//===----------------------------------------------------------------------===//

static Module *TheModule;
static IRBuilder<> Builder(getGlobalContext());
static std::map<std::string, Value*> NamedValues;
static FunctionPassManager *TheFPM;

Value *ErrorV(const char *Str) { Error(Str); return 0; }

Value *NumberExprAST::Codegen() {
  return ConstantFP::get(getGlobalContext(), APFloat(Val));
}

Value *VariableExprAST::Codegen() {
  // Look this variable up in the function.
  Value *V = NamedValues[Name];
  return V ? V : ErrorV("Unknown variable name");
}

Value *BinaryExprAST::Codegen() {
  Value *L = LHS->Codegen();
  Value *R = RHS->Codegen();
  if (L == 0 || R == 0) return 0;

  switch (Op) {
  case '+': return Builder.CreateFAdd(L, R, "addtmp");
  case '-': return Builder.CreateFSub(L, R, "subtmp");
  case '*': return Builder.CreateFMul(L, R, "multmp");
  case '<':
    L = Builder.CreateFCmpULT(L, R, "cmptmp");
    // Convert bool 0/1 to double 0.0 or 1.0
    return Builder.CreateUIToFP(L, Type::getDoubleTy(getGlobalContext()),
                                "booltmp");
  default: return ErrorV("invalid binary operator");
  }
}

Value *CallExprAST::Codegen() {
  // Look up the name in the global module table.
  Function *CalleeF = TheModule->getFunction(Callee);
  if (CalleeF == 0)
    return ErrorV("Unknown function referenced");

  // If argument mismatch error.
  if (CalleeF->arg_size() != Args.size())
    return ErrorV("Incorrect # arguments passed");

  std::vector<Value*> ArgsV;
  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
    ArgsV.push_back(Args[i]->Codegen());
    if (ArgsV.back() == 0) return 0;
  }

  return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
}

Function *PrototypeAST::Codegen() {
  // Make the function type:  double(double,double) etc.
  std::vector<const Type*> Doubles(Args.size(),
                                   Type::getDoubleTy(getGlobalContext()));
  FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()),
                                       Doubles, false);

  Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);

  // If F conflicted, there was already something named 'Name'.  If it has a
  // body, don't allow redefinition or reextern.
  if (F->getName() != Name) {
    // Delete the one we just made and get the existing one.
    F->eraseFromParent();
    F = TheModule->getFunction(Name);

    // If F already has a body, reject this.
    if (!F->empty()) {
      ErrorF("redefinition of function");
      return 0;
    }

    // If F took a different number of args, reject.
    if (F->arg_size() != Args.size()) {
      ErrorF("redefinition of function with different # args");
      return 0;
    }
  }

  // Set names for all arguments.
  unsigned Idx = 0;
  for (Function::arg_iterator AI = F->arg_begin(); Idx != Args.size();
       ++AI, ++Idx) {
    AI->setName(Args[Idx]);

    // Add arguments to variable symbol table.
    NamedValues[Args[Idx]] = AI;
  }

  return F;
}

Function *FunctionAST::Codegen() {
  NamedValues.clear();

  Function *TheFunction = Proto->Codegen();
  if (TheFunction == 0)
    return 0;

  // Create a new basic block to start insertion into.
  BasicBlock *BB = BasicBlock::Create(getGlobalContext(), "entry", TheFunction);
  Builder.SetInsertPoint(BB);

  if (Value *RetVal = Body->Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);

    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);

    // Optimize the function.
    TheFPM->run(*TheFunction);

    return TheFunction;
  }

  // Error reading body, remove function.
  TheFunction->eraseFromParent();
  return 0;
}
//===----------------------------------------------------------------------===//
// Top-Level parsing and JIT Driver
//===----------------------------------------------------------------------===//

static ExecutionEngine *TheExecutionEngine;

static void HandleDefinition() {
  if (FunctionAST *F = ParseDefinition()) {
    if (Function *LF = F->Codegen()) {
      fprintf(stderr, "Read function definition:");
      LF->dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

static void HandleExtern() {
  if (PrototypeAST *P = ParseExtern()) {
    if (Function *F = P->Codegen()) {
      fprintf(stderr, "Read extern: ");
      F->dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

static void HandleTopLevelExpression() {
  // Evaluate a top-level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F->Codegen()) {
      // JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())(intptr_t)FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

/// top ::= definition | external | expression | ';'
static void MainLoop() {
  while (1) {
    fprintf(stderr, "ready> ");
    switch (CurTok) {
    case tok_eof:    return;
    case ';':        getNextToken(); break;  // ignore top-level semicolons.
    case tok_def:    HandleDefinition(); break;
    case tok_extern: HandleExtern(); break;
    default:         HandleTopLevelExpression(); break;
    }
  }
}

//===----------------------------------------------------------------------===//
// "Library" functions that can be "extern'd" from user code.
//===----------------------------------------------------------------------===//

/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}

//===----------------------------------------------------------------------===//
// Main driver code.
//===----------------------------------------------------------------------===//

int main() {
  InitializeNativeTarget();
  LLVMContext &Context = getGlobalContext();

  // Install standard binary operators.
  // 1 is lowest precedence.
  BinopPrecedence['<'] = 10;
  BinopPrecedence['+'] = 20;
  BinopPrecedence['-'] = 20;
  BinopPrecedence['*'] = 40;  // highest.

  // Prime the first token.
  fprintf(stderr, "ready> ");
  getNextToken();

  // Make the module, which holds all the code.
  TheModule = new Module("my cool jit", Context);

  // Create the JIT.  This takes ownership of the module.
  std::string ErrStr;
  TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&ErrStr).create();
  if (!TheExecutionEngine) {
    fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
    exit(1);
  }

  FunctionPassManager OurFPM(TheModule);

  // Set up the optimizer pipeline.  Start with registering info about how the
  // target lays out data structures.
  OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
  // Provide basic AliasAnalysis support for GVN.
  OurFPM.add(createBasicAliasAnalysisPass());
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  OurFPM.add(createInstructionCombiningPass());
  // Reassociate expressions.
  OurFPM.add(createReassociatePass());
  // Eliminate Common SubExpressions.
  OurFPM.add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  OurFPM.add(createCFGSimplificationPass());

  OurFPM.doInitialization();

  // Set the global so the code gen can use this.
  TheFPM = &OurFPM;

  // Run the main "interpreter loop" now.
  MainLoop();

  TheFPM = 0;

  // Print out all of the generated code.
  TheModule->dump();

  return 0;
}
</pre>
</div>
<a href="LangImpl5.html">Next: Extending the language: control flow</a>
</div>

<!-- *********************************************************************** -->
<hr>
<address>
  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
  <a href="http://validator.w3.org/check/referer"><img
  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>

  <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
  <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
  Last modified: $Date$
</address>
</body>
</html>