<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">
<title>Kaleidoscope: Adding JIT and Optimizer Support</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="author" content="Chris Lattner">
<link rel="stylesheet" href="../llvm.css" type="text/css">
<div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>

<li><a href="index.html">Up to Tutorial Index</a></li>
<li><a href="#intro">Chapter 4 Introduction</a></li>
<li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
<li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
<li><a href="#jit">Adding a JIT Compiler</a></li>
<li><a href="#code">Full Code Listing</a></li>
<li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
Flow</li>

<div class="doc_author">
  <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
with LLVM</a>" tutorial.  Chapters 1-3 described the implementation of a simple
language and added support for generating LLVM IR.  This chapter describes
two new techniques: adding optimizer support to your language, and adding JIT
compiler support.  These additions will demonstrate how to get nice, efficient
code for the Kaleidoscope language.</p>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="trivialconstfold">Trivial Constant
Folding</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>
Our demonstration for Chapter 3 is elegant and easy to extend.  Unfortunately,
it does not produce wonderful code.  For example, when compiling simple code,
we don't get obvious optimizations:</p>
<div class="doc_code">
ready> <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
        %addtmp = add double 1.000000e+00, 2.000000e+00
        %addtmp1 = add double %addtmp, %x
        ret double %addtmp1
}
<p>This code is a very, very literal transcription of the AST built by parsing
the input.  As such, this transcription lacks optimizations like constant
folding (we'd like to get "<tt>add x, 3.0</tt>" in the example above) as well
as other more important optimizations.  Constant folding, in particular, is a
very common and very important optimization: so much so that many language
implementors implement constant folding support in their AST
representation.</p>
<p>With LLVM, you don't need this support in the AST.  Since all calls to build
LLVM IR go through the LLVM builder, it would be nice if the builder itself
checked to see if there was a constant folding opportunity when you call it.
If so, it could just do the constant fold and return the constant instead of
creating an instruction.  This is exactly what the <tt>LLVMFoldingBuilder</tt>
class does.  Let's make one change:</p>

<div class="doc_code">
static LLVMFoldingBuilder Builder;
<p>All we did was switch from <tt>LLVMBuilder</tt> to
<tt>LLVMFoldingBuilder</tt>.  Though we changed no other code, all of our
instructions are now implicitly constant folded without us having to do
anything about it.  For example, the input above now compiles to:</p>
<div class="doc_code">
ready> <b>def test(x) 1+2+x;</b>
Read function definition:
define double @test(double %x) {
        %addtmp = add double 3.000000e+00, %x
        ret double %addtmp
}
<p>Well, that was easy :).  In practice, we recommend always using
<tt>LLVMFoldingBuilder</tt> when generating code like this.  It has no
"syntactic overhead" for its use (you don't have to uglify your compiler with
constant checks everywhere) and it can dramatically reduce the amount of
LLVM IR that is generated in some cases (particularly for languages with a
macro preprocessor or that use a lot of constants).</p>
<p>On the other hand, the <tt>LLVMFoldingBuilder</tt> is limited by the fact
that it does all of its analysis inline with the code as it is built.  If you
take a slightly more complex example:</p>
<div class="doc_code">
ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready> Read function definition:
define double @test(double %x) {
        %addtmp = add double 3.000000e+00, %x
        %addtmp1 = add double %x, 3.000000e+00
        %multmp = mul double %addtmp, %addtmp1
        ret double %multmp
}
<p>In this case, the LHS and RHS of the multiplication are the same value.  We'd
really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
of computing "<tt>x+3</tt>" twice.</p>

<p>Unfortunately, no amount of local analysis will be able to detect and correct
this.  This requires two transformations: reassociation of expressions (to
make the adds lexically identical) and Common Subexpression Elimination (CSE)
to delete the redundant add instruction.  Fortunately, LLVM provides a broad
range of optimizations that you can use, in the form of "passes".</p>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="optimizerpasses">LLVM Optimization
Passes</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">
<p>LLVM provides many optimization passes, which do many different sorts of
things and have different tradeoffs.  Unlike other systems, LLVM doesn't hold
to the mistaken notion that one set of optimizations is right for all languages
and for all situations.  LLVM allows a compiler implementor to make complete
decisions about what optimizations to use, in which order, and in what
situation.</p>
<p>As a concrete example, LLVM supports both "whole module" passes, which look
across as large a body of code as they can (often a whole file, but if run
at link time, this can be a substantial portion of the whole program), and
"per-function" passes, which just operate on a single function at a time,
without looking at other functions.  For more information on passes and how
they are run, see the <a href="../WritingAnLLVMPass.html">How to Write a
Pass</a> document and the <a href="../Passes.html">List of LLVM
Passes</a>.</p>
<p>For Kaleidoscope, we are currently generating functions on the fly, one at
a time, as the user types them in.  We aren't shooting for the ultimate
optimization experience in this setting, but we also want to catch the easy and
quick stuff where possible.  As such, we will choose to run a few per-function
optimizations as the user types the function in.  If we wanted to make a
"static Kaleidoscope compiler", we would use exactly the code we have now,
except that we would defer running the optimizer until the entire file has been
parsed.</p>
<p>In order to get per-function optimizations going, we need to set up a
<a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold
and organize the LLVM optimizations that we want to run.  Once we have that, we
can add a set of optimizations to run.  The code looks like this:</p>
<div class="doc_code">
  ExistingModuleProvider OurModuleProvider(TheModule);
  FunctionPassManager OurFPM(&OurModuleProvider);

  // Set up the optimizer pipeline.  Start with registering info about how the
  // target lays out data structures.
  OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
  // Do simple "peephole" optimizations and bit-twiddling optzns.
  OurFPM.add(createInstructionCombiningPass());
  // Reassociate expressions.
  OurFPM.add(createReassociatePass());
  // Eliminate Common SubExpressions.
  OurFPM.add(createGVNPass());
  // Simplify the control flow graph (deleting unreachable blocks, etc).
  OurFPM.add(createCFGSimplificationPass());

  // Set the global so the code gen can use this.
  TheFPM = &OurFPM;

  // Run the main "interpreter loop" now.
  MainLoop();
<p>This code defines two objects, an <tt>ExistingModuleProvider</tt> and a
<tt>FunctionPassManager</tt>.  The former is basically a wrapper around our
<tt>Module</tt> that the PassManager requires.  It provides certain flexibility
that we're not going to take advantage of here, so I won't dive into any
details about it.</p>
<p>The meat of the matter here is the definition of "<tt>OurFPM</tt>".  It
requires a pointer to the <tt>Module</tt> (through the <tt>ModuleProvider</tt>)
to construct itself.  Once it is set up, we use a series of "add" calls to add
a bunch of LLVM passes.  The first pass is basically boilerplate: it adds a pass
so that later optimizations know how the data structures in the program are
laid out.  The "<tt>TheExecutionEngine</tt>" variable is related to the JIT,
which we will get to in the next section.</p>
<p>In this case, we choose to add four optimization passes.  The passes we chose
here are a pretty standard set of "cleanup" optimizations that are useful for
a wide variety of code.  I won't delve into what they do, but believe me,
they are a good starting place :).</p>
<p>Once the PassManager is set up, we need to make use of it.  We do this by
running it after our newly created function is constructed (in
<tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
<div class="doc_code">
  if (Value *RetVal = Body->Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);

    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);

    <b>// Optimize the function.
    TheFPM->run(*TheFunction);</b>
<p>As you can see, this is pretty straightforward.  The
<tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
improving (hopefully) its body.  With this in place, we can try our test above
again:</p>
<div class="doc_code">
ready> <b>def test(x) (1+2+x)*(x+(1+2));</b>
ready> Read function definition:
define double @test(double %x) {
        %addtmp = add double %x, 3.000000e+00
        %multmp = mul double %addtmp, %addtmp
        ret double %multmp
}
<p>As expected, we now get our nicely optimized code, saving a floating point
add instruction from every execution of this function.</p>
<p>LLVM provides a wide variety of optimizations that can be used in certain
circumstances.  Some <a href="../Passes.html">documentation about the various
passes</a> is available, but it isn't very complete.  Another good source of
ideas is to look at the passes that <tt>llvm-gcc</tt> or <tt>llvm-ld</tt>
run to get started.  The "<tt>opt</tt>" tool allows you to
experiment with passes from the command line, so you can see if they do
anything.</p>

<p>Now that we have reasonable code coming out of our front-end, let's talk
about executing it!</p>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
<!-- *********************************************************************** -->

<div class="doc_text">

<p>Code that is available in LLVM IR can have a wide variety of tools
applied to it.  For example, you can run optimizations on it (as we did above),
you can dump it out in textual or binary forms, you can compile the code to an
assembly file (.s) for some target, or you can JIT compile it.  The nice thing
about the LLVM IR representation is that it is the "common currency" between
many different parts of the compiler.</p>
<p>In this section, we'll add JIT compiler support to our interpreter.  The
basic idea that we want for Kaleidoscope is to have the user enter function
bodies as they do now, but immediately evaluate the top-level expressions they
type in.  For example, if they type in "1 + 2;", we should evaluate and print
out 3.  If they define a function, they should be able to call it from the
command line.</p>

<p>In order to do this, we first declare and initialize the JIT.  This is done
by adding a global variable and a call in <tt>main</tt>:</p>
<div class="doc_code">
<b>static ExecutionEngine *TheExecutionEngine;</b>
...
  <b>// Create the JIT.
  TheExecutionEngine = ExecutionEngine::create(TheModule);</b>
<p>This creates an abstract "Execution Engine" which can be either a JIT
compiler or the LLVM interpreter.  LLVM will automatically pick a JIT compiler
for you if one is available for your platform, otherwise it will fall back to
the interpreter.</p>
<p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
There are a variety of APIs that are useful, but the simplest one is the
"<tt>getPointerToFunction(F)</tt>" method.  This method JIT compiles the
specified LLVM Function and returns a function pointer to the generated machine
code.  In our case, this means that we can change the code that parses a
top-level expression to look like this:</p>
<div class="doc_code">
static void HandleTopLevelExpression() {
  // Evaluate a top level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F->Codegen()) {
      LF->dump();  // Dump the function for exposition purposes.

      <b>// JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());</b>
<p>Recall that we compile top-level expressions into a self-contained LLVM
function that takes no arguments and returns the computed double.  Because the
LLVM JIT compiler matches the native platform ABI, this means that you can just
cast the result pointer to a function pointer of that type and call it directly.
This means there is no difference between JIT compiled code and native machine
code that is statically linked into your application.</p>
<p>With just these two changes, let's see how Kaleidoscope works now!</p>
<div class="doc_code">
ready> <b>4+5;</b>
define double @""() {
        ret double 9.000000e+00
}

<em>Evaluated to 9.000000</em>
<p>Well, this looks like it is basically working.  The dump of the function
shows the "no argument function that always returns double" that we synthesize
for each top-level expression that is typed in.  This demonstrates very basic
functionality, but can we do more?</p>
<div class="doc_code">
ready> <b>def testfunc(x y) x + y*2;</b>
Read function definition:
define double @testfunc(double %x, double %y) {
        %multmp = mul double %y, 2.000000e+00
        %addtmp = add double %multmp, %x
        ret double %addtmp
}

ready> <b>testfunc(4, 10);</b>
define double @""() {
        %calltmp = call double @testfunc( double 4.000000e+00, double 1.000000e+01 )
        ret double %calltmp
}

<em>Evaluated to 24.000000</em>
<p>This illustrates that we can now call user code, but there is something a bit
subtle going on here.  Note that we only invoke the JIT on the anonymous
functions that <em>call testfunc</em>, but we never invoked it on
<em>testfunc</em> itself.</p>
<p>What actually happened here is that the anonymous function was
JIT'd when requested.  When the Kaleidoscope app calls through the function
pointer that is returned, the anonymous function starts executing.  It ends up
making the call to the "testfunc" function, and ends up in a stub that invokes
the JIT, lazily, on testfunc.  Once the JIT finishes lazily compiling testfunc,
it returns and the code re-executes the call.</p>
<p>In summary, the JIT will lazily JIT code, on the fly, as it is needed.  The
JIT provides a number of other more advanced interfaces for things like freeing
allocated machine code, re-jit'ing functions to update them, etc.  However, even
with this simple code, we get some surprisingly powerful capabilities - check
this out (I removed the dump of the anonymous functions, you should get the idea
by now):</p>
<div class="doc_code">
ready> <b>extern sin(x);</b>
declare double @sin(double)

ready> <b>extern cos(x);</b>
declare double @cos(double)

ready> <b>sin(1.0);</b>
<em>Evaluated to 0.841471</em>

ready> <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
Read function definition:
define double @foo(double %x) {
        %calltmp = call double @sin( double %x )
        %multmp = mul double %calltmp, %calltmp
        %calltmp2 = call double @cos( double %x )
        %multmp4 = mul double %calltmp2, %calltmp2
        %addtmp = add double %multmp, %multmp4
        ret double %addtmp
}

ready> <b>foo(4.0);</b>
<em>Evaluated to 1.000000</em>
<p>Whoa, how does the JIT know about sin and cos?  The answer is surprisingly
simple: in this example, the JIT started execution of a function and got to a
function call.  It realized that the function was not yet JIT compiled and
invoked the standard set of routines to resolve the function.  In this case,
there is no body defined for the function, so the JIT ended up calling
"<tt>dlsym("sin")</tt>" on the Kaleidoscope process itself.  Since
"<tt>sin</tt>" is defined within the JIT's address space, it simply patches up
calls in the module to call the libm version of <tt>sin</tt> directly.</p>
<p>The LLVM JIT provides a number of interfaces (look in the
<tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
resolved.  It allows you to establish explicit mappings between IR objects and
addresses (useful for LLVM global variables that you want to map to static
tables, for example), allows you to dynamically decide on the fly based on the
function name, and even allows you to have the JIT abort itself if any lazy
compilation is attempted.</p>
<p>One interesting application of this is that we can now extend the language
by writing arbitrary C++ code to implement operations.  For example, if we
add:</p>

<div class="doc_code">
/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
<p>Now we can produce simple output to the console by using things like:
"<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
the console (120 is the ASCII code for 'x').  Similar code could be used to
implement file I/O, console input, and many other capabilities in
Kaleidoscope.</p>
<p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial.
At this point, we can compile a non-Turing-complete programming language,
optimize and JIT compile it in a user-driven way.  Next up we'll look into
<a href="LangImpl5.html">extending the language with control flow
constructs</a>, tackling some interesting LLVM IR issues along the way.</p>
<!-- *********************************************************************** -->
<div class="doc_section"><a name="code">Full Code Listing</a></div>
<!-- *********************************************************************** -->
<div class="doc_text">

<p>
Here is the complete code listing for our running example, enhanced with the
LLVM JIT and optimizer.  To build this example, use:
</p>

<div class="doc_code">
g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
<p>Here is the code:</p>

<div class="doc_code">
#include "llvm/DerivedTypes.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/Module.h"
#include "llvm/ModuleProvider.h"
#include "llvm/PassManager.h"
#include "llvm/Analysis/Verifier.h"
#include "llvm/Target/TargetData.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Support/LLVMBuilder.h"
#include <cstdio>
#include <string>
#include <map>
#include <vector>
using namespace llvm;
//===----------------------------------------------------------------------===//
// Lexer
//===----------------------------------------------------------------------===//

// The lexer returns tokens [0-255] if it is an unknown character, otherwise one
// of these for known things.
enum Token {
  tok_eof = -1,

  // commands
  tok_def = -2, tok_extern = -3,

  // primary
  tok_identifier = -4, tok_number = -5,
};
static std::string IdentifierStr;  // Filled in if tok_identifier
static double NumVal;              // Filled in if tok_number

/// gettok - Return the next token from standard input.
static int gettok() {
  static int LastChar = ' ';

  // Skip any whitespace.
  while (isspace(LastChar))
    LastChar = getchar();

  if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
    IdentifierStr = LastChar;
    while (isalnum((LastChar = getchar())))
      IdentifierStr += LastChar;

    if (IdentifierStr == "def") return tok_def;
    if (IdentifierStr == "extern") return tok_extern;
    return tok_identifier;
  }

  if (isdigit(LastChar) || LastChar == '.') {   // Number: [0-9.]+
    std::string NumStr;
    do {
      NumStr += LastChar;
      LastChar = getchar();
    } while (isdigit(LastChar) || LastChar == '.');

    NumVal = strtod(NumStr.c_str(), 0);
    return tok_number;
  }

  if (LastChar == '#') {
    // Comment until end of line.
    do LastChar = getchar();
    while (LastChar != EOF && LastChar != '\n' && LastChar != '\r');

    if (LastChar != EOF)
      return gettok();
  }

  // Check for end of file.  Don't eat the EOF.
  if (LastChar == EOF)
    return tok_eof;

  // Otherwise, just return the character as its ascii value.
  int ThisChar = LastChar;
  LastChar = getchar();
  return ThisChar;
}
//===----------------------------------------------------------------------===//
// Abstract Syntax Tree (aka Parse Tree)
//===----------------------------------------------------------------------===//

/// ExprAST - Base class for all expression nodes.
class ExprAST {
public:
  virtual ~ExprAST() {}
  virtual Value *Codegen() = 0;
};

/// NumberExprAST - Expression class for numeric literals like "1.0".
class NumberExprAST : public ExprAST {
  double Val;
public:
  NumberExprAST(double val) : Val(val) {}
  virtual Value *Codegen();
};

/// VariableExprAST - Expression class for referencing a variable, like "a".
class VariableExprAST : public ExprAST {
  std::string Name;
public:
  VariableExprAST(const std::string &name) : Name(name) {}
  virtual Value *Codegen();
};

/// BinaryExprAST - Expression class for a binary operator.
class BinaryExprAST : public ExprAST {
  char Op;
  ExprAST *LHS, *RHS;
public:
  BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
    : Op(op), LHS(lhs), RHS(rhs) {}
  virtual Value *Codegen();
};

/// CallExprAST - Expression class for function calls.
class CallExprAST : public ExprAST {
  std::string Callee;
  std::vector<ExprAST*> Args;
public:
  CallExprAST(const std::string &callee, std::vector<ExprAST*> &args)
    : Callee(callee), Args(args) {}
  virtual Value *Codegen();
};

/// PrototypeAST - This class represents the "prototype" for a function,
/// which captures its argument names as well as if it is an operator.
class PrototypeAST {
  std::string Name;
  std::vector<std::string> Args;
public:
  PrototypeAST(const std::string &name, const std::vector<std::string> &args)
    : Name(name), Args(args) {}

  Function *Codegen();
};

/// FunctionAST - This class represents a function definition itself.
class FunctionAST {
  PrototypeAST *Proto;
  ExprAST *Body;
public:
  FunctionAST(PrototypeAST *proto, ExprAST *body)
    : Proto(proto), Body(body) {}

  Function *Codegen();
};
//===----------------------------------------------------------------------===//
// Parser
//===----------------------------------------------------------------------===//

/// CurTok/getNextToken - Provide a simple token buffer.  CurTok is the current
/// token the parser is looking at.  getNextToken reads another token from the
/// lexer and updates CurTok with its results.
static int CurTok;
static int getNextToken() {
  return CurTok = gettok();
}

/// BinopPrecedence - This holds the precedence for each binary operator that is
/// defined.
static std::map<char, int> BinopPrecedence;

/// GetTokPrecedence - Get the precedence of the pending binary operator token.
static int GetTokPrecedence() {
  if (!isascii(CurTok))
    return -1;

  // Make sure it's a declared binop.
  int TokPrec = BinopPrecedence[CurTok];
  if (TokPrec <= 0) return -1;
  return TokPrec;
}

/// Error* - These are little helper functions for error handling.
ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str); return 0; }
PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }

static ExprAST *ParseExpression();
/// identifierexpr
///   ::= identifier
///   ::= identifier '(' expression* ')'
static ExprAST *ParseIdentifierExpr() {
  std::string IdName = IdentifierStr;

  getNextToken();  // eat identifier.

  if (CurTok != '(') // Simple variable ref.
    return new VariableExprAST(IdName);

  // Call.
  getNextToken();  // eat (
  std::vector<ExprAST*> Args;
  if (CurTok != ')') {
    while (1) {
      ExprAST *Arg = ParseExpression();
      if (!Arg) return 0;
      Args.push_back(Arg);

      if (CurTok == ')') break;

      if (CurTok != ',')
        return Error("Expected ')'");
      getNextToken();
    }
  }

  // Eat the ')'.
  getNextToken();

  return new CallExprAST(IdName, Args);
}

/// numberexpr ::= number
static ExprAST *ParseNumberExpr() {
  ExprAST *Result = new NumberExprAST(NumVal);
  getNextToken(); // consume the number
  return Result;
}

/// parenexpr ::= '(' expression ')'
static ExprAST *ParseParenExpr() {
  getNextToken();  // eat (.
  ExprAST *V = ParseExpression();
  if (!V) return 0;

  if (CurTok != ')')
    return Error("expected ')'");
  getNextToken();  // eat ).
  return V;
}

/// primary
///   ::= identifierexpr
///   ::= numberexpr
///   ::= parenexpr
static ExprAST *ParsePrimary() {
  switch (CurTok) {
  default: return Error("unknown token when expecting an expression");
  case tok_identifier: return ParseIdentifierExpr();
  case tok_number:     return ParseNumberExpr();
  case '(':            return ParseParenExpr();
  }
}
/// binoprhs
///   ::= ('+' primary)*
static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
  // If this is a binop, find its precedence.
  while (1) {
    int TokPrec = GetTokPrecedence();

    // If this is a binop that binds at least as tightly as the current binop,
    // consume it, otherwise we are done.
    if (TokPrec < ExprPrec)
      return LHS;

    // Okay, we know this is a binop.
    int BinOp = CurTok;
    getNextToken();  // eat binop

    // Parse the primary expression after the binary operator.
    ExprAST *RHS = ParsePrimary();
    if (!RHS) return 0;

    // If BinOp binds less tightly with RHS than the operator after RHS, let
    // the pending operator take RHS as its LHS.
    int NextPrec = GetTokPrecedence();
    if (TokPrec < NextPrec) {
      RHS = ParseBinOpRHS(TokPrec+1, RHS);
      if (RHS == 0) return 0;
    }

    // Merge LHS/RHS.
    LHS = new BinaryExprAST(BinOp, LHS, RHS);
  }
}

/// expression
///   ::= primary binoprhs
static ExprAST *ParseExpression() {
  ExprAST *LHS = ParsePrimary();
  if (!LHS) return 0;

  return ParseBinOpRHS(0, LHS);
}
/// prototype
///   ::= id '(' id* ')'
static PrototypeAST *ParsePrototype() {
  if (CurTok != tok_identifier)
    return ErrorP("Expected function name in prototype");

  std::string FnName = IdentifierStr;
  getNextToken();

  if (CurTok != '(')
    return ErrorP("Expected '(' in prototype");

  std::vector<std::string> ArgNames;
  while (getNextToken() == tok_identifier)
    ArgNames.push_back(IdentifierStr);
  if (CurTok != ')')
    return ErrorP("Expected ')' in prototype");

  // success.
  getNextToken();  // eat ')'.

  return new PrototypeAST(FnName, ArgNames);
}

/// definition ::= 'def' prototype expression
static FunctionAST *ParseDefinition() {
  getNextToken();  // eat def.
  PrototypeAST *Proto = ParsePrototype();
  if (Proto == 0) return 0;

  if (ExprAST *E = ParseExpression())
    return new FunctionAST(Proto, E);
  return 0;
}

/// toplevelexpr ::= expression
static FunctionAST *ParseTopLevelExpr() {
  if (ExprAST *E = ParseExpression()) {
    // Make an anonymous proto.
    PrototypeAST *Proto = new PrototypeAST("", std::vector<std::string>());
    return new FunctionAST(Proto, E);
  }
  return 0;
}

/// external ::= 'extern' prototype
static PrototypeAST *ParseExtern() {
  getNextToken();  // eat extern.
  return ParsePrototype();
}
//===----------------------------------------------------------------------===//
// Code Generation
//===----------------------------------------------------------------------===//

static Module *TheModule;
static LLVMFoldingBuilder Builder;
static std::map<std::string, Value*> NamedValues;
static FunctionPassManager *TheFPM;

Value *ErrorV(const char *Str) { Error(Str); return 0; }

Value *NumberExprAST::Codegen() {
  return ConstantFP::get(Type::DoubleTy, APFloat(Val));
}

Value *VariableExprAST::Codegen() {
  // Look this variable up in the function.
  Value *V = NamedValues[Name];
  return V ? V : ErrorV("Unknown variable name");
}

Value *BinaryExprAST::Codegen() {
  Value *L = LHS->Codegen();
  Value *R = RHS->Codegen();
  if (L == 0 || R == 0) return 0;

  switch (Op) {
  case '+': return Builder.CreateAdd(L, R, "addtmp");
  case '-': return Builder.CreateSub(L, R, "subtmp");
  case '*': return Builder.CreateMul(L, R, "multmp");
  case '<':
    L = Builder.CreateFCmpULT(L, R, "cmptmp");
    // Convert bool 0/1 to double 0.0 or 1.0
    return Builder.CreateUIToFP(L, Type::DoubleTy, "booltmp");
  default: return ErrorV("invalid binary operator");
  }
}

Value *CallExprAST::Codegen() {
  // Look up the name in the global module table.
  Function *CalleeF = TheModule->getFunction(Callee);
  if (CalleeF == 0)
    return ErrorV("Unknown function referenced");

  // If argument mismatch error.
  if (CalleeF->arg_size() != Args.size())
    return ErrorV("Incorrect # arguments passed");

  std::vector<Value*> ArgsV;
  for (unsigned i = 0, e = Args.size(); i != e; ++i) {
    ArgsV.push_back(Args[i]->Codegen());
    if (ArgsV.back() == 0) return 0;
  }

  return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
}
Function *PrototypeAST::Codegen() {
  // Make the function type:  double(double,double) etc.
  std::vector<const Type*> Doubles(Args.size(), Type::DoubleTy);
  FunctionType *FT = FunctionType::get(Type::DoubleTy, Doubles, false);

  Function *F = new Function(FT, Function::ExternalLinkage, Name, TheModule);

  // If F conflicted, there was already something named 'Name'.  If it has a
  // body, don't allow redefinition or reextern.
  if (F->getName() != Name) {
    // Delete the one we just made and get the existing one.
    F->eraseFromParent();
    F = TheModule->getFunction(Name);

    // If F already has a body, reject this.
    if (!F->empty()) {
      ErrorF("redefinition of function");
      return 0;
    }

    // If F took a different number of args, reject.
    if (F->arg_size() != Args.size()) {
      ErrorF("redefinition of function with different # args");
      return 0;
    }
  }

  // Set names for all arguments.
  unsigned Idx = 0;
  for (Function::arg_iterator AI = F->arg_begin(); Idx != Args.size();
       ++AI, ++Idx) {
    AI->setName(Args[Idx]);

    // Add arguments to variable symbol table.
    NamedValues[Args[Idx]] = AI;
  }

  return F;
}

Function *FunctionAST::Codegen() {
  NamedValues.clear();

  Function *TheFunction = Proto->Codegen();
  if (TheFunction == 0)
    return 0;

  // Create a new basic block to start insertion into.
  BasicBlock *BB = new BasicBlock("entry", TheFunction);
  Builder.SetInsertPoint(BB);

  if (Value *RetVal = Body->Codegen()) {
    // Finish off the function.
    Builder.CreateRet(RetVal);

    // Validate the generated code, checking for consistency.
    verifyFunction(*TheFunction);

    // Optimize the function.
    TheFPM->run(*TheFunction);

    return TheFunction;
  }

  // Error reading body, remove function.
  TheFunction->eraseFromParent();
  return 0;
}
//===----------------------------------------------------------------------===//
// Top-Level parsing and JIT Driver
//===----------------------------------------------------------------------===//

static ExecutionEngine *TheExecutionEngine;

static void HandleDefinition() {
  if (FunctionAST *F = ParseDefinition()) {
    if (Function *LF = F->Codegen()) {
      fprintf(stderr, "Read function definition:");
      LF->dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

static void HandleExtern() {
  if (PrototypeAST *P = ParseExtern()) {
    if (Function *F = P->Codegen()) {
      fprintf(stderr, "Read extern: ");
      F->dump();
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

static void HandleTopLevelExpression() {
  // Evaluate a top level expression into an anonymous function.
  if (FunctionAST *F = ParseTopLevelExpr()) {
    if (Function *LF = F->Codegen()) {
      // JIT the function, returning a function pointer.
      void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

      // Cast it to the right type (takes no arguments, returns a double) so we
      // can call it as a native function.
      double (*FP)() = (double (*)())FPtr;
      fprintf(stderr, "Evaluated to %f\n", FP());
    }
  } else {
    // Skip token for error recovery.
    getNextToken();
  }
}

/// top ::= definition | external | expression | ';'
static void MainLoop() {
  while (1) {
    fprintf(stderr, "ready> ");
    switch (CurTok) {
    case tok_eof:    return;
    case ';':        getNextToken(); break;  // ignore top level semicolons.
    case tok_def:    HandleDefinition(); break;
    case tok_extern: HandleExtern(); break;
    default:         HandleTopLevelExpression(); break;
    }
  }
}
//===----------------------------------------------------------------------===//
// "Library" functions that can be "extern'd" from user code.
//===----------------------------------------------------------------------===//

/// putchard - putchar that takes a double and returns 0.
extern "C"
double putchard(double X) {
  putchar((char)X);
  return 0;
}
//===----------------------------------------------------------------------===//
// Main driver code.
//===----------------------------------------------------------------------===//

int main() {
  // Install standard binary operators.
  // 1 is lowest precedence.
  BinopPrecedence['<'] = 10;
  BinopPrecedence['+'] = 20;
  BinopPrecedence['-'] = 20;
  BinopPrecedence['*'] = 40;  // highest.

  // Prime the first token.
  fprintf(stderr, "ready> ");
  getNextToken();

  // Make the module, which holds all the code.
  TheModule = new Module("my cool jit");

  // Create the JIT.
  TheExecutionEngine = ExecutionEngine::create(TheModule);

  {
    ExistingModuleProvider OurModuleProvider(TheModule);
    FunctionPassManager OurFPM(&OurModuleProvider);

    // Set up the optimizer pipeline.  Start with registering info about how the
    // target lays out data structures.
    OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
    // Do simple "peephole" optimizations and bit-twiddling optzns.
    OurFPM.add(createInstructionCombiningPass());
    // Reassociate expressions.
    OurFPM.add(createReassociatePass());
    // Eliminate Common SubExpressions.
    OurFPM.add(createGVNPass());
    // Simplify the control flow graph (deleting unreachable blocks, etc).
    OurFPM.add(createCFGSimplificationPass());

    // Set the global so the code gen can use this.
    TheFPM = &OurFPM;

    // Run the main "interpreter loop" now.
    MainLoop();
  }  // Free module provider and pass manager.

  // Print out all of the generated code.
  TheModule->dump();
  return 0;
}
<!-- *********************************************************************** -->

<a href="http://jigsaw.w3.org/css-validator/check/referer"><img
src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
<a href="http://validator.w3.org/check/referer"><img
src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>

<a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
<a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
Last modified: $Date: 2007-10-17 11:05:13 -0700 (Wed, 17 Oct 2007) $