1 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
2 "http://www.w3.org/TR/html4/strict.dtd">
4 <html>
5 <head>
6 <title>Kaleidoscope: Adding JIT and Optimizer Support</title>
7 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
8 <meta name="author" content="Chris Lattner">
9 <link rel="stylesheet" href="../llvm.css" type="text/css">
10 </head>
12 <body>
14 <div class="doc_title">Kaleidoscope: Adding JIT and Optimizer Support</div>
16 <ul>
17 <li><a href="index.html">Up to Tutorial Index</a></li>
18 <li>Chapter 4
19 <ol>
20 <li><a href="#intro">Chapter 4 Introduction</a></li>
21 <li><a href="#trivialconstfold">Trivial Constant Folding</a></li>
22 <li><a href="#optimizerpasses">LLVM Optimization Passes</a></li>
23 <li><a href="#jit">Adding a JIT Compiler</a></li>
24 <li><a href="#code">Full Code Listing</a></li>
25 </ol>
26 </li>
27 <li><a href="LangImpl5.html">Chapter 5</a>: Extending the Language: Control
28 Flow</li>
29 </ul>
31 <div class="doc_author">
32 <p>Written by <a href="mailto:sabre@nondot.org">Chris Lattner</a></p>
33 </div>
35 <!-- *********************************************************************** -->
36 <div class="doc_section"><a name="intro">Chapter 4 Introduction</a></div>
37 <!-- *********************************************************************** -->
39 <div class="doc_text">
41 <p>Welcome to Chapter 4 of the "<a href="index.html">Implementing a language
42 with LLVM</a>" tutorial. Chapters 1-3 described the implementation of a simple
43 language and added support for generating LLVM IR. This chapter describes
44 two new techniques: adding optimizer support to your language, and adding JIT
45 compiler support. These additions will demonstrate how to get nice, efficient code
46 for the Kaleidoscope language.</p>
48 </div>
50 <!-- *********************************************************************** -->
51 <div class="doc_section"><a name="trivialconstfold">Trivial Constant
52 Folding</a></div>
53 <!-- *********************************************************************** -->
55 <div class="doc_text">
57 <p>
58 Our demonstration for Chapter 3 is elegant and easy to extend. Unfortunately,
59 it does not produce wonderful code. The IRBuilder, however, does give us
60 obvious optimizations when compiling simple code:</p>
62 <div class="doc_code">
63 <pre>
64 ready&gt; <b>def test(x) 1+2+x;</b>
65 Read function definition:
66 define double @test(double %x) {
67 entry:
68 %addtmp = fadd double 3.000000e+00, %x
69 ret double %addtmp
70 }
71 </pre>
72 </div>
74 <p>This code is not a literal transcription of the AST built by parsing the
75 input. That would be:
77 <div class="doc_code">
78 <pre>
79 ready&gt; <b>def test(x) 1+2+x;</b>
80 Read function definition:
81 define double @test(double %x) {
82 entry:
83 %addtmp = fadd double 2.000000e+00, 1.000000e+00
84 %addtmp1 = fadd double %addtmp, %x
85 ret double %addtmp1
86 }
87 </pre>
88 </div>
90 <p>Constant folding, as seen above, is a very common and very
91 important optimization: so much so that many language implementors implement
92 constant folding support in their AST representation.</p>
94 <p>With LLVM, you don't need this support in the AST. Since all calls to build
95 LLVM IR go through the LLVM IR builder, the builder itself checks to see if
96 there is a constant folding opportunity when you call it.  If so, it just does
97 the constant fold and returns the constant instead of creating an instruction.</p>
99 <p>Well, that was easy :). In practice, we recommend always using
100 <tt>IRBuilder</tt> when generating code like this. It has no
101 "syntactic overhead" for its use (you don't have to uglify your compiler with
102 constant checks everywhere) and it can dramatically reduce the amount of
103 LLVM IR that is generated in some cases (particularly for languages with a macro
104 preprocessor or that use a lot of constants).</p>
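<p>For illustration, here is a minimal sketch (not part of the tutorial code) of
what the builder's folding means in practice: handing two constants to
<tt>CreateFAdd</tt> gives back a folded <tt>ConstantFP</tt>, and no instruction
is ever inserted into the basic block:</p>

<div class="doc_code">
<pre>
// A sketch only: with the default folding IRBuilder, adding two constants
// produces a constant, not an fadd instruction.
Value *L = ConstantFP::get(getGlobalContext(), APFloat(1.0));
Value *R = ConstantFP::get(getGlobalContext(), APFloat(2.0));
Value *Sum = Builder.CreateFAdd(L, R, "addtmp");
assert(isa&lt;ConstantFP&gt;(Sum) &amp;&amp; "folded away: no instruction was emitted");
</pre>
</div>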
106 <p>On the other hand, the <tt>IRBuilder</tt> is limited by the fact
107 that it does all of its analysis inline with the code as it is built. If you
108 take a slightly more complex example:</p>
110 <div class="doc_code">
111 <pre>
112 ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
113 ready&gt; Read function definition:
114 define double @test(double %x) {
115 entry:
116 %addtmp = fadd double 3.000000e+00, %x
117 %addtmp1 = fadd double %x, 3.000000e+00
118 %multmp = fmul double %addtmp, %addtmp1
119 ret double %multmp
120 }
121 </pre>
122 </div>
124 <p>In this case, the LHS and RHS of the multiplication are the same value. We'd
125 really like to see this generate "<tt>tmp = x+3; result = tmp*tmp;</tt>" instead
126 of computing "<tt>x+3</tt>" twice.</p>
128 <p>Unfortunately, no amount of local analysis will be able to detect and correct
129 this. This requires two transformations: reassociation of expressions (to
130 make the add's lexically identical) and Common Subexpression Elimination (CSE)
131 to delete the redundant add instruction. Fortunately, LLVM provides a broad
132 range of optimizations that you can use, in the form of "passes".</p>
134 </div>
136 <!-- *********************************************************************** -->
137 <div class="doc_section"><a name="optimizerpasses">LLVM Optimization
138 Passes</a></div>
139 <!-- *********************************************************************** -->
141 <div class="doc_text">
143 <p>LLVM provides many optimization passes, which do many different sorts of
144 things and have different tradeoffs. Unlike other systems, LLVM doesn't hold
145 to the mistaken notion that one set of optimizations is right for all languages
146 and for all situations. LLVM allows a compiler implementor to make complete
147 decisions about what optimizations to use, in which order, and in what
148 situation.</p>
150 <p>As a concrete example, LLVM supports "whole module" passes, which look
151 across as large a body of code as they can (often a whole file, but if run
152 at link time, this can be a substantial portion of the whole program). It also
153 supports and includes "per-function" passes, which just operate on a single
154 function at a time, without looking at other functions. For more information
155 on passes and how they are run, see the <a href="../WritingAnLLVMPass.html">How
156 to Write a Pass</a> document and the <a href="../Passes.html">List of LLVM
157 Passes</a>.</p>
159 <p>For Kaleidoscope, we are currently generating functions on the fly, one at
160 a time, as the user types them in. We aren't shooting for the ultimate
161 optimization experience in this setting, but we also want to catch the easy and
162 quick stuff where possible. As such, we will choose to run a few per-function
163 optimizations as the user types the function in. If we wanted to make a "static
164 Kaleidoscope compiler", we would use exactly the code we have now, except that
165 we would defer running the optimizer until the entire file has been parsed.</p>
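<p>As a rough sketch of that alternative (hypothetical code, not used in this
chapter: the interactive tutorial keeps the per-function setup shown below), a
whole-module run over a fully parsed <tt>TheModule</tt> might look like:</p>

<div class="doc_code">
<pre>
// Sketch of a "static compiler" style pipeline: a whole-module PassManager,
// run once after the entire file has been parsed into TheModule.
PassManager OurPM;
OurPM.add(createInstructionCombiningPass());
OurPM.add(createReassociatePass());
OurPM.add(createGVNPass());
OurPM.add(createCFGSimplificationPass());
OurPM.run(*TheModule);
</pre>
</div>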
167 <p>In order to get per-function optimizations going, we need to set up a
168 <a href="../WritingAnLLVMPass.html#passmanager">FunctionPassManager</a> to hold and
169 organize the LLVM optimizations that we want to run. Once we have that, we can
170 add a set of optimizations to run. The code looks like this:</p>
172 <div class="doc_code">
173 <pre>
174 FunctionPassManager OurFPM(TheModule);
176 // Set up the optimizer pipeline. Start with registering info about how the
177 // target lays out data structures.
178 OurFPM.add(new TargetData(*TheExecutionEngine->getTargetData()));
179 // Do simple "peephole" optimizations and bit-twiddling optzns.
180 OurFPM.add(createInstructionCombiningPass());
181 // Reassociate expressions.
182 OurFPM.add(createReassociatePass());
183 // Eliminate Common SubExpressions.
184 OurFPM.add(createGVNPass());
185 // Simplify the control flow graph (deleting unreachable blocks, etc).
186 OurFPM.add(createCFGSimplificationPass());
188 OurFPM.doInitialization();
190 // Set the global so the code gen can use this.
191 TheFPM = &amp;OurFPM;
193 // Run the main "interpreter loop" now.
194 MainLoop();
195 </pre>
196 </div>
198 <p>This code defines a <tt>FunctionPassManager</tt>, "<tt>OurFPM</tt>". It
199 requires a pointer to the <tt>Module</tt> to construct itself. Once it is set
200 up, we use a series of "add" calls to add a bunch of LLVM passes. The first
201 pass is basically boilerplate: it adds a pass so that later optimizations know
202 how the data structures in the program are laid out. The
203 "<tt>TheExecutionEngine</tt>" variable is related to the JIT, which we will get
204 to in the next section.</p>
206 <p>In this case, we choose to add 4 optimization passes. The passes we chose
207 here are a pretty standard set of "cleanup" optimizations that are useful for
208 a wide variety of code. I won't delve into what they do but, believe me,
209 they are a good starting place :).</p>
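<p>If you want to experiment, adding another pass is just one more "add" call
before <tt>doInitialization()</tt>. For example (purely a sketch; this
particular pass has nothing to do for the code we generate so far, since we
never emit allocas):</p>

<div class="doc_code">
<pre>
// Sketch only: one extra line is all it takes to try out another pass.
OurFPM.add(createPromoteMemoryToRegisterPass());
</pre>
</div>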
211 <p>Once the PassManager is set up, we need to make use of it. We do this by
212 running it after our newly created function is constructed (in
213 <tt>FunctionAST::Codegen</tt>), but before it is returned to the client:</p>
215 <div class="doc_code">
216 <pre>
217 if (Value *RetVal = Body->Codegen()) {
218 // Finish off the function.
219 Builder.CreateRet(RetVal);
221 // Validate the generated code, checking for consistency.
222 verifyFunction(*TheFunction);
224 <b>// Optimize the function.
225 TheFPM-&gt;run(*TheFunction);</b>
227 return TheFunction;
229 </pre>
230 </div>
232 <p>As you can see, this is pretty straightforward. The
233 <tt>FunctionPassManager</tt> optimizes and updates the LLVM Function* in place,
234 improving (hopefully) its body. With this in place, we can try our test above
235 again:</p>
237 <div class="doc_code">
238 <pre>
239 ready&gt; <b>def test(x) (1+2+x)*(x+(1+2));</b>
240 ready&gt; Read function definition:
241 define double @test(double %x) {
242 entry:
243 %addtmp = fadd double %x, 3.000000e+00
244 %multmp = fmul double %addtmp, %addtmp
245 ret double %multmp
246 }
247 </pre>
248 </div>
250 <p>As expected, we now get our nicely optimized code, saving a floating point
251 add instruction from every execution of this function.</p>
253 <p>LLVM provides a wide variety of optimizations that can be used in certain
254 circumstances. Some <a href="../Passes.html">documentation about the various
255 passes</a> is available, but it isn't very complete. Another good source of
256 ideas is to look at the passes that <tt>llvm-gcc</tt> or
257 <tt>llvm-ld</tt> run to get started. The "<tt>opt</tt>" tool allows you to
258 experiment with passes from the command line, so you can see if they do
259 anything.</p>
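<p>As a sketch of that workflow (the file name <tt>t.ll</tt> is just an
assumption: it holds the textual IR printed by <tt>TheModule-&gt;dump()</tt>):</p>

<div class="doc_code">
<pre>
# Run a hand-picked set of passes over the IR and see what changed.
opt -S -instcombine -reassociate -gvn -simplifycfg t.ll -o t.opt.ll
diff t.ll t.opt.ll
</pre>
</div>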
261 <p>Now that we have reasonable code coming out of our front-end, let's talk about
262 executing it!</p>
264 </div>
266 <!-- *********************************************************************** -->
267 <div class="doc_section"><a name="jit">Adding a JIT Compiler</a></div>
268 <!-- *********************************************************************** -->
270 <div class="doc_text">
272 <p>Code that is available in LLVM IR can have a wide variety of tools
273 applied to it. For example, you can run optimizations on it (as we did above),
274 you can dump it out in textual or binary forms, you can compile the code to an
275 assembly file (.s) for some target, or you can JIT compile it. The nice thing
276 about the LLVM IR representation is that it is the "common currency" between
277 many different parts of the compiler.
278 </p>
280 <p>In this section, we'll add JIT compiler support to our interpreter. The
281 basic idea that we want for Kaleidoscope is to have the user enter function
282 bodies as they do now, but immediately evaluate the top-level expressions they
283 type in. For example, if they type in "1 + 2;", we should evaluate and print
284 out 3. If they define a function, they should be able to call it from the
285 command line.</p>
287 <p>In order to do this, we first declare and initialize the JIT. This is done
288 by adding a global variable and a call in <tt>main</tt>:</p>
290 <div class="doc_code">
291 <pre>
292 <b>static ExecutionEngine *TheExecutionEngine;</b>
294 int main() {
296 <b>// Create the JIT. This takes ownership of the module.
297 TheExecutionEngine = EngineBuilder(TheModule).create();</b>
300 </pre>
301 </div>
303 <p>This creates an abstract "Execution Engine" which can be either a JIT
304 compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler
305 for you if one is available for your platform, otherwise it will fall back to
306 the interpreter.</p>
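<p>If you want more control, <tt>EngineBuilder</tt> has a few hooks worth
knowing about. The following sketch (optional; the one-line <tt>create()</tt>
above is all this chapter needs) captures a human-readable error message and,
assuming your LLVM version provides <tt>setEngineKind</tt>, explicitly asks for
the JIT rather than the interpreter:</p>

<div class="doc_code">
<pre>
std::string ErrStr;
TheExecutionEngine = EngineBuilder(TheModule)
                       .setErrorStr(&amp;ErrStr)
                       .setEngineKind(EngineKind::JIT)  // or EngineKind::Interpreter
                       .create();
if (!TheExecutionEngine) {
  fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
  exit(1);
}
</pre>
</div>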
308 <p>Once the <tt>ExecutionEngine</tt> is created, the JIT is ready to be used.
309 There are a variety of APIs that are useful, but the simplest one is the
310 "<tt>getPointerToFunction(F)</tt>" method. This method JIT compiles the
311 specified LLVM Function and returns a function pointer to the generated machine
312 code. In our case, this means that we can change the code that parses a
313 top-level expression to look like this:</p>
315 <div class="doc_code">
316 <pre>
317 static void HandleTopLevelExpression() {
318 // Evaluate a top-level expression into an anonymous function.
319 if (FunctionAST *F = ParseTopLevelExpr()) {
320 if (Function *LF = F-&gt;Codegen()) {
321 LF->dump(); // Dump the function for exposition purposes.
323 <b>// JIT the function, returning a function pointer.
324 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
326 // Cast it to the right type (takes no arguments, returns a double) so we
327 // can call it as a native function.
328 double (*FP)() = (double (*)())(intptr_t)FPtr;
329 fprintf(stderr, "Evaluated to %f\n", FP());</b>
331 </pre>
332 </div>
334 <p>Recall that we compile top-level expressions into a self-contained LLVM
335 function that takes no arguments and returns the computed double. Because the
336 LLVM JIT compiler matches the native platform ABI, this means that you can just
337 cast the result pointer to a function pointer of that type and call it directly.
338 This means there is no difference between JIT-compiled code and native machine
339 code that is statically linked into your application.</p>
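<p>The same trick works for functions that take arguments, as long as the
function pointer type you cast to matches the LLVM function type. A sketch
(the name "<tt>foo</tt>" is just an assumed user-defined
<tt>double(double)</tt> Kaleidoscope function):</p>

<div class="doc_code">
<pre>
if (Function *F = TheModule-&gt;getFunction("foo")) {
  void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(F);
  double (*FP)(double) = (double (*)(double))(intptr_t)FPtr;
  fprintf(stderr, "foo(4.0) = %f\n", FP(4.0));
}
</pre>
</div>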
341 <p>With just these two changes, let's see how Kaleidoscope works now!</p>
343 <div class="doc_code">
344 <pre>
345 ready&gt; <b>4+5;</b>
346 define double @""() {
347 entry:
348 ret double 9.000000e+00
349 }
351 <em>Evaluated to 9.000000</em>
352 </pre>
353 </div>
355 <p>Well, this looks like it is basically working. The dump of the function
356 shows the "no argument function that always returns double" that we synthesize
357 for each top-level expression that is typed in. This demonstrates very basic
358 functionality, but can we do more?</p>
360 <div class="doc_code">
361 <pre>
362 ready&gt; <b>def testfunc(x y) x + y*2; </b>
363 Read function definition:
364 define double @testfunc(double %x, double %y) {
365 entry:
366 %multmp = fmul double %y, 2.000000e+00
367 %addtmp = fadd double %multmp, %x
368 ret double %addtmp
369 }
371 ready&gt; <b>testfunc(4, 10);</b>
372 define double @""() {
373 entry:
374 %calltmp = call double @testfunc(double 4.000000e+00, double 1.000000e+01)
375 ret double %calltmp
376 }
378 <em>Evaluated to 24.000000</em>
379 </pre>
380 </div>
382 <p>This illustrates that we can now call user code, but there is something a bit
383 subtle going on here. Note that we only invoke the JIT on the anonymous
384 functions that <em>call testfunc</em>, but we never invoke it
385 on <em>testfunc</em> itself. What actually happened here is that the JIT
386 scanned for all non-JIT'd functions transitively called from the anonymous
387 function and compiled all of them before returning
388 from <tt>getPointerToFunction()</tt>.</p>
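<p>This also means <tt>getPointerToFunction()</tt> is how you would force a
particular function to be compiled eagerly, before anything calls it. A small
sketch (assuming the user has already defined "<tt>testfunc</tt>"):</p>

<div class="doc_code">
<pre>
// Sketch: JIT "testfunc" now, instead of waiting for a caller to trigger it.
if (Function *F = TheModule-&gt;getFunction("testfunc"))
  TheExecutionEngine-&gt;getPointerToFunction(F);
</pre>
</div>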
390 <p>The JIT provides a number of other more advanced interfaces for things like
391 freeing allocated machine code, rejit'ing functions to update them, etc.
392 However, even with this simple code, we get some surprisingly powerful
393 capabilities. Check this out (I removed the dump of the anonymous functions;
394 you should get the idea by now :) :</p>
396 <div class="doc_code">
397 <pre>
398 ready&gt; <b>extern sin(x);</b>
399 Read extern:
400 declare double @sin(double)
402 ready&gt; <b>extern cos(x);</b>
403 Read extern:
404 declare double @cos(double)
406 ready&gt; <b>sin(1.0);</b>
407 <em>Evaluated to 0.841471</em>
409 ready&gt; <b>def foo(x) sin(x)*sin(x) + cos(x)*cos(x);</b>
410 Read function definition:
411 define double @foo(double %x) {
412 entry:
413 %calltmp = call double @sin(double %x)
414 %multmp = fmul double %calltmp, %calltmp
415 %calltmp2 = call double @cos(double %x)
416 %multmp4 = fmul double %calltmp2, %calltmp2
417 %addtmp = fadd double %multmp, %multmp4
418 ret double %addtmp
419 }
421 ready&gt; <b>foo(4.0);</b>
422 <em>Evaluated to 1.000000</em>
423 </pre>
424 </div>
426 <p>Whoa, how does the JIT know about sin and cos? The answer is surprisingly
427 simple: in this
428 example, the JIT started execution of a function and got to a function call. It
429 realized that the function was not yet JIT compiled and invoked the standard set
430 of routines to resolve the function. In this case, there is no body defined
431 for the function, so the JIT ended up calling "<tt>dlsym("sin")</tt>" on the
432 Kaleidoscope process itself.
433 Since "<tt>sin</tt>" is defined within the JIT's address space, it simply
434 patches up calls in the module to call the libm version of <tt>sin</tt>
435 directly.</p>
437 <p>The LLVM JIT provides a number of interfaces (look in the
438 <tt>ExecutionEngine.h</tt> file) for controlling how unknown functions get
439 resolved. It allows you to establish explicit mappings between IR objects and
440 addresses (useful for LLVM global variables that you want to map to static
441 tables, for example), allows you to dynamically decide on the fly based on the
442 function name, and even allows you to have the JIT compile functions lazily the
443 first time they're called.</p>
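<p>As a sketch of two of those interfaces (the names <tt>MyGlobal</tt>,
<tt>MyStaticTable</tt> and <tt>MyFunctionResolver</tt> are hypothetical, but
<tt>addGlobalMapping</tt> and <tt>InstallLazyFunctionCreator</tt> are the
<tt>ExecutionEngine</tt> hooks being described):</p>

<div class="doc_code">
<pre>
// Called when the JIT fails to resolve a symbol on its own; return the
// address to use, or null if you have nothing better to offer.
static void *MyFunctionResolver(const std::string &amp;Name) {
  return 0;
}
...
// Pin an IR global to a static table in the host program.
TheExecutionEngine-&gt;addGlobalMapping(MyGlobal, (void*)&amp;MyStaticTable);
// Let the callback above decide how unknown names get resolved.
TheExecutionEngine-&gt;InstallLazyFunctionCreator(MyFunctionResolver);
</pre>
</div>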
445 <p>One interesting application of this is that we can now extend the language
446 by writing arbitrary C++ code to implement operations. For example, if we add:
447 </p>
449 <div class="doc_code">
450 <pre>
451 /// putchard - putchar that takes a double and returns 0.
452 extern "C"
453 double putchard(double X) {
454 putchar((char)X);
455 return 0;
456 }
457 </pre>
458 </div>
460 <p>Now we can produce simple output to the console by using things like:
461 "<tt>extern putchard(x); putchard(120);</tt>", which prints a lowercase 'x' on
462 the console (120 is the ASCII code for 'x'). Similar code could be used to
463 implement file I/O, console input, and many other capabilities in
464 Kaleidoscope.</p>
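<p>For instance, a second helper in the same spirit (a sketch; <tt>printd</tt>
is not part of this chapter's listing) would let Kaleidoscope print whole
numbers instead of single characters:</p>

<div class="doc_code">
<pre>
/// printd - printf that takes a double and prints it as "%f\n", returning 0.
extern "C"
double printd(double X) {
  printf("%f\n", X);
  return 0;
}
</pre>
</div>

<p>With that compiled in (and "-rdynamic" on Linux, see the build notes below),
"<tt>extern printd(x); printd(123.0);</tt>" would print "123.000000".</p>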
466 <p>This completes the JIT and optimizer chapter of the Kaleidoscope tutorial. At
467 this point, we can compile a non-Turing-complete programming language, optimize
468 and JIT compile it in a user-driven way. Next up we'll look into <a
469 href="LangImpl5.html">extending the language with control flow constructs</a>,
470 tackling some interesting LLVM IR issues along the way.</p>
472 </div>
474 <!-- *********************************************************************** -->
475 <div class="doc_section"><a name="code">Full Code Listing</a></div>
476 <!-- *********************************************************************** -->
478 <div class="doc_text">
481 <p>Here is the complete code listing for our running example, enhanced with the
482 LLVM JIT and optimizer. To build this example, use:
483 </p>
485 <div class="doc_code">
486 <pre>
487 # Compile
488 g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -o toy
489 # Run
490 ./toy
491 </pre>
492 </div>
495 <p>If you are compiling this on Linux, make sure to add the "-rdynamic" option
496 as well. This makes sure that the external functions are resolved properly
497 at runtime.</p>
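<p>For example, on Linux the full command line would look something like:</p>

<div class="doc_code">
<pre>
# Compile (Linux): -rdynamic exports our "library" functions to the JIT's
# symbol lookup.
g++ -g toy.cpp `llvm-config --cppflags --ldflags --libs core jit native` -O3 -rdynamic -o toy
</pre>
</div>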
499 <p>Here is the code:</p>
501 <div class="doc_code">
502 <pre>
503 #include "llvm/DerivedTypes.h"
504 #include "llvm/ExecutionEngine/ExecutionEngine.h"
505 #include "llvm/ExecutionEngine/JIT.h"
506 #include "llvm/LLVMContext.h"
507 #include "llvm/Module.h"
508 #include "llvm/PassManager.h"
509 #include "llvm/Analysis/Verifier.h"
510 #include "llvm/Target/TargetData.h"
511 #include "llvm/Target/TargetSelect.h"
512 #include "llvm/Transforms/Scalar.h"
513 #include "llvm/Support/IRBuilder.h"
514 #include &lt;cstdio&gt;
515 #include &lt;string&gt;
516 #include &lt;map&gt;
517 #include &lt;vector&gt;
518 using namespace llvm;
520 //===----------------------------------------------------------------------===//
521 // Lexer
522 //===----------------------------------------------------------------------===//
524 // The lexer returns tokens [0-255] if it is an unknown character, otherwise one
525 // of these for known things.
526 enum Token {
527 tok_eof = -1,
529 // commands
530 tok_def = -2, tok_extern = -3,
532 // primary
533 tok_identifier = -4, tok_number = -5
534 };
536 static std::string IdentifierStr; // Filled in if tok_identifier
537 static double NumVal; // Filled in if tok_number
539 /// gettok - Return the next token from standard input.
540 static int gettok() {
541 static int LastChar = ' ';
543 // Skip any whitespace.
544 while (isspace(LastChar))
545 LastChar = getchar();
547 if (isalpha(LastChar)) { // identifier: [a-zA-Z][a-zA-Z0-9]*
548 IdentifierStr = LastChar;
549 while (isalnum((LastChar = getchar())))
550 IdentifierStr += LastChar;
552 if (IdentifierStr == "def") return tok_def;
553 if (IdentifierStr == "extern") return tok_extern;
554 return tok_identifier;
555 }
557 if (isdigit(LastChar) || LastChar == '.') { // Number: [0-9.]+
558 std::string NumStr;
559 do {
560 NumStr += LastChar;
561 LastChar = getchar();
562 } while (isdigit(LastChar) || LastChar == '.');
564 NumVal = strtod(NumStr.c_str(), 0);
565 return tok_number;
566 }
568 if (LastChar == '#') {
569 // Comment until end of line.
570 do LastChar = getchar();
571 while (LastChar != EOF &amp;&amp; LastChar != '\n' &amp;&amp; LastChar != '\r');
573 if (LastChar != EOF)
574 return gettok();
575 }
577 // Check for end of file. Don't eat the EOF.
578 if (LastChar == EOF)
579 return tok_eof;
581 // Otherwise, just return the character as its ascii value.
582 int ThisChar = LastChar;
583 LastChar = getchar();
584 return ThisChar;
585 }
587 //===----------------------------------------------------------------------===//
588 // Abstract Syntax Tree (aka Parse Tree)
589 //===----------------------------------------------------------------------===//
591 /// ExprAST - Base class for all expression nodes.
592 class ExprAST {
593 public:
594 virtual ~ExprAST() {}
595 virtual Value *Codegen() = 0;
596 };
598 /// NumberExprAST - Expression class for numeric literals like "1.0".
599 class NumberExprAST : public ExprAST {
600 double Val;
601 public:
602 NumberExprAST(double val) : Val(val) {}
603 virtual Value *Codegen();
604 };
606 /// VariableExprAST - Expression class for referencing a variable, like "a".
607 class VariableExprAST : public ExprAST {
608 std::string Name;
609 public:
610 VariableExprAST(const std::string &amp;name) : Name(name) {}
611 virtual Value *Codegen();
612 };
614 /// BinaryExprAST - Expression class for a binary operator.
615 class BinaryExprAST : public ExprAST {
616 char Op;
617 ExprAST *LHS, *RHS;
618 public:
619 BinaryExprAST(char op, ExprAST *lhs, ExprAST *rhs)
620 : Op(op), LHS(lhs), RHS(rhs) {}
621 virtual Value *Codegen();
622 };
624 /// CallExprAST - Expression class for function calls.
625 class CallExprAST : public ExprAST {
626 std::string Callee;
627 std::vector&lt;ExprAST*&gt; Args;
628 public:
629 CallExprAST(const std::string &amp;callee, std::vector&lt;ExprAST*&gt; &amp;args)
630 : Callee(callee), Args(args) {}
631 virtual Value *Codegen();
632 };
634 /// PrototypeAST - This class represents the "prototype" for a function,
635 /// which captures its name, and its argument names (thus implicitly the number
636 /// of arguments the function takes).
637 class PrototypeAST {
638 std::string Name;
639 std::vector&lt;std::string&gt; Args;
640 public:
641 PrototypeAST(const std::string &amp;name, const std::vector&lt;std::string&gt; &amp;args)
642 : Name(name), Args(args) {}
644 Function *Codegen();
645 };
647 /// FunctionAST - This class represents a function definition itself.
648 class FunctionAST {
649 PrototypeAST *Proto;
650 ExprAST *Body;
651 public:
652 FunctionAST(PrototypeAST *proto, ExprAST *body)
653 : Proto(proto), Body(body) {}
655 Function *Codegen();
656 };
658 //===----------------------------------------------------------------------===//
659 // Parser
660 //===----------------------------------------------------------------------===//
662 /// CurTok/getNextToken - Provide a simple token buffer. CurTok is the current
663 /// token the parser is looking at. getNextToken reads another token from the
664 /// lexer and updates CurTok with its results.
665 static int CurTok;
666 static int getNextToken() {
667 return CurTok = gettok();
668 }
670 /// BinopPrecedence - This holds the precedence for each binary operator that is
671 /// defined.
672 static std::map&lt;char, int&gt; BinopPrecedence;
674 /// GetTokPrecedence - Get the precedence of the pending binary operator token.
675 static int GetTokPrecedence() {
676 if (!isascii(CurTok))
677 return -1;
679 // Make sure it's a declared binop.
680 int TokPrec = BinopPrecedence[CurTok];
681 if (TokPrec &lt;= 0) return -1;
682 return TokPrec;
683 }
685 /// Error* - These are little helper functions for error handling.
686 ExprAST *Error(const char *Str) { fprintf(stderr, "Error: %s\n", Str);return 0;}
687 PrototypeAST *ErrorP(const char *Str) { Error(Str); return 0; }
688 FunctionAST *ErrorF(const char *Str) { Error(Str); return 0; }
690 static ExprAST *ParseExpression();
692 /// identifierexpr
693 /// ::= identifier
694 /// ::= identifier '(' expression* ')'
695 static ExprAST *ParseIdentifierExpr() {
696 std::string IdName = IdentifierStr;
698 getNextToken(); // eat identifier.
700 if (CurTok != '(') // Simple variable ref.
701 return new VariableExprAST(IdName);
703 // Call.
704 getNextToken(); // eat (
705 std::vector&lt;ExprAST*&gt; Args;
706 if (CurTok != ')') {
707 while (1) {
708 ExprAST *Arg = ParseExpression();
709 if (!Arg) return 0;
710 Args.push_back(Arg);
712 if (CurTok == ')') break;
714 if (CurTok != ',')
715 return Error("Expected ')' or ',' in argument list");
716 getNextToken();
717 }
718 }
720 // Eat the ')'.
721 getNextToken();
723 return new CallExprAST(IdName, Args);
724 }
726 /// numberexpr ::= number
727 static ExprAST *ParseNumberExpr() {
728 ExprAST *Result = new NumberExprAST(NumVal);
729 getNextToken(); // consume the number
730 return Result;
731 }
733 /// parenexpr ::= '(' expression ')'
734 static ExprAST *ParseParenExpr() {
735 getNextToken(); // eat (.
736 ExprAST *V = ParseExpression();
737 if (!V) return 0;
739 if (CurTok != ')')
740 return Error("expected ')'");
741 getNextToken(); // eat ).
742 return V;
743 }
745 /// primary
746 /// ::= identifierexpr
747 /// ::= numberexpr
748 /// ::= parenexpr
749 static ExprAST *ParsePrimary() {
750 switch (CurTok) {
751 default: return Error("unknown token when expecting an expression");
752 case tok_identifier: return ParseIdentifierExpr();
753 case tok_number: return ParseNumberExpr();
754 case '(': return ParseParenExpr();
755 }
756 }
758 /// binoprhs
759 /// ::= ('+' primary)*
760 static ExprAST *ParseBinOpRHS(int ExprPrec, ExprAST *LHS) {
761 // If this is a binop, find its precedence.
762 while (1) {
763 int TokPrec = GetTokPrecedence();
765 // If this is a binop that binds at least as tightly as the current binop,
766 // consume it, otherwise we are done.
767 if (TokPrec &lt; ExprPrec)
768 return LHS;
770 // Okay, we know this is a binop.
771 int BinOp = CurTok;
772 getNextToken(); // eat binop
774 // Parse the primary expression after the binary operator.
775 ExprAST *RHS = ParsePrimary();
776 if (!RHS) return 0;
778 // If BinOp binds less tightly with RHS than the operator after RHS, let
779 // the pending operator take RHS as its LHS.
780 int NextPrec = GetTokPrecedence();
781 if (TokPrec &lt; NextPrec) {
782 RHS = ParseBinOpRHS(TokPrec+1, RHS);
783 if (RHS == 0) return 0;
784 }
786 // Merge LHS/RHS.
787 LHS = new BinaryExprAST(BinOp, LHS, RHS);
788 }
789 }
791 /// expression
792 /// ::= primary binoprhs
794 static ExprAST *ParseExpression() {
795 ExprAST *LHS = ParsePrimary();
796 if (!LHS) return 0;
798 return ParseBinOpRHS(0, LHS);
799 }
801 /// prototype
802 /// ::= id '(' id* ')'
803 static PrototypeAST *ParsePrototype() {
804 if (CurTok != tok_identifier)
805 return ErrorP("Expected function name in prototype");
807 std::string FnName = IdentifierStr;
808 getNextToken();
810 if (CurTok != '(')
811 return ErrorP("Expected '(' in prototype");
813 std::vector&lt;std::string&gt; ArgNames;
814 while (getNextToken() == tok_identifier)
815 ArgNames.push_back(IdentifierStr);
816 if (CurTok != ')')
817 return ErrorP("Expected ')' in prototype");
819 // success.
820 getNextToken(); // eat ')'.
822 return new PrototypeAST(FnName, ArgNames);
823 }
825 /// definition ::= 'def' prototype expression
826 static FunctionAST *ParseDefinition() {
827 getNextToken(); // eat def.
828 PrototypeAST *Proto = ParsePrototype();
829 if (Proto == 0) return 0;
831 if (ExprAST *E = ParseExpression())
832 return new FunctionAST(Proto, E);
833 return 0;
834 }
836 /// toplevelexpr ::= expression
837 static FunctionAST *ParseTopLevelExpr() {
838 if (ExprAST *E = ParseExpression()) {
839 // Make an anonymous proto.
840 PrototypeAST *Proto = new PrototypeAST("", std::vector&lt;std::string&gt;());
841 return new FunctionAST(Proto, E);
842 }
843 return 0;
844 }
846 /// external ::= 'extern' prototype
847 static PrototypeAST *ParseExtern() {
848 getNextToken(); // eat extern.
849 return ParsePrototype();
850 }
852 //===----------------------------------------------------------------------===//
853 // Code Generation
854 //===----------------------------------------------------------------------===//
856 static Module *TheModule;
857 static IRBuilder&lt;&gt; Builder(getGlobalContext());
858 static std::map&lt;std::string, Value*&gt; NamedValues;
859 static FunctionPassManager *TheFPM;
861 Value *ErrorV(const char *Str) { Error(Str); return 0; }
863 Value *NumberExprAST::Codegen() {
864 return ConstantFP::get(getGlobalContext(), APFloat(Val));
865 }
867 Value *VariableExprAST::Codegen() {
868 // Look this variable up in the function.
869 Value *V = NamedValues[Name];
870 return V ? V : ErrorV("Unknown variable name");
871 }
873 Value *BinaryExprAST::Codegen() {
874 Value *L = LHS-&gt;Codegen();
875 Value *R = RHS-&gt;Codegen();
876 if (L == 0 || R == 0) return 0;
878 switch (Op) {
879 case '+': return Builder.CreateFAdd(L, R, "addtmp");
880 case '-': return Builder.CreateFSub(L, R, "subtmp");
881 case '*': return Builder.CreateFMul(L, R, "multmp");
882 case '&lt;':
883 L = Builder.CreateFCmpULT(L, R, "cmptmp");
884 // Convert bool 0/1 to double 0.0 or 1.0
885 return Builder.CreateUIToFP(L, Type::getDoubleTy(getGlobalContext()),
886 "booltmp");
887 default: return ErrorV("invalid binary operator");
888 }
889 }
891 Value *CallExprAST::Codegen() {
892 // Look up the name in the global module table.
893 Function *CalleeF = TheModule-&gt;getFunction(Callee);
894 if (CalleeF == 0)
895 return ErrorV("Unknown function referenced");
897 // If argument mismatch error.
898 if (CalleeF-&gt;arg_size() != Args.size())
899 return ErrorV("Incorrect # arguments passed");
901 std::vector&lt;Value*&gt; ArgsV;
902 for (unsigned i = 0, e = Args.size(); i != e; ++i) {
903 ArgsV.push_back(Args[i]-&gt;Codegen());
904 if (ArgsV.back() == 0) return 0;
905 }
907 return Builder.CreateCall(CalleeF, ArgsV.begin(), ArgsV.end(), "calltmp");
908 }
910 Function *PrototypeAST::Codegen() {
911 // Make the function type: double(double,double) etc.
912 std::vector&lt;const Type*&gt; Doubles(Args.size(),
913 Type::getDoubleTy(getGlobalContext()));
914 FunctionType *FT = FunctionType::get(Type::getDoubleTy(getGlobalContext()),
915 Doubles, false);
917 Function *F = Function::Create(FT, Function::ExternalLinkage, Name, TheModule);
919 // If F conflicted, there was already something named 'Name'. If it has a
920 // body, don't allow redefinition or reextern.
921 if (F-&gt;getName() != Name) {
922 // Delete the one we just made and get the existing one.
923 F-&gt;eraseFromParent();
924 F = TheModule-&gt;getFunction(Name);
926 // If F already has a body, reject this.
927 if (!F-&gt;empty()) {
928 ErrorF("redefinition of function");
929 return 0;
930 }
932 // If F took a different number of args, reject.
933 if (F-&gt;arg_size() != Args.size()) {
934 ErrorF("redefinition of function with different # args");
935 return 0;
936 }
937 }
939 // Set names for all arguments.
940 unsigned Idx = 0;
941 for (Function::arg_iterator AI = F-&gt;arg_begin(); Idx != Args.size();
942 ++AI, ++Idx) {
943 AI-&gt;setName(Args[Idx]);
945 // Add arguments to variable symbol table.
946 NamedValues[Args[Idx]] = AI;
947 }
949 return F;
950 }
952 Function *FunctionAST::Codegen() {
953 NamedValues.clear();
955 Function *TheFunction = Proto-&gt;Codegen();
956 if (TheFunction == 0)
957 return 0;
959 // Create a new basic block to start insertion into.
960 BasicBlock *BB = BasicBlock::Create(getGlobalContext(), "entry", TheFunction);
961 Builder.SetInsertPoint(BB);
963 if (Value *RetVal = Body-&gt;Codegen()) {
964 // Finish off the function.
965 Builder.CreateRet(RetVal);
967 // Validate the generated code, checking for consistency.
968 verifyFunction(*TheFunction);
970 // Optimize the function.
971 TheFPM-&gt;run(*TheFunction);
973 return TheFunction;
974 }
976 // Error reading body, remove function.
977 TheFunction-&gt;eraseFromParent();
978 return 0;
979 }
981 //===----------------------------------------------------------------------===//
982 // Top-Level parsing and JIT Driver
983 //===----------------------------------------------------------------------===//
985 static ExecutionEngine *TheExecutionEngine;
987 static void HandleDefinition() {
988 if (FunctionAST *F = ParseDefinition()) {
989 if (Function *LF = F-&gt;Codegen()) {
990 fprintf(stderr, "Read function definition:");
991 LF-&gt;dump();
992 }
993 } else {
994 // Skip token for error recovery.
995 getNextToken();
996 }
997 }
999 static void HandleExtern() {
1000 if (PrototypeAST *P = ParseExtern()) {
1001 if (Function *F = P-&gt;Codegen()) {
1002 fprintf(stderr, "Read extern: ");
1003 F-&gt;dump();
1004 }
1005 } else {
1006 // Skip token for error recovery.
1007 getNextToken();
1008 }
1009 }
1011 static void HandleTopLevelExpression() {
1012 // Evaluate a top-level expression into an anonymous function.
1013 if (FunctionAST *F = ParseTopLevelExpr()) {
1014 if (Function *LF = F-&gt;Codegen()) {
1015 // JIT the function, returning a function pointer.
1016 void *FPtr = TheExecutionEngine-&gt;getPointerToFunction(LF);
1018 // Cast it to the right type (takes no arguments, returns a double) so we
1019 // can call it as a native function.
1020 double (*FP)() = (double (*)())(intptr_t)FPtr;
1021 fprintf(stderr, "Evaluated to %f\n", FP());
1022 }
1023 } else {
1024 // Skip token for error recovery.
1025 getNextToken();
1026 }
1027 }
1029 /// top ::= definition | external | expression | ';'
1030 static void MainLoop() {
1031 while (1) {
1032 fprintf(stderr, "ready&gt; ");
1033 switch (CurTok) {
1034 case tok_eof: return;
1035 case ';': getNextToken(); break; // ignore top-level semicolons.
1036 case tok_def: HandleDefinition(); break;
1037 case tok_extern: HandleExtern(); break;
1038 default: HandleTopLevelExpression(); break;
1039 }
1040 }
1041 }
1043 //===----------------------------------------------------------------------===//
1044 // "Library" functions that can be "extern'd" from user code.
1045 //===----------------------------------------------------------------------===//
1047 /// putchard - putchar that takes a double and returns 0.
1048 extern "C"
1049 double putchard(double X) {
1050 putchar((char)X);
1051 return 0;
1052 }
1054 //===----------------------------------------------------------------------===//
1055 // Main driver code.
1056 //===----------------------------------------------------------------------===//
1058 int main() {
1059 InitializeNativeTarget();
1060 LLVMContext &amp;Context = getGlobalContext();
1062 // Install standard binary operators.
1063 // 1 is lowest precedence.
1064 BinopPrecedence['&lt;'] = 10;
1065 BinopPrecedence['+'] = 20;
1066 BinopPrecedence['-'] = 20;
1067 BinopPrecedence['*'] = 40; // highest.
1069 // Prime the first token.
1070 fprintf(stderr, "ready&gt; ");
1071 getNextToken();
1073 // Make the module, which holds all the code.
1074 TheModule = new Module("my cool jit", Context);
1076 // Create the JIT. This takes ownership of the module.
1077 std::string ErrStr;
1078 TheExecutionEngine = EngineBuilder(TheModule).setErrorStr(&ErrStr).create();
1079 if (!TheExecutionEngine) {
1080 fprintf(stderr, "Could not create ExecutionEngine: %s\n", ErrStr.c_str());
1081 exit(1);
1082 }
1084 FunctionPassManager OurFPM(TheModule);
1086 // Set up the optimizer pipeline. Start with registering info about how the
1087 // target lays out data structures.
1088 OurFPM.add(new TargetData(*TheExecutionEngine-&gt;getTargetData()));
1089 // Do simple "peephole" optimizations and bit-twiddling optzns.
1090 OurFPM.add(createInstructionCombiningPass());
1091 // Reassociate expressions.
1092 OurFPM.add(createReassociatePass());
1093 // Eliminate Common SubExpressions.
1094 OurFPM.add(createGVNPass());
1095 // Simplify the control flow graph (deleting unreachable blocks, etc).
1096 OurFPM.add(createCFGSimplificationPass());
1098 OurFPM.doInitialization();
1100 // Set the global so the code gen can use this.
1101 TheFPM = &amp;OurFPM;
1103 // Run the main "interpreter loop" now.
1104 MainLoop();
1106 TheFPM = 0;
1108 // Print out all of the generated code.
1109 TheModule-&gt;dump();
1111 return 0;
1112 }
1113 </pre>
1114 </div>
1116 <a href="LangImpl5.html">Next: Extending the language: control flow</a>
1117 </div>
1119 <!-- *********************************************************************** -->
1120 <hr>
1121 <address>
1122 <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
1123 src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
1124 <a href="http://validator.w3.org/check/referer"><img
1125 src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>
1127 <a href="mailto:sabre@nondot.org">Chris Lattner</a><br>
1128 <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
1129 Last modified: $Date$
1130 </address>
1131 </body>
1132 </html>