3 <style|<tuple|book|fangle|header-book|tmdoc-keyboard>>
6 <hide-preamble|<assign|LyX|<macro|L<space|-0.1667em><move|Y|0fn|-0.25em><space|-0.125em>X>><assign|par-first|0fn><assign|par-par-sep|0.5fn>>
8 <doc-data|<doc-title|fangle>|<doc-author-data|<author-name|Sam
9 Liddicott>|<\author-address>
11 </author-address>>|<doc-date|August 2009>>
13 <section*|Introduction>
15 <name|Fangle> is a tool for fangled literate programming. Newfangled is
16 defined as <em|New and often needlessly novel> by
17 <name|TheFreeDictionary.com>.
19 In this case, fangled means yet another not-so-new<footnote|but improved.>
20 method for literate programming.
22 <name|Literate Programming> has a long history starting with the great
23 <name|Donald Knuth> himself, whose literate programming tools seem to make
24 use of as many escape sequences for semantic markup as <TeX> (also by <name|Knuth>).
27 <name|Norman Ramsey> wrote the <name|Noweb> set of tools
28 (<verbatim|notangle>, <verbatim|noweave> and <verbatim|noroots>) and
29 helpfully reduced the amount of magic character sequences to pretty much
30 just <verbatim|\<less\>\<less\>>, <verbatim|\<gtr\>\<gtr\>> and
31 <verbatim|@>, and in doing so brought the wonders of literate programming to the masses.
34 While using the <LyX> editor for <LaTeX> editing I had various troubles
35 with the noweb tools, some of which were my fault, some of which were
36 noweb's fault and some of which were <LyX>'s fault.
38 <name|Noweb> generally brought literate programming to the masses through
39 removing some of the complexity of the original literate programming, but
40 this would be of no advantage to me if the <LyX> / <LaTeX> combination
41 brought more complications in their place.
43 <name|Fangle> was thus born (originally called <name|Newfangle>) as an awk
44 replacement for notangle, adding some important features, like better
45 integration with <LyX> and <LaTeX> (and later <TeXmacs>), multiple output
46 format conversions, and fixing notangle bugs like indentation when using the <verbatim|-L> option.
49 Significantly, fangle is just one program which replaces various programs
50 in <name|Noweb>. Noweave is done away with and implemented directly as
51 <LaTeX> macros, and noroots is implemented as a function of the untangler.
54 Fangle is written in awk for portability reasons, awk being available for
55 most platforms. A Python version<\footnote>
56 hasn't anyone implemented awk in python yet?
57 </footnote> was considered for the benefit of <LyX> but a scheme version
58 for <TeXmacs> will probably materialise first, as <TeXmacs> macro
59 capabilities help make edit-time and format-time rendering of fangle chunks
60 simple enough for my weak brain.
62 As an extension to many literate-programming styles, Fangle permits code
63 chunks to take parameters and thus operate somewhat like C pre-processor
64 macros, or like C++ templates. Named parameters (or even local
65 <em|variables> in the caller's scope) are anticipated, as parameterized
66 chunks <emdash> useful though they are <emdash> are hard to comprehend in
67 the literate document.
69 <section*|License><new-page*><label|License>
71 Fangle is licensed under the GPL 3 (or later).
73 This doesn't mean that sources generated by fangle must be licensed under the GPL.
76 This doesn't mean that you can't use or distribute fangle with sources of
77 an incompatible license, but it means you must make the source of fangle available.
80 As fangle is currently written in awk, an interpreted language, this should not be a problem.
83 <\nf-chunk|gpl3-copyright>
84 <item>fangle - fully featured notangle replacement in awk
88 <item>Copyright (C) 2009-2010 Sam Liddicott
89 \<less\>sam@liddicott.com\<gtr\>
93 <item>This program is free software: you can redistribute it and/or
96 <item>it under the terms of the GNU General Public License as published
99 <item>the Free Software Foundation, either version 3 of the License, or
101 <item>(at your option) any later version.
105 <item>This program is distributed in the hope that it will be useful,
107 <item>but WITHOUT ANY WARRANTY; without even the implied warranty of
109 <item>MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. \ See the
111 <item>GNU General Public License for more details.
115 <item>You should have received a copy of the GNU General Public License
117 <item>along with this program. \ If not, see
118 \<less\>http://www.gnu.org/licenses/\<gtr\>.
121 <\table-of-contents|toc>
126 <chapter|Introduction to Literate Programming>
128 Todo: Should really follow on from a part-0 explanation of what literate programming is.
131 <chapter|Running Fangle>
133 Fangle is a replacement for <name|noweb>, which consists of
134 <verbatim|notangle>, <verbatim|noroots> and <verbatim|noweave>.
136 Like <verbatim|notangle> and <verbatim|noroots>, <verbatim|fangle> can read
137 multiple named files, or from stdin.
139 <section|Listing roots>
141 The -r option causes fangle to behave like noroots.
143 <code*|fangle -r filename.tex>
145 will print out the fangle roots of a tex file.\
147 Unlike the <verbatim|noroots> command, the printed roots are not enclosed
148 in angle brackets e.g. <verbatim|\<less\>\<less\>name\<gtr\>\<gtr\>>,
149 unless at least one of the roots is defined using the <verbatim|notangle>
150 notation <verbatim|\<less\>\<less\>name\<gtr\>\<gtr\>=>.
152 Also, unlike noroots, it prints out all roots --- not just those that are
153 not used elsewhere. I find that a root not being used doesn't make it
154 particularly top level <emdash> and so-called top level roots could also be
155 included in another root as well.\
157 My convention is that top level roots to be extracted begin with
158 <verbatim|./> and have the form of a filename.
160 Makefile.inc, discussed in <reference|makefile.inc>, can automatically
161 extract all such sources prefixed with <verbatim|./>.
163 <section|Extracting roots>
165 notangle's <verbatim|-R> and <verbatim|-L> options are supported.
167 If you are using <LyX> or <LaTeX>, the standard way to extract a file would be:
170 <verbatim|fangle -R./Makefile.inc fangle.tex \<gtr\> ./Makefile.inc>
172 If you are using <TeXmacs>, the standard way to extract a file would be:
175 <verbatim|fangle -R./Makefile.inc fangle.txt \<gtr\> ./Makefile.inc>
177 <TeXmacs> users would obtain the text file with a <em|verbatim> export from
178 <TeXmacs> which can be done on the command line with <verbatim|texmacs -s
179 -c fangle.tm fangle.txt -q>
181 Unlike <verbatim|notangle>, the <verbatim|-L> option to generate C
182 pre-processor <verbatim|#line>-style line-number
183 directives does not break the indenting of the generated file.
185 Also, thanks to mode tracking (described in <reference|modes>) the
186 <verbatim|-L> option does not interrupt (and break) multi-line C macros.
189 This does mean that sometimes the compiler might calculate the source line
190 wrongly when generating error messages in such cases, but there isn't any
191 other way around it if multi-line macros include other chunks.
193 Future releases will include a mapping file so that line/character
194 references from the C compiler can be converted to the correct part of the literate document.
197 <section|Formatting the document>
199 The noweave replacement is built into the editing and formatting environment
200 for <TeXmacs>, <LyX> (which uses <LaTeX>), and even for raw <LaTeX>.
202 Use of fangle with <TeXmacs>, <LyX> and <LaTeX> is explained in the next chapters.
205 <chapter|Using Fangle with <LaTeX>>
207 Because the noweave replacement is implemented in <LaTeX>, there is no
208 processing stage required before running the <LaTeX> command. Of course,
209 <LaTeX> may need running two or more times, so that the code chunk
210 references can be fully calculated.
212 The formatting is managed by a set of macros shown in
213 <reference|latex-source>, and can be included with:
215 <verbatim|\\usepackage{fangle.sty}>
217 Norman Ramsey's original <filename|noweb.sty> package is currently required
218 as it is used for formatting the code chunk captions.
220 The <filename|listings.sty> package is required, and is used for formatting
221 the code chunks and syntax highlighting.
223 The <filename|xargs.sty> package is also required, and makes writing
224 <LaTeX> macros so much more pleasant.
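Taken together, a minimal preamble for a fangle-formatted <LaTeX> document might look like the sketch below. This is an illustration only: the package load order is an assumption, and only the package names mentioned above are taken from this document.

```latex
% Sketch of a minimal preamble, assuming fangle.sty, noweb.sty,
% listings.sty and xargs.sty are all on the TeX search path.
\documentclass{article}
\usepackage{noweb}       % used for the code chunk captions
\usepackage{listings}    % code chunk formatting and syntax highlighting
\usepackage{xargs}       % eases the definition of fangle's LaTeX macros
\usepackage{fangle.sty}  % the noweave-replacement macros
\begin{document}
% ... document body with fangle code chunks ...
\end{document}
```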
226 <todo|Add examples of use of Macros>
228 <chapter|Using Fangle with <LyX>>
230 <LyX> uses the same <LaTeX> macros shown in <reference|latex-source> as
231 part of a <LyX> module file <filename|fangle.module>, which automatically
232 includes the macros in the document pre-amble provided that the fangle
233 <LyX> module is used in the document.
235 <section|Installing the <LyX> module>
237 Copy <filename|fangle.module> to your <LyX> layouts directory, which for
238 unix users will be <filename|~/.lyx/layouts>
240 In order to make the new literate styles available, you will need to
241 reconfigure <LyX> by clicking Tools-\<gtr\>Reconfigure, and then re-start <LyX>.
244 <section|Obtaining a decent mono font>
246 The syntax highlighting features of <name|lstlistings> make use of bold;
247 however a mono-space tt font is used to typeset the listings. Obtaining a
248 <with|font-family|tt|<strong|bold> tt font> can be impossibly difficult and
249 amazingly easy. I spent many hours at it, following complicated
250 instructions from those who had spent many hours over it, and was finally
251 delivered the simple solution on the lyx mailing list.
255 The simple way was to add this to my preamble:
258 \\usepackage{txfonts}
260 \\renewcommand{\\ttdefault}{txtt}
267 The next simplest way was to use ams poor-mans-bold, by adding this to the preamble:
273 %\\renewcommand{\\ttdefault}{txtt}
275 %somehow make \\pmb be the command for bold, forgot how, sorry, above
279 It works, but looks wretched on the dvi viewer.
281 <subsection|Luximono>
283 The lstlistings documentation suggests using Luximono.
285 Luximono was installed according to the instructions in Ubuntu Forums
286 thread 1159181<\footnote>
287 http://ubuntuforums.org/showthread.php?t=1159181
288 </footnote> with tips from miknight<\footnote>
289 http://miknight.blogspot.com/2005/11/how-to-install-luxi-mono-font-in.html
290 </footnote> stating that <verbatim|sudo updmap --enable MixedMap ul9.map>
291 is required. It looks fine in PDF and PS view but still looks rotten in dvi view.
294 <section|Formatting your <LyX> document>
296 It is not necessary to base your literate document on any of the original
297 <LyX> literate classes; so select a regular class for your document type.
299 Add the new module <em|Fangle Literate Listings> and also <em|Logical
300 Markup> which is very useful.
302 In the drop-down style listbox you should notice a new style defined, called Chunk.
305 When you wish to insert a literate chunk, you enter its plain name in the
306 Chunk style, instead of the old <name|noweb> method that uses
307 <verbatim|\<less\>\<less\>name\<gtr\>\<gtr\>=> type tags. In the line (or
308 paragraph) following the chunk name, you insert a listing with:
309 Insert-\<gtr\>Program Listing.
311 Inside the white listing box you can type (or paste using
312 <kbd|shift+ctrl+V>) your listing. There is no need to use <kbd|ctrl+enter>
313 at the end of lines as with some older <LyX> literate techniques --- just
314 press enter as normal.
316 <subsection|Customising the listing appearance>
318 The code is formatted using the <name|lstlistings> package. The chunk style
319 doesn't just define the chunk name, but can also define any other chunk
320 options supported by the lstlistings package <verbatim|\\lstset> command.
321 In fact, what you type in the chunk style is raw latex. If you want to set
322 the chunk language without having to right-click the listing, just add
323 <verbatim|,language=C> after the chunk name. (Currently the language will
324 affect all subsequent listings, so you may need to specify
325 <verbatim|,language=> quite a lot).
327 <todo|so fix the bug>
329 Of course you can do this by editing the listings box advanced properties
330 by right-clicking on the listings box, but that takes longer, and you can't
331 see at-a-glance what the advanced settings are while editing the document;
332 also advanced settings apply only to that box --- the chunk settings apply
333 through the rest of the document<\footnote>
334 It ought to apply only to subsequent chunks of the same name. I'll fix
this in a later version.
</footnote>.
338 <todo|So make sure they only apply to chunks of that name>
340 <subsection|Global customisations>
342 As lstlistings is used to set the code chunks, its <verbatim|\\lstset>
343 command can be used in the pre-amble to set some document wide settings.
345 If your source has many words with long sequences of capital letters, then
346 <verbatim|columns=fullflexible> may be a good idea, or the capital letters
347 will get crowded. (I think lstlistings ought to use a slightly smaller font
348 for capital letters so that they still fit).
350 The font family <verbatim|\\ttfamily> looks more normal for code, but has
351 no bold (an alternate typewriter font is used).\
353 With <verbatim|\\ttfamily>, I must also specify
354 <verbatim|columns=fullflexible> or the wrong letter spacing is used.
356 In my <LaTeX> pre-amble I usually specialise my code format with:
358 <\nf-chunk|document-preamble>
361 <item>numbers=left, stepnumber=1, numbersep=5pt,
363 <item>breaklines=false,
365 <item>basicstyle=\\footnotesize\\ttfamily,
367 <item>numberstyle=\\tiny,
371 <item>columns=fullflexible,
373 <item>numberfirstline=true
380 <section|Configuring the build script>
382 You can invoke code extraction and building from the <LyX> menu option
383 Document-\<gtr\>Build Program.
385 First, make sure you don't have a conversion defined for Lyx-\<gtr\>Program already.
387 From the menu Tools-\<gtr\>Preferences, add a conversion from
388 Latex(Plain)-\<gtr\>Program as:
391 set -x ; fangle -Rlyx-build $$i \|\
393 \ \ env LYX_b=$$b LYX_i=$$i LYX_o=$$o LYX_p=$$p LYX_r=$$r bash
396 (But don't cut-n-paste it from this document or you may be pasting a
397 multi-line string which will break your lyx preferences file).\
399 I hope that one day, <LyX> will set these into the environment when calling the build script.
402 You may also want to consider adding options to this conversion...
404 <verbatim|parselog=/usr/share/lyx/scripts/listerrors>
406 ...but if you do you will lose your stderr<\footnote>
407 There is some bash plumbing to get a copy of stderr but this footnote is
too small to contain it.
</footnote>.
411 Now, a shell script chunk called <filename|lyx-build> will be extracted and
412 run whenever you choose the Document-\<gtr\>Build Program menu item.
414 This document was originally managed using <LyX>, and the lyx-build script for
415 this document is shown here for historical reference.\
418 lyx -e latex fangle.lyx && \\
420 \ \ fangle fangle.lyx \<gtr\> ./autoboot
423 This looks simple enough, but as mentioned, fangle has to be had from
424 somewhere before it can be extracted.
428 When the lyx-build chunk is executed, the current directory will be a
429 temporary directory, and <verbatim|LYX_SOURCE> will refer to the tex file
430 in this temporary directory. This is unfortunate as our makefile wants to
431 run from the project directory where the Lyx file is kept.
433 We can extract the project directory from <verbatim|$$r>, and derive the
434 probable Lyx filename from the noweb file that Lyx generated.
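The path juggling in the helper below is plain shell parameter expansion; here is a sketch of the same derivation with hypothetical stand-in values for the <verbatim|LYX_*> variables (in real use, <LyX> supplies these through the converter):

```shell
# Hypothetical values standing in for what the LyX converter would export
LYX_r="/home/sam/project"      # project directory
LYX_i="fangle.tex"             # input TeX file name
LYX_p="/tmp/lyx_tmpdir"        # temporary build directory

PROJECT_DIR="$LYX_r"
LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"   # strip .tex, append .lyx
TEX_DIR="$LYX_p"
TEX_SRC="$TEX_DIR/$LYX_i"

echo "$LYX_SRC"   # /home/sam/project/fangle.lyx
echo "$TEX_SRC"   # /tmp/lyx_tmpdir/fangle.tex
```

The <verbatim|${LYX_i%.tex}> expansion removes the shortest matching suffix, which is why the helper can recover the probable <verbatim|.lyx> name from the generated <verbatim|.tex> name.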
436 <\nf-chunk|lyx-build-helper>
437 <item>PROJECT_DIR="$LYX_r"
439 <item>LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
441 <item>TEX_DIR="$LYX_p"
443 <item>TEX_SRC="$TEX_DIR/$LYX_i"
446 And then we can define a lyx-build fragment similar to the autoboot script.
449 <\nf-chunk|lyx-build>
452 <item><nf-ref|lyx-build-helper|>
454 <item>cd $PROJECT_DIR \|\| exit 1
458 <item>#/usr/bin/fangle -filter ./notanglefix-filter \\
460 <item># \ -R./Makefile.inc "../../noweb-lyx/noweb-lyx3.lyx" \\
462 <item># \ \| sed '/NOWEB_SOURCE=/s/=.*/=samba4-dfs.lyx/' \\
464 <item># \ \<gtr\> ./Makefile.inc
468 <item>#make -f ./Makefile.inc fangle_sources
473 <chapter|Using Fangle with <TeXmacs>>
475 <todo|Write this chapter>
477 <chapter|Fangle with Makefiles><label|makefile.inc>
479 Here we describe a <filename|Makefile.inc> that you can include in your own
480 Makefiles, or glue as a recursive make to other projects.
482 <filename|Makefile.inc> will cope with extracting all the other source
483 files from this or any specified literate document and keeping them up to date.
486 It may also be included by a <verbatim|Makefile> or <verbatim|Makefile.am>
487 defined in a literate document to automatically deal with the extraction of
488 source files and documents during normal builds.
490 Thus, if <verbatim|Makefile.inc> is included into a main project makefile
491 it adds rules for the source files, capable of extracting the source files
492 from the literate document.
494 <section|A word about makefile formats>
496 Whitespace formatting is very important in a Makefile. The first character
497 of each action line must be a TAB.\
500 target: pre-requisite
507 This requires that the literate programming environment have the ability to
508 represent a TAB character in a way that fangle will generate an actual TAB character.
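To check that an extracted makefile really does start its action lines with a literal TAB, something like the following can be used (the file name is hypothetical, and this check is an illustration, not part of fangle):

```shell
# Write a minimal two-line rule; \t in printf is a literal TAB
printf 'hello:\n\techo hello\n' > Makefile.demo

# The first character of the action line must be a TAB (octal 011)
first=$(sed -n '2p' Makefile.demo | cut -c1)
if [ "$first" = "$(printf '\t')" ]; then
    echo "action line starts with TAB"
else
    echo "action line is NOT TAB-indented"
fi
```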
511 We also adopt a convention that code chunks whose names begin with
512 <verbatim|./> should always be automatically extracted from the document.
513 Code chunks whose names do not begin with <verbatim|./> are for internal
514 reference. Such chunks may be extracted directly, but will not be
515 automatically extracted by this Makefile.
517 <section|Extracting Sources>
519 Our makefile has two parts; variables must be defined before the targets that use them.
522 As we progress through this chapter, explaining concepts, we will be adding
523 lines to <nf-ref|Makefile.inc-vars|> and <nf-ref|Makefile.inc-targets|>
524 which are included in <nf-ref|./Makefile.inc|> below.
526 <\nf-chunk|./Makefile.inc>
527 <item><nf-ref|Makefile.inc-vars|>
529 <item><nf-ref|Makefile.inc-default-targets|>
531 <item><nf-ref|Makefile.inc-targets|>
534 We first define a placeholder for the tool <verbatim|fangle> in case it
535 cannot be found in the path.
537 <\nf-chunk|Makefile.inc-vars>
542 <item>RUN_FANGLE=$(AWK) -f $(FANGLE)
545 We also define a placeholder for <verbatim|LITERATE_SOURCE> to hold the
546 name of this document. This will normally be passed on the command line or
547 set by the including makefile.
549 <\nf-chunk|Makefile.inc-vars>
550 <item>#LITERATE_SOURCE=
553 Fangle cannot process <LyX> or <TeXmacs> documents directly, so the first
554 stage is to convert these to more suitable text based formats<\footnote>
555 <LyX> and <TeXmacs> formats are text-based, but not suitable for fangle
to process.
</footnote>.
558 <subsection|Converting from <LyX> to <LaTeX>><label|Converting-from-Lyx>
560 The first stage will always be to convert the <LyX> file to a <LaTeX> file.
561 Fangle must run on a <TeX> file because the <LyX> command
562 <verbatim|server-goto-file-line><\footnote>
563 The Lyx command <verbatim|server-goto-file-line> is used to position the
564 Lyx cursor at the compiler errors.
565 </footnote> requires that the line number provided be a line of the <TeX>
566 file and always maps this to the line in the <LyX> document. We use
567 <verbatim|server-goto-file-line> when moving the cursor to error lines
568 during compile failures.
570 The command <verbatim|lyx -e latex fangle.lyx> will produce
571 <verbatim|fangle.tex>, a <TeX> file; so we define a make target to be the
572 same as the <LyX> file but with the <verbatim|.tex> extension.
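For readers less familiar with make, the suffix substitution <verbatim|$(LYX_SOURCE:.lyx=.tex)> used in the chunk below can be paraphrased in shell as follows (the file names here are hypothetical):

```shell
LYX_SOURCE="fangle.lyx notes.lyx"

# make's $(LYX_SOURCE:.lyx=.tex) replaces the .lyx suffix on each word
TEX_SOURCE=""
for f in $LYX_SOURCE; do
    TEX_SOURCE="$TEX_SOURCE ${f%.lyx}.tex"
done
TEX_SOURCE=${TEX_SOURCE# }    # drop the leading space

echo "$TEX_SOURCE"    # fangle.tex notes.tex
```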
574 The <verbatim|EXTRA_DIST> is for automake support so that the <TeX> files
575 will automatically be distributed with the source, to help those who don't
576 have <LyX> installed.
578 <\nf-chunk|Makefile.inc-vars>
579 <item>LYX_SOURCE=$(LITERATE_SOURCE) # but only the .lyx files
581 <item>TEX_SOURCE=$(LYX_SOURCE:.lyx=.tex)
583 <item>EXTRA_DIST+=$(TEX_SOURCE)
586 We then specify that the <TeX> source is to be generated from the <LyX> source.
589 <\nf-chunk|Makefile.inc-targets>
590 <item>.SUFFIXES: .tex .lyx
594 <item><nf-tab>lyx -e latex $\<less\>
598 <item><nf-tab>rm -f -- $(TEX_SOURCE)
600 <item>clean: clean_tex
603 <subsection|Converting from <TeXmacs>><label|Converting-from-TeXmacs>
605 Fangle cannot process <TeXmacs> files directly<\footnote>
606 but this is planned when <TeXmacs> uses xml as its native format
607 </footnote>, but must first convert them to text files.
609 The command <verbatim|texmacs -c fangle.tm fangle.txt -q> will produce
610 <verbatim|fangle.txt>, a text file; so we define a make target to be the
611 same as the <TeXmacs> file but with the <verbatim|.txt> extension.
613 The <verbatim|EXTRA_DIST> is for automake support so that the text files
614 will automatically be distributed with the source, to help those who don't
615 have <TeXmacs> installed.
617 <\nf-chunk|Makefile.inc-vars>
618 <item>TEXMACS_SOURCE=$(LITERATE_SOURCE) # but only the .tm files
620 <item>TXT_SOURCE=$(LITERATE_SOURCE:.tm=.txt)
622 <item>EXTRA_DIST+=$(TXT_SOURCE)
625 <todo|Add loop around each $\<less\> so multiple targets can be specified>
627 <\nf-chunk|Makefile.inc-targets>
628 <item>.SUFFIXES: .txt .tm
632 <item><nf-tab>texmacs -s -c $\<less\> $@ -q
634 <item>.PHONY: clean_txt
638 <item><nf-tab>rm -f -- $(TXT_SOURCE)
640 <item>clean: clean_txt
643 <section|Extracting Program Source>
645 The program source is extracted using fangle, which is designed to operate
646 on text or <LaTeX> documents<\footnote>
647 <LaTeX> documents are just slightly special text documents.
</footnote>.
650 <\nf-chunk|Makefile.inc-vars>
651 <item>FANGLE_SOURCE=$(TXT_SOURCE)
654 The literate document can result in any number of source files, but not all
655 of these will be changed each time the document is updated. We certainly
656 don't want to update the timestamps of these files and cause the whole
657 source tree to be recompiled just because the literate explanation was
658 revised. We use <verbatim|CPIF> from the <em|Noweb> tools to avoid updating
659 the file if the content has not changed, but should probably write our own.
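Such a replacement might look like the hypothetical <verbatim|my_cpif> shell function sketched below. It illustrates the technique (write stdin to the named file only when the content differs, leaving the timestamp alone otherwise); it is not part of fangle or Noweb:

```shell
# my_cpif FILE -- copy stdin to FILE only if the content differs
# (a hypothetical stand-in for Noweb's cpif, for illustration only)
my_cpif() {
    tmp="$1.cpif-tmp"
    cat > "$tmp"
    if cmp -s "$tmp" "$1" 2>/dev/null; then
        rm -f "$tmp"            # identical: leave FILE's timestamp alone
        echo "unchanged: $1"
    else
        mv "$tmp" "$1"          # different (or missing): update FILE
        echo "updated: $1"
    fi
}

echo "int x;" | my_cpif demo.h      # first write: updated
echo "int x;" | my_cpif demo.h      # same content again: unchanged
```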
661 However, if a source file is not updated, then the fangle file will always
662 have a newer time-stamp and the makefile would always re-attempt to extract
663 a newer source file which would be a waste of time.
665 Because of this, we use a stamp file which is always updated each time the
666 sources are fully extracted from the <LaTeX> document. If the stamp file is
667 newer than the document, then we can avoid an attempt to re-extract any of
668 the sources. Because this stamp file is only updated when extraction is
669 complete, it is safe for the user to interrupt the build-process part-way through.
672 We use <verbatim|echo> rather than <verbatim|touch> to update the stamp
673 file because the <verbatim|touch> command does not work very well over an
674 <verbatim|sshfs> mount that I was using.
676 <\nf-chunk|Makefile.inc-vars>
677 <item>FANGLE_SOURCE_STAMP=$(FANGLE_SOURCE).stamp
680 <\nf-chunk|Makefile.inc-targets>
681 <item>$(FANGLE_SOURCE_STAMP): $(FANGLE_SOURCE) \\
683 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $(FANGLE_SOURCES) ; \\
685 <item><nf-tab>echo -n \<gtr\> $(FANGLE_SOURCE_STAMP)
689 <item><nf-tab>rm -f $(FANGLE_SOURCE_STAMP)
691 <item>clean: clean_stamp
694 <section|Extracting Source Files>
696 We compute <verbatim|FANGLE_SOURCES> to hold the names of all the source
697 files defined in the document. We compute this only once, by means of
698 <verbatim|:=> in the assignment. The sed deletes any
699 <verbatim|\<less\>\<less\>> and <verbatim|\<gtr\>\<gtr\>> which may
700 surround the root names (for compatibility with Noweb's noroots command).
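The stripping and filtering can be tried at the shell, with a <verbatim|printf> standing in for fangle's <verbatim|-r> output (the root names below are hypothetical, and the sed script is written with literal angle brackets):

```shell
# Hypothetical `fangle -r` output: one root per line, possibly
# wrapped in << >> for noweb compatibility; keep only ./ roots
printf '%s\n' '<<./Makefile.inc>>' './fangle' 'gpl3-copyright' |
  sed -e 's/^[<][<]//;s/[>][>]$//;/^\.\//!d'
# prints:
#   ./Makefile.inc
#   ./fangle
```

The first two expressions strip a leading <verbatim|\<less\>\<less\>> and a trailing <verbatim|\<gtr\>\<gtr\>>; the address <verbatim|/^\\.\\//!d> then deletes every line that does not begin with <verbatim|./>.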
702 As we use chunk names beginning with <filename|./> to denote top level
703 fragments that should be extracted, we filter out all fragments that do not
704 begin with <filename|./>.
707 <verbatim|FANGLE_PREFIX> is set to <verbatim|./> by default, but whatever
708 it may be overridden to, the prefix is replaced by a literal
709 <verbatim|./> before extraction so that files will be extracted in the
710 current directory whatever the prefix. This helps namespace or
711 sub-project prefixes like <verbatim|documents:> for chunks like
712 <verbatim|documents:docbook/intro.xml>
715 <todo|This doesn't work though, because it loses the full name and doesn't
716 know what to extract!>
718 <\nf-chunk|Makefile.inc-vars>
719 <item>FANGLE_PREFIX:=\\.\\/
721 <item>FANGLE_SOURCES:=$(shell \\
723 <item> \ $(RUN_FANGLE) -r $(FANGLE_SOURCE) \|\\
725 <item> \ sed -e 's/^[\<less\>][\<less\>]//;s/[\<gtr\>][\<gtr\>]$$//;/^$(FANGLE_PREFIX)/!d'
728 <item> \ \ \ \ \ -e 's/^$(FANGLE_PREFIX)/\\.\\//' )
731 The target below, <verbatim|echo_fangle_sources>, is a helpful debugging
732 target and shows the names of the files that would be extracted.
734 <\nf-chunk|Makefile.inc-targets>
735 <item>.PHONY: echo_fangle_sources
737 <item>echo_fangle_sources: ; @echo $(FANGLE_SOURCES)
740 We define a convenient target called <verbatim|fangle_sources> so that
741 <verbatim|make fangle_sources> will re-extract the source if the
742 literate document has been updated.\
744 <\nf-chunk|Makefile.inc-targets>
745 <item>.PHONY: fangle_sources
747 <item>fangle_sources: $(FANGLE_SOURCE_STAMP)
750 And also a convenient target to remove extracted sources.
752 <\nf-chunk|Makefile.inc-targets>
753 <item>.PHONY: clean_fangle_sources
755 <item>clean_fangle_sources: ; \\
757 <item> \ \ \ \ \ \ \ rm -f -- $(FANGLE_SOURCE_STAMP) $(FANGLE_SOURCES)
760 We now look at the extraction of the source files.
762 This makefile macro <verbatim|if_extension> takes 4 arguments: the filename
763 <verbatim|$(1)>, some extensions to match <verbatim|$(2)>, a shell
764 command to return if the filename does match the extensions <verbatim|$(3)>,
765 and a shell command to return if it does not match the extensions <verbatim|$(4)>.
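The same selection logic can be sketched as a shell function, with a <verbatim|case> pattern taking the place of make's <verbatim|findstring>; this is an illustration of the idea, not code used by fangle:

```shell
# if_extension FILE EXTENSIONS IF_MATCH IF_NOT -- echo IF_MATCH when
# FILE's suffix is listed in EXTENSIONS, IF_NOT otherwise
# (a shell paraphrase of the make macro, for illustration only)
if_extension() {
    suffix=".${1##*.}"
    case " $2 " in
        *" $suffix "*) echo "$3" ;;
        *)             echo "$4" ;;
    esac
}

if_extension hello.c   ".c .h" "-L -T8" ""   # prints -L -T8
if_extension notes.txt ".c .h" "-L -T8" ""   # prints an empty string
```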
768 <\nf-chunk|Makefile.inc-vars>
769 <item>if_extension=$(if $(findstring $(suffix $(1)),$(2)),$(3),$(4))
772 For some source files like C files, we want to output the line number and
773 filename of the original <LaTeX> document from which the source was extracted.
775 I plan to replace this option with a separate mapping file so as not to
776 pollute the generated source, and also to allow a code pretty-printing
777 reformatter like <verbatim|indent> be able to re-format the file and
778 adjust for changes through comparing the character streams.
781 To make this easier we define the file extensions for which we want to do this.
784 <\nf-chunk|Makefile.inc-vars>
785 <item>C_EXTENSIONS=.c .h
788 We can then use the <verbatim|if_extension> macro to define a macro which
789 expands out to the <verbatim|-L> option if fangle is being invoked in a C
790 source file, so that C compile errors will refer to the line number in the literate document.
793 <\nf-chunk|Makefile.inc-vars>
796 <item>nf_line=-L -T$(TABS)
798 <item>fangle=$(RUN_FANGLE) $(call if_extension,$(2),$(C_EXTENSIONS),$(nf_line))
802 We can use a similar trick to define an indent macro which takes just the
803 filename as an argument and can return a pipeline stage calling the indent
804 command. Indent can be turned off by overriding <verbatim|indent> on the make command line.
807 <\nf-chunk|Makefile.inc-vars>
808 <item>indent_options=-npro -kr -i8 -ts8 -sob -l80 -ss -ncs
810 <item>indent=$(call if_extension,$(1),$(C_EXTENSIONS), \| indent
814 We now define the pattern for extracting a file. The files are written
815 using noweb's <verbatim|cpif> so that the file timestamp will not be
816 touched if the contents haven't changed. This avoids the need to rebuild
817 the entire project because of a typographical change in the documentation,
818 or when only a few (or none) of the C source files have changed.
820 <\nf-chunk|Makefile.inc-vars>
821 <item>fangle_extract=@mkdir -p $(dir $(1)) && \\
823 <item> \ $(call fangle,$(2),$(1)) \<gtr\> "$(1).tmp" && \\
825 <item> \ cat "$(1).tmp" $(indent) \| cpif "$(1)" \\
827 <item> \ && rm -f -- "$(1).tmp" \|\| \\
829 <item> \ (echo error fangling $(1) from $(2) ; exit 1)
832 We define a target which will extract or update all sources. To do this we
833 first defined a makefile template that can do this for any source file in
834 the <LaTeX> document.
836 <\nf-chunk|Makefile.inc-vars>
837 <item>define FANGLE_template
841 <item><nf-tab>$$(call fangle_extract,$(1),$(2))
843 <item> \ FANGLE_TARGETS+=$(1)
848 We then enumerate the discovered <verbatim|FANGLE_SOURCES> to generate a
849 makefile rule for each one using the makefile template we defined above.
851 <\nf-chunk|Makefile.inc-targets>
852 <item>$(foreach source,$(FANGLE_SOURCES),\\
854 <item> \ $(eval $(call FANGLE_template,$(source),$(FANGLE_SOURCE))) \\
859 These will all be built with <verbatim|FANGLE_SOURCE_STAMP>.
861 We also remove the generated sources on a make distclean.
863 <\nf-chunk|Makefile.inc-targets>
864 <item>_distclean: clean_fangle_sources
867 <section|Extracting Documentation>
869 We then identify the intermediate stages of the documentation and their
870 build and clean targets.
872 <\nf-chunk|Makefile.inc-default-targets>
873 <item>.PHONY: clean_pdf
876 <subsection|Formatting <TeX>>
878 <subsubsection|Running pdflatex>
880 We produce a pdf file from the tex file.
882 <\nf-chunk|Makefile.inc-vars>
883 <item>FANGLE_PDF+=$(TEX_SOURCE:.tex=.pdf)
886 We run pdflatex twice to be sure that the contents and aux files are up to
887 date. We certainly are <em|required> to run pdflatex at least twice if
888 these files do not exist.
890 <\nf-chunk|Makefile.inc-targets>
891 <item>.SUFFIXES: .tex .pdf
895 <item><nf-tab>pdflatex $\<less\> && pdflatex $\<less\>
901 <item><nf-tab>rm -f -- $(FANGLE_PDF) $(TEX_SOURCE:.tex=.toc) \\
903 <item><nf-tab> \ $(TEX_SOURCE:.tex=.log) $(TEX_SOURCE:.tex=.aux)
905 <item>clean_pdf: clean_pdf_tex
908 <subsection|Formatting <TeXmacs>>
910 <TeXmacs> can produce a PDF file directly.
912 <\nf-chunk|Makefile.inc-vars>
913 <item>FANGLE_PDF+=$(LITERATE_SOURCE:.tm=.pdf)
917 Outputting the PDF may not be enough to update the links and page references;
920 we need to update twice, generate a pdf, update twice more and generate a pdf again.
923 Basically the PDF export of <TeXmacs> is pretty rotten and doesn't work
924 properly from the CLI.
927 <\nf-chunk|Makefile.inc-targets>
928 <item>.SUFFIXES: .tm .pdf
932 <item><nf-tab>texmacs -s -c $\<less\> $@ -q
936 <item>clean_pdf_texmacs:
938 <item><nf-tab>rm -f -- $(FANGLE_PDF)
940 <item>clean_pdf: clean_pdf_texmacs
943 <subsection|Building the Documentation as a Whole>
945 Currently we only build pdf as a final format, but <verbatim|FANGLE_DOCS>
946 may later hold other output formats.
948 <\nf-chunk|Makefile.inc-vars>
949 <item>FANGLE_DOCS=$(FANGLE_PDF)
952 We also define <verbatim|fangle_docs> as a convenient phony target.
954 <\nf-chunk|Makefile.inc-targets>
955 <item>.PHONY: fangle_docs
957 <item>fangle_docs: $(FANGLE_DOCS)
959 <item>docs: fangle_docs
962 And define a convenient <verbatim|clean_fangle_docs> which we add to the <verbatim|clean> target.
965 <\nf-chunk|Makefile.inc-targets>
966 <item>.PHONY: clean_fangle_docs
968 <item>clean_fangle_docs: clean_tex clean_pdf
970 <item>clean: clean_fangle_docs
974 <item>distclean_fangle_docs: clean_tex clean_fangle_docs
976 <item>distclean: clean distclean_fangle_docs
979 <section|Other helpers>
981 If <filename|Makefile.inc> is included into <filename|Makefile>, then
982 extracted files can be updated with this command:
984 <verbatim|make fangle_sources>
988 Otherwise, the extracted files can be updated directly with:

<verbatim|make -f Makefile.inc fangle_sources>
990 <section|Boot-strapping the extraction>
992 As well as having the makefile extract or update the source files as part
993 of its operation, it also seems convenient to have the makefile
994 re-extract itself from <em|this> document.
996 It would also be convenient to have the code that extracts the makefile
997 from this document to also be part of this document, however we have to
998 start somewhere and this unfortunately requires us to type at least a few
999 words by hand to start things off.
1001 Therefore we will have a minimal root fragment, which, when extracted, can
1002 cope with extracting the rest of the source. This shell script fragment can
1003 do that. Its name is <verbatim|*> <emdash> out of regard for <name|Noweb>,
1004 but when extracted might better be called <verbatim|autoupdate>.
1013 <item>MAKE_SRC="${1:-${NW_LYX:-../../noweb-lyx/noweb-lyx3.lyx}}"
1015 <item>MAKE_SRC=`dirname "$MAKE_SRC"`/`basename "$MAKE_SRC" .lyx`
1017 <item>NOWEB_SRC="${2:-${NOWEB_SRC:-$MAKE_SRC.lyx}}"
1019 <item>lyx -e latex $MAKE_SRC
1023 <item>fangle -R./Makefile.inc ${MAKE_SRC}.tex \\
1025 <item> \ \| sed "/FANGLE_SOURCE=/s/^/#/;T;aNOWEB_SOURCE=$FANGLE_SRC" \\
1027 <item> \ \| cpif ./Makefile.inc
1031 <item>make -f ./Makefile.inc fangle_sources
1034 The general Makefile can be invoked with <filename|./autoboot> and can also
1035 be included into any automake file to automatically re-generate the source files.
1038 The <em|autoboot> can be extracted with this command:
1041 lyx -e latex fangle.lyx && \\
1043 \ \ fangle fangle.lyx \<gtr\> ./autoboot
1046 This looks simple enough but, as mentioned, fangle must first be obtained
1047 from somewhere before it can be extracted.
1049 On a unix system this will extract <filename|fangle.module> and the
1050 <filename|fangle> awk script, and run some basic tests.\
1052 <todo|cross-ref to test chapter when it is a chapter all on its own>
1054 <section|Incorporating Makefile.inc into existing projects>
1056 If you are writing a literate module of an existing non-literate program,
1057 you may find it easier to use a small recursive make instead of directly
1058 including <verbatim|Makefile.inc> in the project's makefile.
1060 This way there is less chance of definitions in <verbatim|Makefile.inc>
1061 interfering with definitions in the main makefile, or with definitions in
1062 other <verbatim|Makefile.inc> files from other literate modules of the same project.
1065 To do this we add some <em|glue> to the project makefile that invokes
1066 Makefile.inc in the right way. The glue works by adding a <verbatim|.PHONY>
1067 target to call the recursive make, and adding this target as an additional
1068 pre-requisite to the existing targets.
1070 <paragraph|Example>Sub-module of existing system
1072 In this example, we are building <verbatim|module.so> as a literate module
1073 of a larger project.
1075 We will show the sort of glue that can be inserted into the project's
1076 Makefile <emdash> or more likely <emdash> a regular Makefile included in or invoked
1077 by the project's Makefile.
1079 <\nf-chunk|makefile-glue>
1080 <item>module_srcdir=modules/module
1082 <item>MODULE_SOURCE=module.tm
1084 <item>MODULE_STAMP=$(MODULE_SOURCE).stamp
1087 The existing build system may already have a build target for
1088 <filename|module.o>, but we just add another pre-requisite to that. In this
1089 case we use <filename|module.tm.stamp> as a pre-requisite, the stamp file's
1090 modified time indicating when all sources were extracted<\footnote>
1091 If the project's build system does not know how to build the module from
1092 the extracted sources, then just add build actions here as normal.
1095 <\nf-chunk|makefile-glue>
1096 <item>$(module_srcdir)/module.o: $(module_srcdir)/$(MODULE_STAMP)
1099 The target for this new pre-requisite will be generated by a recursive make
1100 using <filename|Makefile.inc> which will make sure that the source is up to
1101 date, before it is built by the main project's makefile.
1103 <\nf-chunk|makefile-glue>
1104 <item>$(module_srcdir)/$(MODULE_STAMP): $(module_srcdir)/$(MODULE_SOURCE)
1106 <item><nf-tab>$(MAKE) -C $(module_srcdir) -f Makefile.inc fangle_sources
1107 LITERATE_SOURCE=$(MODULE_SOURCE)
1110 We can do similar glue for the docs, clean and distclean targets. In this
1111 example the main project was using a double colon for these targets, so we
1112 must use the same in our glue.
1114 <\nf-chunk|makefile-glue>
1115 <item>docs:: docs_module
1117 <item>.PHONY: docs_module
1121 <item><nf-tab>$(MAKE) -C $(module_srcdir) -f Makefile.inc docs
1122 LITERATE_SOURCE=$(MODULE_SOURCE)
1126 <item>clean:: clean_module
1128 <item>.PHONY: clean_module
1132 <item><nf-tab>$(MAKE) -C $(module_srcdir) -f Makefile.inc clean
1133 LITERATE_SOURCE=$(MODULE_SOURCE)
1137 <item>distclean:: distclean_module
1139 <item>.PHONY: distclean_module
1141 <item>distclean_module:
1143 <item><nf-tab>$(MAKE) -C $(module_srcdir) -f Makefile.inc distclean
1144 LITERATE_SOURCE=$(MODULE_SOURCE)
1147 We could add similar glue for install targets to install the generated docs.
1151 <chapter|Fangle Makefile>
1153 We use the copyright notice from chapter <reference|License>, and the
1154 Makefile.inc from chapter <reference|makefile.inc>
1156 <\nf-chunk|./Makefile>
1157 <item># <nf-ref|gpl3-copyright|>
1161 <item><nf-ref|make-fix-make-shell|>
1165 <item>LITERATE_SOURCE=fangle.tm
1169 <item>all: fangle_sources
1171 <item>include Makefile.inc
1177 <item>./fangle: test
1183 <item>test: fangle.txt
1185 <item><nf-tab>$(RUN_FANGLE) -R"test:*" fangle.txt \<gtr\> test.sh
1187 <item><nf-tab>bash test.sh ; echo pass $$?
1190 <chapter|Fangle awk source code>
1192 We use the copyright notice from chapter <reference|License>.
1194 <\nf-chunk|./fangle>
1195 <item>#! /usr/bin/awk -f
1197 <item># <nf-ref|gpl3-copyright|>
1200 We also use code from <person|Arnold Robbins>'s public domain getopt (1993
1201 revision) defined in <reference|getopt>, and naturally want to attribute this.
1204 <\nf-chunk|./fangle>
1205 <item># NOTE: Arnold Robbins public domain getopt for awk is also used:
1207 <item><nf-ref|getopt.awk-header|>
1209 <item><nf-ref|getopt.awk-getopt()|>
1214 And include the following chunks (which are explained further on) to make up the complete fangle program.
1217 <\nf-chunk|./fangle>
1218 <item><nf-ref|helper-functions|>
1220 <item><nf-ref|mode-tracker|>
1222 <item><nf-ref|parse_chunk_args|>
1224 <item><nf-ref|chunk-storage-functions|>
1226 <item><nf-ref|output_chunk_names()|>
1228 <item><nf-ref|output_chunks()|>
1230 <item><nf-ref|write_chunk()|>
1232 <item><nf-ref|expand_chunk_args()|>
1236 <item><nf-ref|begin|>
1238 <item><nf-ref|recognize-chunk|>
1243 <section|AWK tricks>
1245 The portable way to erase an array in awk is to split the empty string, so
1246 we define a fangle macro that can erase an array, like this:
1248 <\nf-chunk|awk-delete-array>
1249 <item>split("", <nf-arg|ARRAY>);
1250 </nf-chunk|awk|<tuple|ARRAY>>
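The effect is easy to verify from the shell; this throw-away sketch (not part of the fangle source) instantiates the macro for an array called <verbatim|a>:

```shell
# Portable array erase: split()ing the empty string empties the array
# in any awk, whereas a bare "delete a" (no subscript) is not universal.
awk 'BEGIN {
  a["x"] = 1; a["y"] = 2
  split("", a)                 # erase the array
  n = 0
  for (k in a) n++
  print "elements left: " n   # prints: elements left: 0
}'
```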
1252 For debugging it is sometimes convenient to be able to dump the contents of
1253 an array to <verbatim|stderr>, and so this macro is also useful.
1255 <\nf-chunk|dump-array>
1256 <item>print "\\nDump: <nf-arg|ARRAY>\\n--------\\n" \<gtr\>
1259 <item>for (_x in <nf-arg|ARRAY>) {
1261 <item> \ print _x "=" <nf-arg|ARRAY>[_x] "\\n" \<gtr\> "/dev/stderr";
1265 <item>print "========\\n" \<gtr\> "/dev/stderr";
1266 </nf-chunk|awk|<tuple|ARRAY>>
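A sketch of the expanded macro in use (the array name and contents here are purely illustrative); stderr is redirected to stdout so the dump is visible in a pipeline:

```shell
# Instantiate the dump-array macro for an array called "a"; each
# key=value pair is written to stderr so it does not pollute the output.
awk 'BEGIN {
  a["name"] = "freddie"
  print "\nDump: a\n--------\n" > "/dev/stderr"
  for (_x in a) {
    print _x "=" a[_x] "\n" > "/dev/stderr"
  }
  print "========\n" > "/dev/stderr"
}' 2>&1
```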
1268 <section|Catching errors>
1270 Fatal errors are issued with the error function:
1273 <item>function error(message)
1277 <item> \ print "ERROR: " FILENAME ":" FNR " " message \<gtr\>
1285 and likewise for non-fatal warnings:
1288 <item>function warning(message)
1292 <item> \ print "WARNING: " FILENAME ":" FNR " " message \<gtr\>
1295 <item> \ warnings++;
1300 and debug output too:
1303 <item>function debug_log(message)
1307 <item> \ print "DEBUG: " FILENAME ":" FNR " " message \<gtr\>
1313 <todo|append=helper-functions>
1315 <\nf-chunk|helper-functions>
1316 <item><nf-ref|error()|>
1319 <chapter|<TeXmacs> args>
1321 <TeXmacs> functions with arguments<\footnote>
1322 or function declarations with parameters
1323 </footnote> appear like this:
1325 <math|<math-tt|blah(><wide*|<wide|<math-tt|I came, I saw, I
1326 conquered>|\<wide-overbrace\>><rsup|argument 1><wide|<math-tt|<key|^K>>,
1327 |\<wide-overbrace\>><rsup|sep.><wide|and then went home
1328 asd|\<wide-overbrace\>><rsup|argument 3><wide|<math-tt|<key|^K>><math-tt|)>|\<wide-overbrace\>><rsup|term.>|\<wide-underbrace\>><rsub|arguments>>
1330 Arguments commence after the opening parenthesis. The first argument runs
1331 up till the next <key|^K>.\
1333 If the following character is a <key|,> then another argument follows. If
1334 the next character after the <key|,> is a space character, then it is also
1335 eaten. The fangle stylesheet emits <key|^K><key|,><key|space> as
1336 separators, but the fangle untangler will forgive a missing space.
1338 If the following character is <key|)> then this is a terminator and there
1339 are no more arguments.
1341 <\nf-chunk|constants>
1342 <item>ARG_SEPARATOR=sprintf("%c", 11);
1345 To process the <verbatim|text> in this fashion, we split the string on the <verbatim|ARG_SEPARATOR> character.
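A throw-away shell sketch of the raw split (the sample text is illustrative, not real TeXmacs output):

```shell
# Split TeXmacs-style argument text on the ^K (ASCII 11) separator.
printf 'first\013, second\013)' | awk '
BEGIN { ARG_SEPARATOR = sprintf("%c", 11) }
{
  n = split($0, args, ARG_SEPARATOR)
  for (i = 1; i <= n; i++) print i ": " args[i]
}'
# Parts after the first still carry their leading ", " and the final ")"
# terminator, which the function below strips.
```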
1350 <\nf-chunk|get_chunk_args>
1351 <item>function get_texmacs_chunk_args(text, args, \ \ a, done) {
1353 <item> \ split(text, args, ARG_SEPARATOR);
1359 <item> \ for (a=1; (a in args); a++) if (a\<gtr\>1) {
1361 <item> \ \ \ if (args[a] == "" \|\| substr(args[a], 1, 1) == ")") done=1;
1363 <item> \ \ \ if (done) {
1365 <item> \ \ \ \ \ delete args[a];
1367 <item> \ \ \ \ \ break;
1373 <item> \ \ \ if (substr(args[a], 1, 2) == ", ") args[a]=substr(args[a],
1376 <item> \ \ \ else if (substr(args[a], 1, 1) == ",")
1377 args[a]=substr(args[a], 2); \
1384 <chapter|<LaTeX> and lstlistings>
1386 <todo|Split LyX and TeXmacs parts>
1388 For <LyX> and <LaTeX>, the <verbatim|lstlistings> package is used to format
1389 the lines of code chunks. You may recall from chapter XXX that arguments to
1390 a chunk definition are pure <LaTeX> code. This means that fangle needs to
1391 be able to parse <LaTeX> a little.
1393 <LaTeX> arguments to <verbatim|lstlistings> macros are a comma-separated
1394 list of key-value pairs, and values containing commas are enclosed in
1395 <verbatim|{> braces <verbatim|}> (which is to be expected for <LaTeX>).
1397 A sample expression is:
1399 <verbatim|name=thomas, params={a, b}, something, something-else>
1401 but we see that this is just a simpler form of this expression:
1403 <verbatim|name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc>
1405 We may consider that we need a function that can parse such <LaTeX>
1406 expressions and assign the values to an AWK associative array, perhaps using
1407 a recursive parser into a multi-dimensional hash<\footnote>
1408 as AWK doesn't have nested-hash support
1409 </footnote>, resulting in:
1411 <tabular|<tformat|<cwith|2|6|1|2|cell-lborder|0.5pt>|<cwith|2|6|1|2|cell-rborder|0.5pt>|<cwith|2|6|1|2|cell-bborder|0.5pt>|<cwith|2|6|1|2|cell-tborder|0.5pt>|<cwith|1|1|1|2|cell-lborder|0.5pt>|<cwith|1|1|1|2|cell-rborder|0.5pt>|<cwith|1|1|1|2|cell-bborder|0.5pt>|<cwith|1|1|1|2|cell-tborder|0.5pt>|<table|<row|<cell|key>|<cell|value>>|<row|<cell|a[name]>|<cell|freddie>>|<row|<cell|a[foo,
1412 bar]>|<cell|baz>>|<row|<cell|a[foo, quux,
1413 quirk]>|<cell|>>|<row|<cell|a[foo, quux,
1414 a]>|<cell|fleeg>>|<row|<cell|a[etc]>|<cell|>>>>>
1416 Yet, on reflection, it seems that such nesting is sometimes not
1417 desirable, as the braces are also used to delimit values that contain
1418 commas --- we may consider that
1420 <verbatim|name={williamson, freddie}>
1422 should assign <verbatim|williamson, freddie> to <verbatim|name>.
1424 In fact we are not so interested in the detail so as to be bothered by
1425 this, which turns out to be a good thing for two reasons. Firstly <TeX> has
1426 a malleable parser with no strict syntax, and secondly whether or not
1427 <verbatim|williamson> and <verbatim|freddie> should count as two items will
1428 be context-dependent anyway.
1430 We need to parse this <LaTeX> for only one reason: we are
1431 extending lstlistings to add some additional arguments which will be used
1432 to express chunk parameters and other chunk options.
1434 <section|Additional lstlistings parameters>
1436 Further on we define a <verbatim|\\Chunk> <LaTeX> macro whose arguments
1437 will consist of the chunk name, optionally followed by a comma and then a
1438 comma-separated list of arguments. In fact we will just need to prefix
1439 <verbatim|name=> to the arguments in order to create valid lstlistings options.
1442 There will be other arguments supported too:
1445 <item*|params>As an extension to many literate-programming styles, fangle
1446 permits code chunks to take parameters and thus operate somewhat like C
1447 pre-processor macros, or like C++ templates. Chunk parameters are
1448 declared with a chunk argument called params, which holds a semi-colon
1449 separated list of parameters, like this:
1451 <verbatim|achunk,language=C,params=name;address>
1453 <item*|addto>a named chunk that this chunk is to be included into. This
1454 saves the effort of having to declare another listing of the named chunk
1455 merely to include this one.
1458 Function get_chunk_args() will accept two parameters, text being the text to
1459 parse, and values being an array to receive the parsed values as described
1460 above. The optional parameter path is used during recursion to build up the
1461 multi-dimensional array path.
1463 <\nf-chunk|./fangle>
1464 <item><nf-ref|get_chunk_args()|>
1467 <\nf-chunk|get_chunk_args()>
1468 <item>function get_tex_chunk_args(text, values,
1470 <item> \ # optional parameters
1472 <item> \ path, # hierarchical precursors
1474 <item> \ # local vars
1479 The strategy is to parse the name, and then look for a value. If the value
1480 begins with a brace <verbatim|{>, then we recurse and consume as much of
1481 the text as necessary, returning the remaining text when we encounter a
1482 leading close-brace <verbatim|}>. This being the strategy --- and executed
1483 in a loop --- we realise that we must first look for the closing brace
1484 (perhaps preceded by white space) in order to terminate the recursion and
1485 return the remaining text.
1487 <\nf-chunk|get_chunk_args()>
1490 <item> \ split("", values);
1492 <item> \ while(length(text)) {
1494 <item> \ \ \ if (match(text, "^ *}(.*)", a)) {
1496 <item> \ \ \ \ \ return a[1];
1500 <item> \ \ \ <nf-ref|parse-chunk-args|>
1504 <item> \ return text;
1509 We can see that the text could be inspected with this regex:
1511 <\nf-chunk|parse-chunk-args>
1512 <item>if (! match(text, " *([^,=]*[^,= ]) *(([,=]) *(([^,}]*) *,*
1515 <item> \ return text;
1520 and that <verbatim|a> will have the following values:
1522 <tabular|<tformat|<cwith|2|7|1|2|cell-lborder|0.5pt>|<cwith|2|7|1|2|cell-rborder|0.5pt>|<cwith|2|7|1|2|cell-bborder|0.5pt>|<cwith|2|7|1|2|cell-tborder|0.5pt>|<cwith|1|1|1|2|cell-lborder|0.5pt>|<cwith|1|1|1|2|cell-rborder|0.5pt>|<cwith|1|1|1|2|cell-bborder|0.5pt>|<cwith|1|1|1|2|cell-tborder|0.5pt>|<table|<row|<cell|a[n]>|<cell|assigned
1523 text>>|<row|<cell|1>|<cell|freddie>>|<row|<cell|2>|<cell|=freddie,
1524 foo={bar=baz, quux={quirk, a=fleeg}}, etc>>|<row|<cell|3>|<cell|=>>|<row|<cell|4>|<cell|freddie,
1525 foo={bar=baz, quux={quirk, a=fleeg}}, etc>>|<row|<cell|5>|<cell|freddie>>|<row|<cell|6>|<cell|,
1526 foo={bar=baz, quux={quirk, a=fleeg}}, etc>>>>>
1528 <verbatim|a[3]> will be either <verbatim|=> or <verbatim|,> and signify
1529 whether the option named in <verbatim|a[1]> has a value or not.
1532 If the option does have a value, then if the expression
1533 <verbatim|substr(a[4],1,1)> returns a brace <verbatim|{> it will signify
1534 that we need to recurse:
1536 <\nf-chunk|parse-chunk-args>
1539 <item>if (a[3] == "=") {
1541 <item> \ if (substr(a[4],1,1) == "{") {
1543 <item> \ \ \ text = get_tex_chunk_args(substr(a[4],2), values, path name
1548 <item> \ \ \ values[path name]=a[5];
1550 <item> \ \ \ text = a[6];
1556 <item> \ values[path name]="";
1558 <item> \ text = a[2];
1563 We can test this function like this:
1565 <\nf-chunk|gca-test.awk>
1566 <item><nf-ref|get_chunk_args()|>
1570 <item> \ SUBSEP=".";
1574 <item> \ print get_tex_chunk_args("name=freddie, foo={bar=baz,
1575 quux={quirk, a=fleeg}}, etc", a);
1577 <item> \ for (b in a) {
1579 <item> \ \ \ print "a[" b "] =\<gtr\> " a[b];
1586 which should give this output:
1588 <\nf-chunk|gca-test.awk-results>
1589 <item>a[foo.quux.quirk] =\<gtr\>\
1591 <item>a[foo.quux.a] =\<gtr\> fleeg
1593 <item>a[foo.bar] =\<gtr\> baz
1595 <item>a[etc] =\<gtr\>\
1597 <item>a[name] =\<gtr\> freddie
1600 <section|Parsing chunk arguments><label|Chunk Arguments>
1602 Arguments to parameterized chunks are expressed in round brackets as a comma
1603 separated list of optional arguments. For example, a chunk that is defined
1606 <verbatim|\\Chunk{achunk, params=name ; address}>
1608 could be invoked as:
1610 <verbatim|\\chunkref{achunk}(John Jones, jones@example.com)>
1612 An argument list may be as simple as in <verbatim|\\chunkref{pull}(thing,
1613 otherthing)> or as complex as:
1615 <verbatim|\\chunkref{pull}(things[x, y], get_other_things(a, "(all)"))>
1617 --- which for all its commas and quotes and parentheses represents only
1618 two parameters: <verbatim|things[x, y]> and <verbatim|get_other_things(a,
1621 If we simply split the parameter list on commas, then the comma in
1622 <verbatim|things[x,y]> would split it into two separate arguments:
1623 <verbatim|things[x> and <verbatim|y]> --- neither of which makes sense on its own.
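The problem can be demonstrated from the shell; this naive split (a throw-away sketch, not fangle code) yields four fragments where only two arguments were intended:

```shell
# A naive comma split cannot tell argument-separating commas from
# commas inside brackets or quotes.
printf '%s' 'things[x, y], get_other_things(a, "(all)")' \
  | awk -F', ' '{ for (i = 1; i <= NF; i++) print i ": " $i }'
# prints four fields: things[x / y] / get_other_things(a / "(all)")
```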
1626 One way to prevent this would be by refusing to split text between matching
1627 delimiters, such as <verbatim|[>, <verbatim|]>, <verbatim|(>, <verbatim|)>,
1628 <verbatim|{>, <verbatim|}> and most likely also <verbatim|">, <verbatim|">
1629 and <verbatim|'>, <verbatim|'>. Of course this also makes it impossible to
1630 pass such mis-matched code fragments as parameters, but I think that it
1631 would be hard for readers to cope with authors who would pass such
1632 unbalanced code fragments as chunk parameters<\footnote>
1633 I know that I couldn't cope with users doing such things, and although
1634 the GPL3 license prevents me from actually forbidding anyone from trying,
1635 if they want it to work they'll have to write the code themselves and not
1636 expect any support from me.
1639 Unfortunately, the full set of matching delimiters may vary from language
1640 to language. In certain C++ template contexts, <verbatim|\<less\>> and
1641 <verbatim|\<gtr\>> would count as delimiters, and yet in other contexts they would not.
1644 This puts me in the unfortunate position of having to somewhat parse all
1645 programming languages without knowing what they are!
1647 However, if this universal mode-tracking is possible, then parsing the
1648 arguments would be trivial. Such a mode tracker is described in chapter
1649 <reference|modes> and is used here quite simply.
1651 <\nf-chunk|parse_chunk_args>
1652 <item>function parse_chunk_args(language, text, values, mode,
1654 <item> \ # local vars
1656 <item> \ c, context, rest)
1660 <item> \ <nf-ref|new-mode-tracker|<tuple|context|language|mode>>
1662 <item> \ rest = mode_tracker(context, text, values);
1664 <item> \ # extract values
1666 <item> \ for(c=1; c \<less\>= context[0, "values"]; c++) {
1668 <item> \ \ \ values[c] = context[0, "values", c];
1672 <item> \ return rest;
1677 <section|Expanding parameters in the text>
1679 Within the body of the chunk, the parameters are referred to with:
1680 <verbatim|${name}> and <verbatim|${address}>. There is a strong case that a
1681 <LaTeX> style notation should be used, like <verbatim|\\param{name}> which
1682 would be expressed in the listing as <verbatim|=\<less\>\\param{name}\<gtr\>>
1683 and be rendered as <verbatim|<nf-arg|name>>. Such notation would make me go
1684 blind, but I do intend to adopt it.
1686 We therefore need a function <verbatim|expand_chunk_args> which will take a
1687 block of text, a list of permitted parameters, and the arguments which must
1688 substitute for the parameters.\
1690 Here we split the text on <verbatim|${>, which means that all parts except
1691 the first will begin with a parameter name terminated by
1692 <verbatim|}>. The split function consumes the literal <verbatim|${> in the process.
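This sketch (with illustrative sample text, not fangle code) shows the effect of the split; only parts after the first can begin with a parameter name:

```shell
# Split on the literal ${ marker; the marker itself is consumed by split().
printf '%s' 'Dear ${name}, your ${thing} is ready.' | awk '{
  n = split($0, parts, "\\${")
  for (i = 1; i <= n; i++) print i ": " parts[i]
}'
# parts[1] has no parameter reference; parts[2] and parts[3] start with
# "name}" and "thing}" respectively.
```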
1695 <\nf-chunk|expand_chunk_args()>
1696 <item>function expand_chunk_args(text, params, args, \
1698 <item> \ p, text_array, next_text, v, t, l)
1702 <item> \ if (split(text, text_array, "\\\\${")) {
1704 <item> \ \ \ <nf-ref|substitute-chunk-args|>
1710 <item> \ return text;
1715 First, we produce an associative array of substitution values indexed by
1716 parameter names. This will serve as a cache, allowing us to look up the
1717 replacement values as we extract each name.
1719 <\nf-chunk|substitute-chunk-args>
1720 <item>for(p in params) {
1722 <item> \ v[params[p]]=args[p];
1727 We accumulate substituted text in the variable text. As the first part of
1728 the split function is the part before the delimiter --- which is
1729 <verbatim|${> in our case --- this part will never contain a parameter
1730 reference, so we assign this directly to the result kept in
1733 <\nf-chunk|substitute-chunk-args>
1734 <item>text=text_array[1];
1737 We then iterate over the remaining values in the array, and substitute each
1738 reference for its argument.
1740 <\nf-chunk|substitute-chunk-args>
1741 <item>for(t=2; t in text_array; t++) {
1743 <item> \ <nf-ref|substitute-chunk-arg|>
1748 After the split on <verbatim|${> a valid parameter reference will consist
1749 of a valid parameter name terminated by a close-brace <verbatim|}>. A valid
1750 parameter name begins with an underscore or a letter, and may contain
1751 letters, digits or underscores.
1753 A valid-looking reference that is not actually the name of a parameter will
1754 be left alone and not substituted. This is good because there is nothing to substitute
1755 anyway, and it avoids clashes when writing code for languages where
1756 <verbatim|${...}> is a valid construct --- such constructs will not be
1757 interfered with unless the parameter name also matches.
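A sketch of that test applied to individual pieces after the split (the sample pieces are illustrative):

```shell
# Only pieces that start with a valid parameter name followed by } are
# treated as references; anything else is left alone.
for piece in 'name}, rest' '0bad} nope' 'PATH} ok'; do
  printf '%s' "$piece" | awk '{
    if (match($0, /^[a-zA-Z_][a-zA-Z0-9_]*}/))
      print "reference to: " substr($0, 1, RLENGTH - 1)
    else
      print "not a parameter reference"
  }'
done
# "name}" and "PATH}" qualify; "0bad}" starts with a digit and does not.
```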
1759 <\nf-chunk|substitute-chunk-arg>
1760 <item>if (match(text_array[t], "^([a-zA-Z_][a-zA-Z0-9_]*)}", l) &&
1762 <item> \ \ \ l[1] in v)\
1766 <item> \ text = text v[l[1]] substr(text_array[t], length(l[1])+2);
1770 <item> \ text = text "${" text_array[t];
1775 <chapter|Language Modes & Quoting><label|modes>
1777 <verbatim|lstlistings> and <verbatim|fangle> both recognize source
1778 languages, and perform some basic parsing and syntax highlighting in the
1779 rendered document<\footnote>
1780 although lstlisting supports many more languages
1781 </footnote>. <verbatim|lstlistings> can detect strings and comments within
1782 a language definition and perform suitable rendering, such as italics for
1783 comments, and visible-spaces within strings.
1785 Fangle similarly can recognize strings, and comments, etc, within a
1786 language, so that any chunks included with <verbatim|\\chunkref{a-chunk}>
1787 or <nf-ref|a-chunk|> can be suitably escaped or quoted.
1789 <section|Modes explanation>
1791 As an example, the C language has a few parse modes, which affect the
1792 interpretation of characters.
1794 One parse mode is the string mode. The string mode is commenced by an
1795 un-escaped quotation mark <verbatim|"> and terminated by the same. Within
1796 the string mode, only one additional mode can be commenced: the
1797 backslash mode <verbatim|\\>, which is always terminated by the following character.
1800 Another mode is <verbatim|[> which is terminated by a <verbatim|]> (unless
1801 it occurs in a string).
1803 Consider this fragment of C code:
1805 <math|<math-tt|do_something><wide|<around*|(|<math-tt|things><wide|<around|[|<math-tt|x>,
1806 <math-tt|y>|]>|\<wide-overbrace\>><rsup|2. <math-tt|[> mode><math-tt|,
1807 get_other_things><wide|<around|(|<math-tt|a>,
1808 <wide*|<text|"><math-tt|<around|(|all|)>><text|">|\<wide-underbrace\>><rsub|4.
1809 <text|"> mode>|)>|\<wide-overbrace\>><rsup|3. <math-tt|(>
1810 mode>|)>|\<wide-overbrace\>><rsup|1. <math-tt|(> mode>>
1814 Mode nesting prevents the close parenthesis in the quoted string (part 4)
1815 from terminating the parenthesis mode (part 3).
1817 Each language has a set of modes, the default mode being the null mode.
1818 Each mode can lead to other modes.
1820 <section|Modes affect included chunks>
1822 For instance, consider this chunk with <verbatim|language=perl>:
1824 <\nf-chunk|test:example-perl>
1825 <item>print "hello world $0\\n";
1828 If it were included in a chunk with <verbatim|language=sh>, like this:
1830 <\nf-chunk|test:example-sh>
1831 <item>perl -e "<nf-ref|test:example-perl|>"
1834 we might want fangle to generate output like this:
1836 <\nf-chunk|test:example-sh.result>
1837 <item>perl -e "print \\"hello world \\$0\\\\n\\";"
1840 See that the double quote <verbatim|">, back-slash <verbatim|\\> and
1841 <verbatim|$> have been quoted with a back-slash to protect them from shell interpolation.
1844 If that were then included in a chunk with language=make, like this:
1846 <\nf-chunk|test:example-makefile>
1847 <item>target: pre-req
1849 <item><nf-tab><nf-ref|test:example-sh|>
1852 We would need the output to look like this --- note the <verbatim|$$> as
1853 the single <verbatim|$> has been makefile-quoted with another <verbatim|$>.
1855 <\nf-chunk|test:example-makefile.result>
1856 <item>target: pre-req
1858 <item><nf-tab>perl -e "print \\"hello world \\$$0\\\\n\\";"
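The doubling of dollar signs can be sketched with a single sed substitution (this is an illustration only; fangle's actual mechanism uses its per-language escapes tables):

```shell
# Double each $ so that make passes a single $ through to the shell.
sh_line='perl -e "print \"hello world \$0\\n\";"'
printf '%s\n' "$sh_line" | sed 's/\$/$$/g'
# prints: perl -e "print \"hello world \$$0\\n\";"
```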
1861 <section|Language Mode Definitions>
1863 In order to make this work, we must define a mode-tracker supporting each
1864 language, that can detect the various quoting modes, and provide a
1865 transformation that may be applied to any included text so that included
1866 text will be interpreted correctly after any interpolation that it may be
1867 subject to at run-time.
1869 For example, the sed transformation for text to be inserted into shell
1870 double-quoted strings would be something like:
1872 <verbatim|s/\\\\/\\\\\\\\/g;s/\\$/\\\\$/g;s/"/\\\\"/g;>
1874 which would protect <verbatim|\\ $ ">
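A sketch of the transformation in action, applied to the earlier perl example; note that the dollar sign must be matched as a literal (<verbatim|\\$>) in the sed pattern, and that the backslash substitution runs first so later substitutions are not re-escaped:

```shell
# Escape \, $ and " so the text survives inside a sh double-quoted string.
src='print "hello world $0\n";'
printf '%s' "$src" | sed 's/\\/\\\\/g;s/\$/\\$/g;s/"/\\"/g'
# prints: print \"hello world \$0\\n\";
```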
1876 All mode definitions are stored in a single multi-dimensional hash called <verbatim|modes>:
1879 <verbatim|modes[language, mode, properties]>
1881 The first index is the language, and the second index is the mode. The
1882 third indexes hold properties such as terminators, possible submodes,
1883 transformations, and so forth.
1885 <\nf-chunk|xmode:set-terminators>
1886 <item>modes["<nf-arg|language>", "<nf-arg|mode>",
1887 "terminators"]="<nf-arg|terminators>";
1888 </nf-chunk||<tuple|language|mode|terminators>>
1890 <\nf-chunk|xmode:set-submodes>
1891 <item>modes["<nf-arg|language>", "<nf-arg|mode>",
1892 \ "submodes"]="<nf-arg|submodes>";
1893 </nf-chunk||<tuple|language|mode|submodes>>
1895 A useful set of mode definitions for a nameless general C-type language is given here.
1898 Don't be confused by the double backslash escaping needed in awk. One set
1899 of escaping is for the string, and the second set of escaping is for the regex.
1903 TODO: Add =\<less\>\\mode{}\<gtr\> command which will allow us to signify
1906 that a string is a regex, and thus fangle will quote it for us.
1909 Sub-modes are identified by a backslash, a double or single quote, various
1910 bracket styles or a <verbatim|/*> comment; specifically: <verbatim|\\>
1911 <verbatim|"> <verbatim|'> <verbatim|{> <verbatim|(> <verbatim|[>
1914 For each of these sub-modes we must also identify a mode
1915 terminator, and any sub-modes or delimiters that may be entered<\footnote>
1916 Because we are using the sub-mode characters as the mode identifier it
1917 means we can't currently have a mode character dependent on its context;
1918 i.e. <verbatim|{> can't behave differently when it is inside <verbatim|[>.
1922 <\nf-chunk|common-mode-definitions>
1923 <item>modes[<nf-arg|language>, "", \ "submodes"]="\\\\\\\\\|\\"\|'\|{\|\\\\(\|\\\\[";
1924 </nf-chunk||<tuple|language>>
1926 In the default mode, a comma surrounded by un-important white space is a
1927 delimiter of language items<\footnote>
1928 whatever a <em|language item> might be
1929 </footnote>. Delimiters are used so that fangle can parse and recognise
1930 arguments individually.
1932 <\nf-chunk|common-mode-definitions>
1933 <item>modes[<nf-arg|language>, "", \ "delimiters"]=" *, *";
1934 </nf-chunk||language>
1936 and should pass this test: <todo|Why do the tests run in "(" mode and not ""?>
1939 <\nf-chunk|test:mode-definitions>
1940 <item>parse_chunk_args("c-like", "1,2,3", a, "");
1942 <item>if (a[1] != "1") e++;
1944 <item>if (a[2] != "2") e++;
1946 <item>if (a[3] != "3") e++;
1948 <item>if (length(a) != 3) e++;
1950 <item><nf-ref|pca-test.awk:summary|>
1954 <item>parse_chunk_args("c-like", "joe, red", a, "");
1956 <item>if (a[1] != "joe") e++;
1958 <item>if (a[2] != "red") e++;
1960 <item>if (length(a) != 2) e++;
1962 <item><nf-ref|pca-test.awk:summary|>
1966 <item>parse_chunk_args("c-like", "${colour}", a, "");
1968 <item>if (a[1] != "${colour}") e++;
1970 <item>if (length(a) != 1) e++;
1972 <item><nf-ref|pca-test.awk:summary|>
1975 <subsection|Backslash>
1977 The backslash mode has no submodes or delimiters, and is terminated by any
1978 character. Note that we are not so much interested in evaluating or
1979 interpolating content as we are in delineating content. It is no matter
1980 that a double backslash (<verbatim|\\\\>) may represent a single backslash
1981 while a backslash-newline may represent white space, but it does matter
1982 that the newline in a backslash newline should not be able to terminate a C
1983 pre-processor statement; and so the newline will be consumed by the
1984 backslash terminator however it may ultimately be interpreted.
1986 <\nf-chunk|common-mode-definitions>
1987 <item>modes[<nf-arg|language>, "\\\\", "terminators"]=".";
1990 <subsection|Strings>
1992 Common languages support two kinds of string quoting: double quotes and single quotes.
1995 In a string we have one special mode, which is the backslash. This may
1996 escape an embedded quote and prevent us thinking that it should terminate the string.
1999 <\nf-chunk|mode:common-string>
2000 <item>modes[<nf-arg|language>, <nf-arg|quote>, "submodes"]="\\\\\\\\";
2001 </nf-chunk||<tuple|language|quote>>
2003 Otherwise, the string will be terminated by the same character that commenced it.
2006 <\nf-chunk|mode:common-string>
2007 <item>modes[<nf-arg|language>, <nf-arg|quote>,
2008 "terminators"]=<nf-arg|quote>;
2009 </nf-chunk||language>
2011 In C-type languages, certain escape sequences exist in strings. We need to
2012 define a mechanism to encode any chunks included in this mode using those
2013 escape sequences. These are expressed in two parts: s meaning search, and r meaning replace.
2016 The first substitution is to replace a backslash with a double backslash.
2017 We do this first as other substitutions may introduce a backslash which we
2018 would not then want to escape again here.
2020 Note: Backslashes need double-escaping in the search pattern but not in the
2021 replacement string, hence we are replacing a literal <verbatim|\\> with a
2022 literal <verbatim|\\\\>.
2024 <\nf-chunk|mode:common-string>
2025 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2026 ++escapes[<nf-arg|language>, <nf-arg|quote>], "s"]="\\\\\\\\";
2028 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2029 \ \ escapes[<nf-arg|language>, <nf-arg|quote>], "r"]="\\\\\\\\";
2030 </nf-chunk||language>
2032 If the quote character occurs in the text, it should be preceded by a
2033 backslash, otherwise it would terminate the string unexpectedly.
2035 <\nf-chunk|mode:common-string>
2036 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2037 ++escapes[<nf-arg|language>, <nf-arg|quote>], "s"]=<nf-arg|quote>;
2039 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2040 \ \ escapes[<nf-arg|language>, <nf-arg|quote>], "r"]="\\\\"
2042 </nf-chunk||language>
2044 Any newlines in the string, must be replaced by <verbatim|\\n>.
2046 <\nf-chunk|mode:common-string>
2047 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2048 ++escapes[<nf-arg|language>, <nf-arg|quote>], "s"]="\\n";
2050 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2051 \ \ escapes[<nf-arg|language>, <nf-arg|quote>], "r"]="\\\\n";
2052 </nf-chunk||language>
2054 For the common modes, we define this string handling for double and single quotes.
2057 <\nf-chunk|common-mode-definitions>
2058 <item><nf-ref|mode:common-string|<tuple|<nf-arg|language>|"\\"">>
2060 <item><nf-ref|mode:common-string|<tuple|<nf-arg|language>|"'">>
2063 Working strings should pass this test:
2065 <\nf-chunk|test:mode-definitions>
2066 <item>parse_chunk_args("c-like", "say \\"I said, \\\\\\"Hello, how are
2067 you\\\\\\".\\", for me", a, "");
2069 <item>if (a[1] != "say \\"I said, \\\\\\"Hello, how are you\\\\\\".\\"")
2072 <item>if (a[2] != "for me") e++;
2074 <item>if (length(a) != 2) e++;
2076 <item><nf-ref|pca-test.awk:summary|>
2079 <subsection|Parentheses, Braces and Brackets>
2081 Where quotes are closed by the same character, parentheses, brackets and
2082 braces are closed by an alternate character.
2084 <\nf-chunk|mode:common-brackets>
2085 <item>modes[<nf-arg|language>, <nf-arg|open>, \ "submodes"
2086 ]="\\\\\\\\\|\\"\|{\|\\\\(\|\\\\[\|'\|/\\\\*";
2088 <item>modes[<nf-arg|language>, <nf-arg|open>, \ "delimiters"]=" *, *";
2090 <item>modes[<nf-arg|language>, <nf-arg|open>,
2091 \ "terminators"]=<nf-arg|close>;
2092 </nf-chunk||<tuple|language|open|close>>
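The delimiter regex <verbatim| *, *> can be seen splitting a bracketed argument list with a small awk one-liner (illustrative only; fangle's parser does this incrementally as it tracks modes):

```shell
# split(..., / *, */) mirrors the "delimiters" regex above: commas with
# optional surrounding spaces separate the arguments.
out=$(echo 'things[x] , 2,3' | awk '{
  n = split($0, parts, / *, */)
  for (i = 1; i <= n; i++) print parts[i]
}')
printf '%s\n' "$out"
```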
2094 Note that the open is NOT a regex but the close token IS. <todo|When we can
2095 quote regex we won't have to put the slashes in here>
2097 <\nf-chunk|common-mode-definitions>
2098 <item><nf-ref|mode:common-brackets|<tuple|<nf-arg|language>|"{"|"}">>
2100 <item><nf-ref|mode:common-brackets|<tuple|<nf-arg|language>|"["|"\\\\]">>
2102 <item><nf-ref|mode:common-brackets|<tuple|<nf-arg|language>|"("|"\\\\)">>
2105 <subsection|Customizing Standard Modes>
2107 <\nf-chunk|mode:add-submode>
2108 <item>modes[<nf-arg|language>, <nf-arg|mode>, "submodes"] =
2109 modes[<nf-arg|language>, <nf-arg|mode>, "submodes"] "\|"
2111 </nf-chunk||<tuple|language|mode|submode>>
2113 <\nf-chunk|mode:add-escapes>
2114 <item>escapes[<nf-arg|language>, <nf-arg|mode>,
2115 ++escapes[<nf-arg|language>, <nf-arg|mode>], "s"]=<nf-arg|search>;
2117 <item>escapes[<nf-arg|language>, <nf-arg|mode>,
2118 \ \ escapes[<nf-arg|language>, <nf-arg|mode>], "r"]=<nf-arg|replace>;
2119 </nf-chunk||<tuple|language|mode|search|replace>>
2123 <subsection|Comments>
2125 We can define <verbatim|/* comment */> style comments and
2126 <verbatim|//comment> style comments to be added to any language:
2128 <\nf-chunk|mode:multi-line-comments>
2129 <item><nf-ref|mode:add-submode|<tuple|<nf-arg|language>|""|"/\\\\*">>
2131 <item>modes[<nf-arg|language>, "/*", "terminators"]="\\\\*/";
2132 </nf-chunk||<tuple|language>>
2134 <\nf-chunk|mode:single-line-slash-comments>
2135 <item><nf-ref|mode:add-submode|<tuple|<nf-arg|language>|""|"//">>
2137 <item>modes[<nf-arg|language>, "//", "terminators"]="\\n";
2139 <item><nf-ref|mode:add-escapes|<tuple|<nf-arg|language>|"//"|"\\n"|"\\n//">>
2140 </nf-chunk||language>
2142 We can also define <verbatim|# comment> style comments (as used in awk and
2143 shell scripts) in a similar manner.
2145 <todo|I'm having to use # for hash and \textbackslash{} for \ and have
2146 hacky work-arounds in the parser for now>
2148 <\nf-chunk|mode:add-hash-comments>
2149 <item><nf-ref|mode:add-submode|<tuple|<nf-arg|language>|""|"#">>
2151 <item>modes[<nf-arg|language>, "#", "terminators"]="\\n";
2153 <item><nf-ref|mode:add-escapes|<tuple|<nf-arg|language>|"#"|"\\n"|"\\n#">>
2154 </nf-chunk||<tuple|language>>
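The effect of the <verbatim|\\n> to <verbatim|\\n#> escape can be previewed with bash substitution (a sketch; the real escape is installed by <nf-ref|mode:add-hash-comments|> above):

```shell
# Replacing each newline with newline-plus-hash keeps every line of an
# included multi-line text inside the comment.
nl=$'\n'
text=$'Now is the time for\nthe quick brown fox'
esc="# Comment: ${text//$nl/$nl#}"
printf '%s\n' "$esc"
```

Each continuation line comes out re-prefixed with <verbatim|#>, just as in the comment-quote test later in this section.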
2156 In C, the <verbatim|#> denotes pre-processor directives which can be extended over multiple lines with a trailing backslash.
2159 <\nf-chunk|mode:add-hash-defines>
2160 <item><nf-ref|mode:add-submode|<tuple|<nf-arg|language>|""|"#">>
2162 <item>modes[<nf-arg|language>, "#", "submodes" ]="\\\\\\\\";
2164 <item>modes[<nf-arg|language>, "#", "terminators"]="\\n";
2166 <item><nf-ref|mode:add-escapes|<tuple|<nf-arg|language>|"#"|"\\n"|"\\\\\\\\\\n">>
2167 </nf-chunk||<tuple|language>>
2169 <\nf-chunk|mode:quote-dollar-escape>
2170 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2171 ++escapes[<nf-arg|language>, <nf-arg|quote>], "s"]="\\\\$";
2173 <item>escapes[<nf-arg|language>, <nf-arg|quote>,
2174 \ \ escapes[<nf-arg|language>, <nf-arg|quote>], "r"]="\\\\$";
2175 </nf-chunk||<tuple|language|quote>>
2177 We can add these definitions to various languages
2179 <\nf-chunk|mode-definitions>
2180 <item><nf-ref|common-mode-definitions|<tuple|"c-like">>
2184 <item><nf-ref|common-mode-definitions|<tuple|"c">>
2186 <item><nf-ref|mode:multi-line-comments|<tuple|"c">>
2188 <item><nf-ref|mode:single-line-slash-comments|<tuple|"c">>
2190 <item><nf-ref|mode:add-hash-defines|<tuple|"c">>
2194 <item><nf-ref|common-mode-definitions|<tuple|"awk">>
2196 <item><nf-ref|mode:add-hash-comments|<tuple|"awk">>
2198 <item><nf-ref|mode:add-naked-regex|<tuple|"awk">>
2201 The awk definitions should allow a comment block like this:
2203 <nf-chunk|test:comment-quote|<item># Comment:
2204 <nf-ref|test:comment-text|>|awk|>
2206 <\nf-chunk|test:comment-text>
2207 <item>Now is the time for
2209 <item>the quick brown fox to bring lemonade
2214 and we would expect it to come out like this:
2216 <\nf-chunk|test:comment-quote:result>
2217 <item># Comment: Now is the time for
2219 <item>#the quick brown fox to bring lemonade
2224 The C definition for such a block should have it come out like this:
2226 <\nf-chunk|test:comment-quote:C-result>
2227 <item># Comment: Now is the time for\\
2229 <item>the quick brown fox to bring lemonade\\
2236 This pattern is incomplete, but is meant to detect naked regular expressions
2237 in awk and perl, e.g. <verbatim|/.*$/>; however the required capabilities are
2240 not yet implemented. Currently it only detects regexes anchored with <verbatim|^> as used in fangle.
2242 For full regex support, modes need to be named not after their starting
2243 character, but some other more fully qualified name.
2245 <\nf-chunk|mode:add-naked-regex>
2246 <item><nf-ref|mode:add-submode|<tuple|<nf-arg|language>|""|"/\\\\^">>
2248 <item>modes[<nf-arg|language>, "/^", "terminators"]="/";
2249 </nf-chunk||<tuple|language>>
2253 <\nf-chunk|mode-definitions>
2254 <item><nf-ref|common-mode-definitions|<tuple|"perl">>
2256 <item><nf-ref|mode:multi-line-comments|<tuple|"perl">>
2258 <item><nf-ref|mode:add-hash-comments|<tuple|"perl">>
2261 Still need to add <verbatim|s/>, submode <verbatim|/>, terminate both
2262 with <verbatim|//>. This is likely to be impossible as perl regexes can be delimited by almost any character.
2267 Shell single-quote strings are different from other strings and have no
2268 escape characters. The only special character is the single quote
2269 <verbatim|'> which always closes the string. Therefore we cannot use
2270 <nf-ref|common-mode-definitions|<tuple|"sh">> but we will invoke most of
2271 its definition apart from single-quote strings.\
2273 <\nf-chunk|mode-definitions>
2274 <item>modes["sh", "", \ "submodes"]="\\\\\\\\\|\\"\|'\|{\|\\\\(\|\\\\[\|\\\\$\\\\(";
2276 <item>modes["sh", "\\\\", "terminators"]=".";
2280 <item>modes["sh", "\\"", "submodes"]="\\\\\\\\\|\\\\$\\\\(";
2282 <item>modes["sh", "\\"", "terminators"]="\\"";
2284 <item>escapes["sh", "\\"", ++escapes["sh", "\\""], "s"]="\\\\\\\\";
2286 <item>escapes["sh", "\\"", \ \ escapes["sh", "\\""], "r"]="\\\\\\\\";
2288 <item>escapes["sh", "\\"", ++escapes["sh", "\\""], "s"]="\\"";
2290 <item>escapes["sh", "\\"", \ \ escapes["sh", "\\""], "r"]="\\\\" "\\"";
2292 <item>escapes["sh", "\\"", ++escapes["sh", "\\""], "s"]="\\n";
2294 <item>escapes["sh", "\\"", \ \ escapes["sh", "\\""], "r"]="\\\\n";
2298 <item>modes["sh", "'", "terminators"]="'";
2300 <item>escapes["sh", "'", ++escapes["sh", "'"], "s"]="'";
2302 <item>escapes["sh", "'", \ \ escapes["sh", "'"], "r"]="'\\\\'" "'";
2304 <item><nf-ref|mode:common-brackets|<tuple|"sh"|"$("|"\\\\)">>
2306 <item><nf-ref|mode:add-tunnel|<tuple|"sh"|"$("|"">>
2308 <item><nf-ref|mode:common-brackets|<tuple|"sh"|"{"|"}">>
2310 <item><nf-ref|mode:common-brackets|<tuple|"sh"|"["|"\\\\]">>
2312 <item><nf-ref|mode:common-brackets|<tuple|"sh"|"("|"\\\\)">>
2314 <item><nf-ref|mode:add-hash-comments|<tuple|"sh">>
2316 <item><nf-ref|mode:quote-dollar-escape|<tuple|"sh"|"\\"">>
2319 The definition of add-tunnel is:
2321 <\nf-chunk|mode:add-tunnel>
2322 <item>escapes[<nf-arg|language>, <nf-arg|mode>,
2323 ++escapes[<nf-arg|language>, <nf-arg|mode>], "tunnel"]=<nf-arg|tunnel>;
2324 </nf-chunk||<tuple|language|mode|tunnel>>
2328 BUGS: makefile tab mode is terminated by newline, but chunks never end in a
2329 newline! So tab mode is never closed unless there is a trailing blank line!
2331 For makefiles, we currently recognize 2 modes: the <em|null> mode and
2332 <nf-tab> mode, which is tabbed mode and contains the makefile recipe.\
2336 <\nf-chunk|mode-definitions>
2337 <item>modes["make", "", \ "submodes"]="<nf-tab>";
2340 In the <em|null> mode the only escape is <verbatim|$> which must be
2341 converted to <verbatim|$$>, and hash-style comments. POSIX requires that
2342 line-continuations extend hash-style comments and so fangle-style
2343 transformations to replicate the hash at the start of each line is not
2344 strictly required, however it is harmless, easier to read, and required by
2345 some implementations of <verbatim|make> which do not implement POSIX
2346 requirements correctly.
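The doubling itself is easy to preview with bash substitution (a sketch; the escape is installed by the chunk that follows):

```shell
# In null mode every $ in included text must become $$ so that make
# passes a literal $ through to the shell.
text='echo $HOME in $PWD'
esc=${text//'$'/'$$'}
printf '%s\n' "$esc"
```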
2348 <\nf-chunk|mode-definitions>
2349 <item>escapes["make", "", ++escapes["make", ""], "s"]="\\\\$";
2351 <item>escapes["make", "", escapes["make", ""], "r"]="$$";
2353 <item><nf-ref|mode:add-hash-comments|<tuple|"make">>
2356 Tabbed mode is harder to manage, as the GNU Make Manual says in the section
2357 on <hlink|splitting lines|http://www.gnu.org/s/hello/manual/make/Splitting-Lines.html>.
2358 There is no obvious way to escape a multi-line text that occurs as part of
2361 Traditionally, if the newlines in the shell script all occur at points of
2362 top-level shell syntax, then we could replace them with <verbatim|
2363 ;\\n<nf-tab>>and largely get the right effect.
2365 <\with|par-columns|2>
2366 <\nf-chunk|test:make:1>
2367 <label|test-make-line-quoting><item>all:
2369 <item><nf-tab>echo making
2371 <item><nf-tab><nf-ref|test:make:1-inc|$@>
2378 <\nf-chunk|test:make:1-inc>
2379 <item>if test "<nf-arg|target>" = "all"
2381 <item>then echo yes, all
2383 <item>else echo "<nf-arg|target>" \| sed -e '/^\\//{
2385 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p;s/^/../
2387 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }'
2390 </nf-chunk|sh|<tuple|target>>
2393 The two chunks above could reasonably produce something like this:
2395 <\nf-chunk|test:make:1.result.bad>
2398 <item><nf-tab>echo making
2400 <item><nf-tab>if test "$@" = "all" ;\\
2402 <item><nf-tab>then echo yes, all ;\\
2404 <item><nf-tab>else echo "$@" \| sed -e '/^\\//{ ;\\
2406 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p;s/^/../
2409 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }' ;\\
2414 However <verbatim|;\\> is not a proper continuation inside a multi-line sed
2415 script. There is no simple continuation that fangle could use <emdash> and
2416 in any case it would depend on what type of quote marks were used in the
2417 bash that contained the sed.\
2419 We would prefer to use a more intuitive single backslash at the end of the
2420 line, giving these results.
2422 <\nf-chunk|test:make:1.result>
2425 <item><nf-tab>echo making
2427 <item><nf-tab>if test "$$@" = "all"\\
2429 <item><nf-tab> then echo yes, all\\
2431 <item><nf-tab> else echo "$$@" \| sed -e '/^\\//{\\
2433 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p;s/^/../\\
2435 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }'\\
2440 The difficulty lies in the way that make handles the recipe. Each line of
2441 the recipe is invoked as a separate shell command (using <verbatim|$(SHELL)
2442 -c>) unless the last character of the line was a backslash. In such a case,
2443 the backslash and the newline and the next line are handed to the shell
2444 (although the tab character that prefixes the next line is stripped).
2446 This behaviour makes it impossible to hand a newline character to the shell
2447 unless it is prefixed by a backslash. If an included shell fragment
2448 contained strings with literal newline characters then there would be no
2449 easy way to escape these and preserve the value of the string.
2451 A different style of makefile construction might be used <emdash> the
2452 recipe could be stored in a <hlink|target specific
2453 variable|http://www.gnu.org/s/hello/manual/make/Target_002dspecific.html>
2454 which contains the recipe with a more normal escape mechanism.
2456 A better solution is to use a shell helper that strips the back-slash which
2457 precedes the newline character and then passes the arguments to the normal
2460 Because this is a simple operation and because bash is so flexible, this
2461 can be managed in a single line <em|within the makefile itself.>
2463 As a newline will only exist when preceded by the backslash, and as the
2464 purpose of the backslash is to protect the newline, all that is needed is to
2465 remove any backslash that is followed by a newline.
2467 Bash is capable of doing this with its pattern substitution. If
2468 <verbatim|A=123:=456:=789> then <verbatim|${A//:=/=}> will be
2469 <verbatim|123=456=789>. We don't want to perform the substitution in just a
2470 single variable but in all of <verbatim|"$@">; however bash will repeat the
2471 substitution over all members of an array, so this is done for us.
2474 In bash, <verbatim|$'\\012'> represents the newline character (expressed as
2475 an octal escape sequence), so this expression will replace
2476 backslash-newline with a single newline.
2478 <\nf-chunk|fix-requote-newline>
2479 <item>"${@//\\\\$'\\012'/$'\\012'}"
2482 We use this as part of a larger statement which will invoke such a
2483 transformed command line using any particular shell. The trailing
2484 <verbatim|--> prevents any options in the command line from being
2485 interpreted as options to our bash command <emdash> instead they will be
2486 transformed and passed to the inner shell which is invoked with
2487 <verbatim|exec> so that our fixup-shell does not hang around longer than is necessary.
2490 <\nf-chunk|fix-make-shell>
2491 <item>bash -c 'exec <nf-arg|shell> <nf-ref|fix-requote-newline|>' --
2492 </nf-chunk|sh|<tuple|shell>>
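The substitution at the heart of <nf-ref|fix-requote-newline|> can be tried on an ordinary variable (a sketch; the real rule maps it over every member of <verbatim|"$@">):

```shell
# Remove each backslash that protects a newline, restoring the
# multi-line shell text that make handed over as one logical line.
nl=$'\n'
line='echo a\
b'
fixed=${line//\\$nl/$nl}
printf '%s\n' "$fixed"
```

Only backslash-newline pairs are touched; any other backslash in the text survives intact.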
2494 We can then include a line like this in our makefiles. We should rather
2495 pass <verbatim|$(SHELL)> as the chunk argument than <verbatim|bash>, but
2496 currently fangle will not track which nested-inclusion level the argument
2497 comes from and will quote the <verbatim|$> in <verbatim|$(SHELL)> in the
2498 same way it quotes a <verbatim|$> that may occur in the bash script, so
2499 this would come out as <verbatim|$$(SHELL)> and have the wrong effect.
2501 <\nf-chunk|make-fix-make-shell>
2502 <item>SHELL:=<nf-ref|fix-make-shell|<tuple|bash>>
2505 The full escaped and quoted text with <verbatim|$(SHELL)> and suitable for
2506 inclusion in a Makefile is:
2509 SHELL:=bash -c 'exec $(SHELL) "$${@//\\\\$$'\\''\\012'\\''/$$'\\''\\012'\\''}"'
2513 Based on this, we just need to escape newlines (in tabbed mode) with a backslash.
2516 Note that terminators apply to literal text, not to included text, while
2517 escapes apply to included text, not to literal text; also note that the tab
2518 character is hard-wired into the pattern, and that the make variable
2519 <verbatim|.RECIPEPREFIX> might change this to something else.
2521 <\nf-chunk|mode-definitions>
2522 <item>modes["make", "<nf-tab>", "terminators"]="\\\\n";
2524 <item>escapes["make", "<nf-tab>", ++escapes["make", "<nf-tab>"],
2527 <item>escapes["make", "<nf-tab>", \ \ escapes["make", "<nf-tab>"],
2528 "r"]="\\\\\\n<nf-tab>";
2531 With this improved quoting, the test on <reference|test-make-line-quoting>
2532 will actually produce this:
2534 <\nf-chunk|test:make:1.result-actual>
2537 <item><nf-tab>echo making
2539 <item><nf-tab>if test "$$@" = "all"\\
2541 <item><nf-tab> then echo yes, all\\
2543 <item><nf-tab> else echo not all\\
2548 The chunk argument <verbatim|$@> has been quoted (which would have been
2549 fine if we were passing the name of a shell variable), and the other shell
2550 lines are (harmlessly) indented by 1 space as part of fangle
2551 indent-matching, which should have taken into account the expanded tab size,
2552 and should generally take into account the expanded prefix of the line
2553 whose indent it is trying to match, but which in this case we want to have no effect.
2557 The $@ was passed from a make fragment. In what cases should it be quoted?
2560 Do we need to track the language of sources of arguments?
2563 A more ugly work-around until this problem can be solved would be to use an intermediate shell variable:
2566 <\nf-chunk|test:make:2>
2569 <item><nf-tab>echo making
2571 <item><nf-tab>ARG="$@"; <nf-ref|test:make:1-inc|$ARG>
2574 which produces this output which is more useful (because it works):
2576 <\nf-chunk|test:make:2.result>
2579 <item><nf-tab>echo making
2581 <item><nf-tab>ARG="$@"; if test "$$ARG" = "all"\\
2583 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ then echo yes, all\\
2585 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ else echo "$$ARG" \| sed -e '/^\\//{\\
2587 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ p;s/^/../\\
2589 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }'\\
2591 <item><nf-tab> \ \ \ \ \ \ \ \ \ \ fi
2594 <section|Quoting scenarios>
2596 <subsection|Direct quoting>
2598 Here we give examples of various quoting scenarios and discuss what the
2599 expected outcome might be and how this could be obtained.
2601 <\with|par-columns|2>
2602 <\nf-chunk|test:q:1>
2603 <item>echo "$(<nf-ref|test:q:1-inc|>)"
2606 <\nf-chunk|test:q:1-inc>
2611 Should this example produce <verbatim|echo "$(echo "hello")"> or
2612 <verbatim|echo "$(echo \\"hello\\")"> ?
2614 This depends on what the author intended, but we must provide a way to
2615 express that intent.
2617 We might argue that as both chunks have <verbatim|lang=sh> the intent must
2618 have been to quote the included chunk <emdash> but consider that this might
2619 be shell script that writes shell script.
2621 If <nf-ref|test:q:1-inc|> had <verbatim|lang=text> then it certainly would
2622 have been right to quote it, which leads us to ask: in what ways can we
2623 reduce quoting if lang of the included chunk is compatible with the lang of
2624 the including chunk?
2626 If we take a completely nested approach then even though <verbatim|$(> mode
2627 might do no quoting of its own, <verbatim|"> mode will still do its own
2628 quoting. We need a model where the nested <verbatim|$(> mode will prevent
2629 <verbatim|"> from quoting.
2631 This gives rise to the <em|tunneling> feature. In bash, the <verbatim|$(>
2632 gives rise to a new top-level parsing scenario, so we need to enter the
2633 <em|null> mode, and also ignore any quoting, and then undo this when the
2634 <verbatim|$(> mode is terminated by the corresponding close <verbatim|)>.
2636 We shall say that tunneling is when a mode in a language ignores other
2637 modes in the same language and arrives back at an earlier <em|null> mode of the same language.
2640 In example <nf-ref|test:q:1|> above, the nesting of modes is: <em|null>,
2641 <verbatim|">, <verbatim|$(>
2643 When mode <verbatim|$(> is commenced, the stack of nested modes will be
2644 traversed. If the <em|null> mode can be found in the same language, without
2645 the language varying, then a tunnel will be established so that the
2646 intervening modes, <verbatim|"> in this case, can be skipped when the modes
2647 are enumerated to quote the text being emitted.
2649 In such a case, the correct result would be:
2651 <\nf-chunk|test:q:1.result>
2652 <item>echo "$(echo "hello")"
2655 <section|Some tests>
2657 Also, the parser must return any spare text at the end that has not been
2658 processed due to a mode terminator being found.
2660 <\nf-chunk|test:mode-definitions>
2661 <item>rest = parse_chunk_args("c-like", "1, 2, 3) spare", a, "(");
2663 <item>if (a[1] != 1) e++;
2665 <item>if (a[2] != 2) e++;
2667 <item>if (a[3] != 3) e++;
2669 <item>if (length(a) != 3) e++;
2671 <item>if (rest != " spare") e++;
2673 <item><nf-ref|pca-test.awk:summary|>
2676 We must also be able to parse the example given earlier.
2678 <\nf-chunk|test:mode-definitions>
2679 <item>parse_chunk_args("c-like", "things[x, y], get_other_things(a,
2680 \\"(all)\\"), 99", a, "(");
2682 <item>if (a[1] != "things[x, y]") e++;
2684 <item>if (a[2] != "get_other_things(a, \\"(all)\\")") e++;
2686 <item>if (a[3] != "99") e++;
2688 <item>if (length(a) != 3) e++;
2690 <item><nf-ref|pca-test.awk:summary|>
2693 <section|A non-recursive mode tracker>
2695 As each chunk is output a new mode tracker for that language is initialized
2696 in its normal state. As text is output for that chunk the output mode is
2697 tracked. When a new chunk is included, a transformation appropriate to that
2698 mode is selected and pushed onto a stack of transformations. Any text to be
2699 output is passed through this stack of transformations.
2701 It remains to consider if the chunk-include function should return its
2702 generated text so that the caller can apply any transformations (and
2703 formatting), or if it should apply the stack of transformations itself.
2705 Note that the transformed included text should have the property of not
2706 being able to change the mode in the current chunk.
2708 <todo|Note chunk parameters should probably also be transformed>
2710 <subsection|Constructor>
2712 The mode tracker holds its state in a stack based on a numerically indexed
2713 hash. This function, when passed an empty hash, will initialize it.
2715 <\nf-chunk|new_mode_tracker()>
2716 <item>function new_mode_tracker(context, language, mode) {
2718 <item> \ context[""] = 0;
2720 <item> \ context[0, "language"] = language;
2722 <item> \ context[0, "mode"] = mode;
2727 Awk functions cannot return an array, but arrays are passed by reference.
2728 Because of this we must create the array first and pass it in, so we have a
2729 fangle macro to do this:
2731 <\nf-chunk|new-mode-tracker>
2732 <item><nf-ref|awk-delete-array|<tuple|<nf-arg|context>>>
2734 <item>new_mode_tracker(<nf-arg|context>, <nf-arg|language>,
2736 </nf-chunk|awk|<tuple|context|language|mode>>
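That awk arrays are passed by reference (while scalars are copied) is easy to confirm with a standalone one-liner:

```shell
# The callee writes into the array; the caller sees the new element.
out=$(awk '
function fill(arr) { arr["k"] = 42 }
BEGIN { fill(a); print a["k"] }')
printf '%s\n' "$out"
```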
2738 <subsection|Management>
2740 And for tracking modes, we dispatch to a mode-tracker action based on the
2743 <\nf-chunk|mode_tracker>
2744 <item>function push_mode_tracker(context, language, mode,
2746 <item> \ # local vars
2752 <item> \ if (! ("" in context)) {
2754 <item> \ \ \ <nf-ref|new-mode-tracker|<tuple|context|language|mode>>
2756 <item> \ \ \ return;
2760 <item> \ \ \ top = context[""];
2762 <item># \ \ \ if (context[top, "language"] == language && mode=="") mode
2763 = context[top, "mode"];
2765 <item> \ \ \ if (context[top, "language"] == language && context[top,
2766 "mode"] == mode) return top - 1;
2768 <item> \ \ \ old_top = top;
2772 <item> \ \ \ context[top, "language"] = language;
2774 <item> \ \ \ context[top, "mode"] = mode;
2776 <item> \ \ \ context[""] = top;
2780 <item> \ return old_top;
2785 <\nf-chunk|mode_tracker>
2786 <item>function dump_mode_tracker(context, \
2792 <item> \ for(c=0; c \<less\>= context[""]; c++) {
2794 <item> \ \ \ printf(" %2d \ \ %s:%s\\n", c, context[c, "language"],
2795 context[c, "mode"]) \<gtr\> "/dev/stderr";
2797 <item># \ \ \ for(d=1; ( (c, "values", d) in context); d++) {
2799 <item># \ \ \ \ \ printf(" \ \ %2d %s\\n", d, context[c, "values", d])
2800 \<gtr\> "/dev/stderr";
2809 <\nf-chunk|mode_tracker>
2810 <item>function pop_mode_tracker(context, context_origin)
2814 <item> \ if ( (context_origin) && ("" in context) && context[""] !=
2815 (1+context_origin) && context[""] != context_origin) {
2817 <item> \ \ \ print "Context level: " context[""] ", origin: "
2818 context_origin "\\n" \<gtr\> "/dev/stderr"
2820 <item> \ \ \ return 0;
2824 <item> \ context[""] = context_origin;
2831 This implies that any chunk must be syntactically whole; for instance, this is fine:
2834 <\nf-chunk|test:whole-chunk>
2837 <item> \ <nf-ref|test:say-hello|>
2842 <\nf-chunk|test:say-hello>
2843 <item>print "hello";
2846 But this is not fine; the chunk <nf-ref|test:hidden-else|> is not properly self-contained:
2849 <\nf-chunk|test:partial-chunk>
2852 <item> \ <nf-ref|test:hidden-else|>
2857 <\nf-chunk|test:hidden-else>
2858 <item> \ print "I'm fine";
2862 <item> \ print "I'm not";
2865 These tests will check for correct behaviour:
2867 <\nf-chunk|test:cromulence>
2868 <item>echo Cromulence test
2870 <item>passtest $FANGLE -Rtest:whole-chunk $TXT_SRC &\<gtr\>/dev/null \|\|
2871 ( echo "Whole chunk failed" && exit 1 )
2873 <item>failtest $FANGLE -Rtest:partial-chunk $TXT_SRC &\<gtr\>/dev/null
2874 \|\| ( echo "Partial chunk failed" && exit 1 )
2877 <subsection|Tracker>
2879 We must avoid recursion as a language construct because we intend to employ
2880 mode-tracking to track language mode of emitted code, and the code is
2881 emitted from a function which is itself recursive, so instead we implement
2882 pseudo-recursion using our own stack based on a hash.
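The idea of replacing recursion with an explicit stack can be sketched with a toy tracker that pushes on every opening bracket and pops on every closing one (fangle's real tracker stores a language and mode at each stack level):

```shell
# depth plays the role of context[""]: the current top of the stack.
out=$(echo 'a(b(c)d)e(f' | awk '{
  depth = 0
  for (i = 1; i <= length($0); i++) {
    ch = substr($0, i, 1)
    if (ch == "(") depth++       # push a nested mode
    else if (ch == ")") depth--  # pop back to the enclosing mode
  }
  print depth                    # modes still open at end of fragment
}')
printf '%s\n' "$out"
```

A non-zero result at the end of a fragment is exactly the "mode context which must be preserved for the next fragment" discussed below.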
2884 <\nf-chunk|mode_tracker()>
2885 <item>function mode_tracker(context, text, values,\
2887 <item> \ # optional parameters
2889 <item> \ # local vars
2891 <item> \ mode, submodes, language,
2893 <item> \ cindex, c, a, part, item, name, result, new_values, new_mode,\
2895 <item> \ delimiters, terminators)
2900 We could be re-commencing with a valid context, so we need to set up the
2901 state according to the last context.
2903 <\nf-chunk|mode_tracker()>
2904 <item> \ cindex = context[""] + 0;
2906 <item> \ mode = context[cindex, "mode"];
2908 <item> \ language = context[cindex, "language" ];
2911 First we construct a single large regex combining the possible sub-modes
2912 for the current mode along with the terminators for the current mode.
2914 <\nf-chunk|parse_chunk_args-reset-modes>
2915 <item> \ submodes=modes[language, mode, "submodes"];
2919 <item> \ if ((language, mode, "delimiters") in modes) {
2921 <item> \ \ \ delimiters = modes[language, mode, "delimiters"];
2923 <item> \ \ \ if (length(submodes)\<gtr\>0) submodes = submodes "\|";
2925 <item> \ \ \ submodes=submodes delimiters;
2927 <item> \ } else delimiters="";
2929 <item> \ if ((language, mode, "terminators") in modes) {
2931 <item> \ \ \ terminators = modes[language, mode, "terminators"];
2933 <item> \ \ \ if (length(submodes)\<gtr\>0) submodes = submodes "\|";
2935 <item> \ \ \ submodes=submodes terminators;
2937 <item> \ } else terminators="";
2940 If we don't find anything to match on --- probably because the language is
2941 not supported --- then we return the entire text without matching anything.
2943 <\nf-chunk|parse_chunk_args-reset-modes>
2944 <item> if (! length(submodes)) return text;
2947 <\nf-chunk|mode_tracker()>
2948 <item><nf-ref|parse_chunk_args-reset-modes|>
2951 We then iterate the text (until there is none left) looking for sub-modes
2952 or terminators in the regex.
2954 <\nf-chunk|mode_tracker()>
2955 <item> \ while((cindex \<gtr\>= 0) && length(text)) {
2957 <item> \ \ \ if (match(text, "(" submodes ")", a)) {
2960 A bug that creeps in regularly during development is bad regexes of zero
2961 length which result in an infinite loop (as no text is consumed), so I
2962 catch that right away with this test.
2964 <\nf-chunk|mode_tracker()>
2965 <item> \ \ \ \ \ if (RLENGTH\<less\>1) {
2967 <item> \ \ \ \ \ \ \ error(sprintf("Internal error, matched zero length
2968 submode, should be impossible - likely regex computation error\\n" \\
2970 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ "Language=%s\\nmode=%s\\nmatch=%s\\n",
2971 language, mode, submodes));
2976 part is defined as the text up to the sub-mode or terminator, and this is
2977 appended to item --- which is the current text being gathered. If a mode
2978 has a delimiter, then item is reset each time a delimiter is found.
2980 <math|<wide|<with|mode|prog|"><wide*|hello|\<wide-underbrace\>><rsub|item>,
2981 <wide*|there|\<wide-underbrace\>><rsub|item><with|mode|prog|">|\<wide-overbrace\>><rsup|item>,
2982 \ <wide|he said.|\<wide-overbrace\>><rsup|item>>
2984 <\nf-chunk|mode_tracker()>
2985 <item> \ \ \ \ \ part = substr(text, 1, RSTART -1);
2987 <item> \ \ \ \ \ item = item part;
2990 We must now determine what was matched. If it was a terminator, then we
2991 must restore the previous mode.
2993 <\nf-chunk|mode_tracker()>
2994 <item> \ \ \ \ \ if (match(a[1], "^" terminators "$")) {
2996 <item>#printf("%2d EXIT \ MODE [%s] by [%s] [%s]\\n", cindex, mode, a[1],
2997 text) \<gtr\> "/dev/stderr"
2999 <item> \ \ \ \ \ \ \ context[cindex, "values", ++context[cindex,
3002 <item> \ \ \ \ \ \ \ delete context[cindex];
3004 <item> \ \ \ \ \ \ \ context[""] = --cindex;
3006 <item> \ \ \ \ \ \ \ if (cindex\<gtr\>=0) {
3008 <item> \ \ \ \ \ \ \ \ \ mode = context[cindex, "mode"];
3010 <item> \ \ \ \ \ \ \ \ \ language = context[cindex, "language"];
3012 <item> \ \ \ \ \ \ \ \ \ <nf-ref|parse_chunk_args-reset-modes|>
3014 <item> \ \ \ \ \ \ \ }
3016 <item> \ \ \ \ \ \ \ item = item a[1];
3018 <item> \ \ \ \ \ \ \ text = substr(text, 1 + length(part) +
3024 If a delimiter was matched, then we must store the current item in the
3025 parsed values array, and reset the item.
3027 <\nf-chunk|mode_tracker()>
3028 <item> \ \ \ \ \ else if (match(a[1], "^" delimiters "$")) {
3030 <item> \ \ \ \ \ \ \ if (cindex==0) {
3032 <item> \ \ \ \ \ \ \ \ \ context[cindex, "values", ++context[cindex,
3035 <item> \ \ \ \ \ \ \ \ \ item = "";
3037 <item> \ \ \ \ \ \ \ } else {
3039 <item> \ \ \ \ \ \ \ \ \ item = item a[1];
3041 <item> \ \ \ \ \ \ \ }
3043 <item> \ \ \ \ \ \ \ text = substr(text, 1 + length(part) +
3049 Otherwise, if a new submode is detected (all submodes have terminators), we
3050 must create a nested parse context until we find the terminator for this mode.
3053 <\nf-chunk|mode_tracker()>
3054 <item> else if ((language, a[1], "terminators") in modes) {
3056 <item> \ \ \ \ \ \ \ #check if new_mode is defined
3058 <item> \ \ \ \ \ \ \ item = item a[1];
3060 <item>#printf("%2d ENTER MODE [%s] in [%s]\\n", cindex, a[1], text)
3061 \<gtr\> "/dev/stderr"
3063 <item> \ \ \ \ \ \ \ text = substr(text, 1 + length(part) +
3066 <item> \ \ \ \ \ \ \ context[""] = ++cindex;
3068 <item> \ \ \ \ \ \ \ context[cindex, "mode"] = a[1];
3070 <item> \ \ \ \ \ \ \ context[cindex, "language"] = language;
3072 <item> \ \ \ \ \ \ \ mode = a[1];
3074 <item> \ \ \ \ \ \ \ <nf-ref|parse_chunk_args-reset-modes|>
3076 <item> \ \ \ \ \ } else {
3078 <item> \ \ \ \ \ \ \ error(sprintf("Submode '%s' set unknown mode in
3079 text: %s\\nLanguage %s Mode %s\\n", a[1], text, language, mode));
3081 <item> \ \ \ \ \ \ \ text = substr(text, 1 + length(part) +
3089 In the final case, we parsed to the end of the string. If the string was
3090 complete, then we should have no nested mode context, but if the string was
3091 just a fragment we may have a mode context which must be preserved for the
3092 next fragment. Todo: Consideration ought to be given if sub-mode strings
3093 are split over two fragments.
3095 <\nf-chunk|mode_tracker()>
3098 <item> \ \ \ \ \ context[cindex, "values", ++context[cindex, "values"]] =
3101 <item> \ \ \ \ \ text = "";
3103 <item> \ \ \ \ \ item = "";
3111 <item> \ context["item"] = item;
3115 <item> \ if (length(item)) context[cindex, "values", ++context[cindex,
3118 <item> \ return text;
3123 <subsubsection|One happy chunk>
3125 All the mode tracker chunks are referred to here:
3127 <\nf-chunk|mode-tracker>
3128 <item><nf-ref|new_mode_tracker()|>
3130 <item><nf-ref|mode_tracker()|>
3133 <subsubsection|Tests>
3135 We can test this function like this:
3137 <\nf-chunk|pca-test.awk>
3138 <item><nf-ref|error()|>
3140 <item><nf-ref|mode-tracker|>
3142 <item><nf-ref|parse_chunk_args()|>
3146 <item> \ SUBSEP=".";
3148 <item> \ <nf-ref|mode-definitions|>
3152 <item> \ <nf-ref|test:mode-definitions|>
3157 <\nf-chunk|pca-test.awk:summary>
3160 <item> \ printf "Failed " e
3162 <item> \ for (b in a) {
3164 <item> \ \ \ print "a[" b "] =\<gtr\> " a[b];
3170 <item> \ print "Passed"
3179 which should give this output:
3181 <\nf-chunk|pca-test.awk-results>
3182 <item>a[foo.quux.quirk] =\<gtr\>\
3184 <item>a[foo.quux.a] =\<gtr\> fleeg
3186 <item>a[foo.bar] =\<gtr\> baz
3188 <item>a[etc] =\<gtr\>\
3190 <item>a[name] =\<gtr\> freddie
3193 <section|Escaping and Quoting>
3195 For the time being, and to get around <TeXmacs>'s inability to export a
3196 <kbd|TAB> character, the right arrow <with|mode|math|\<mapsto\>> (whose
3197 UTF-8 sequence is ...) is used in place of the tab character.
3201 Another special character, the left-arrow
3202 <with|mode|math|\<mapsfrom\>> with UTF-8 sequence 0xE2 0x86 0xA4, is used
3203 to strip any preceding white space as a way of un-tabbing and removing
3204 indent that has been applied <emdash> this is important for bash here
3205 documents, and the like. It's a filthy hack.
3207 <todo|remove the hack>
3209 <\nf-chunk|mode_tracker>
3212 <item>function untab(text) {
3214 <item> \ gsub("[[:space:]]*\\xE2\\x86\\xA4","", text);
3216 <item> \ return text;
3221 Each nested mode can optionally define a set of transforms to be applied to
3222 any text that is included from another language.
3224 This code can perform transforms from index c downwards.
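As a rough illustration of the idea (the escape pairs below are hypothetical stand-ins, not fangle's real tables), each enclosing mode contributes search/replace pairs that are applied in order to the included text:

```shell
awk 'BEGIN {
  # hypothetical escape pairs for a C-string context:
  s[1] = "\\\\"; r[1] = "\\\\\\\\";   # double every backslash first
  s[2] = "\"";   r[2] = "\\\\\"";     # then escape double quotes
  text = "path C:\\tmp \"x\"";
  for (cp = 1; cp <= 2; cp++) gsub(s[cp], r[cp], text);
  print text;   # the text is now safe inside a C string literal
}'
```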
3226 <\nf-chunk|mode_tracker>
3227 <item>function transform_escape(context, text, top,
3229 <item> \ c, cp, cpl, s, r)
3233 <item> \ for(c = top; c \<gtr\>= 0; c--) {
3235 <item> \ \ \ if ( (context[c, "language"], context[c, "mode"]) in
3238 <item> \ \ \ \ \ cpl = escapes[context[c, "language"], context[c,
3241 <item> \ \ \ \ \ for (cp = 1; cp \<less\>= cpl; cp ++) {
3243 <item> \ \ \ \ \ \ \ s = escapes[context[c, "language"], context[c,
3246 <item> \ \ \ \ \ \ \ r = escapes[context[c, "language"], context[c,
3249 <item> \ \ \ \ \ \ \ if (length(s)) {
3251 <item> \ \ \ \ \ \ \ \ \ gsub(s, r, text);
3253 <item> \ \ \ \ \ \ \ }
3255 <item> \ \ \ \ \ \ \ if ( (context[c, "language"], context[c, "mode"],
3256 cp, "t") in escapes ) {
3258 <item> \ \ \ \ \ \ \ \ \ quotes[src, "t"] = escapes[context[c,
3259 "language"], context[c, "mode"], cp, "t"];
3261 <item> \ \ \ \ \ \ \ }
3269 <item> \ return text;
3273 <item>function dump_escaper(quotes, r, cc) {
3275 <item> \ for(cc=1; cc\<less\>=c; cc++) {
3277 <item> \ \ \ printf("%2d s[%s] r[%s]\\n", cc, quotes[cc, "s"], quotes[cc,
3278 "r"]) \<gtr\> "/dev/stderr"
3285 <\nf-chunk|test:escapes>
3286 <item>echo escapes test
3288 <item>passtest $FANGLE -Rtest:comment-quote $TXT_SRC &\<gtr\>/dev/null
3289 \|\| ( echo "Comment-quote failed" && exit 1 )
3292 <chapter|Recognizing Chunks>
3294 Fangle recognizes noweb chunks, but as we also want better <LaTeX>
3295 integration we will recognize any of these:
3298 <item>notangle chunks matching the pattern
3299 <verbatim|^\<less\>\<less\>.*?\<gtr\>\<gtr\>=>
3301 <item>chunks beginning with <verbatim|\\begin{lstlisting}>, possibly
3302 with <verbatim|\\Chunk{...}> on the previous line
3304 <item>an older form I have used, beginning with
3305 <verbatim|\\begin{Chunk}[options]> --- also more suitable for plain
3306 <LaTeX> users<\footnote>
3307 Is there such a thing as plain <LaTeX>?
3311 <section|Chunk start>
3313 The variable chunking is used to signify that we are processing a code
3314 chunk and not document text. In such a state, input lines will be assigned
3315 to the current chunk; otherwise they are ignored.
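The overall shape of the recognizer can be sketched as a tiny awk state machine (using only the noweb-style markers for brevity; fangle recognizes several more forms, as listed above):

```shell
printf '%s\n' 'doc text' '<<hello>>=' 'x = 1' '@' 'more doc' |
awk '
  /^<<.*>>=$/ { chunking = 1; next }   # chunk start
  /^@$/       { chunking = 0; next }   # noweb-style chunk end
  chunking    { print }                # only chunk body lines survive
'
```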
3317 <subsection|<TeXmacs>>
3319 We don't handle <TeXmacs> files natively yet, but instead emit
3320 unicode character sequences to mark up the text-export file which we do
3323 These hacks detect the unicode character sequences and retro-fit in the old
3326 We convert <math|\<mapsto\>> into a tab character.
3328 <\nf-chunk|recognize-chunk>
3333 <item># \ gsub("\\n*$","");
3335 <item># \ gsub("\\n", " ");
3341 <item>/\\xE2\\x86\\xA6/ {
3343 <item> \ gsub("\\\\xE2\\\\x86\\\\xA6", "\\x09");
3348 <TeXmacs> back-tick handling is obscure, and a cut-n-paste back-tick from a
3349 shell window comes out as a unicode sequence<\footnote>
3350 that won't export to html, except as a NULL character (literal 0x00)
3351 </footnote> that is fixed-up here.
3353 <\nf-chunk|recognize-chunk>
3356 <item>/\\xE2\\x80\\x98/ {
3358 <item> \ gsub("\\\\xE2\\\\x80\\\\x98", "`");
3363 In the <TeXmacs> output, the start of a chunk will appear like this:
3365 <verbatim| \ 5b\<less\>example-chunk<key|^K>[1](arg1,<key|^K>
3366 arg2<key|^K><key|^K>), lang=C\<gtr\> <math|\<equiv\>>>
3368 We detect the start of a <TeXmacs> chunk by detecting the
3369 <math|\<equiv\>> symbol which occurs near the end of the line. We obtain
3370 the chunk name, the chunk parameters, and the chunk language.
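A portable sketch of that name extraction (using POSIX RSTART/RLENGTH rather than gawk's three-argument match, and eliding the trailing equivalence symbol):

```shell
printf '%s\n' '  5b<example-chunk[1](arg1, arg2), lang=C>' |
awk '{
  # find "<name[" and print the name between "<" and "["
  if (match($0, /<[^[ ]*\[/))
    print "chunk name: " substr($0, RSTART + 1, RLENGTH - 2);
}'
```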
3372 <\nf-chunk|recognize-chunk>
3375 <item>/\\xE2\\x89\\xA1/ {
3377 <item> \ if (match($0, "^ *([^[ ]* \|)\<less\>([^[
3378 ]*)\\\\[[0-9]*\\\\][(](.*)[)].*, lang=([^ ]*)\<gtr\>", line)) {
3380 <item> \ \ \ next_chunk_name=line[2];
3382 <item> \ \ \ get_texmacs_chunk_args(line[3], next_chunk_params);
3384 <item> \ \ \ gsub(ARG_SEPARATOR ",? ?", ";", line[3]);
3386 <item> \ \ \ params = "params=" line[3];
3388 <item> \ \ \ if ((line[4])) {
3390 <item> \ \ \ \ \ params = params ",language=" line[4]
3394 <item> \ \ \ get_tex_chunk_args(params, next_chunk_opts);
3396 <item> \ \ \ new_chunk(next_chunk_name, next_chunk_opts,
3399 <item> \ \ \ texmacs_chunking = 1;
3403 <item> \ \ \ # warning(sprintf("Unexpected chunk match: %s\\n", $_))
3412 <subsection|lstlistings>
3414 Our current scheme is to recognize the new lstlisting chunks, but these may
3415 be preceded by a <verbatim|\\Chunk> command which in <LyX> is a more
3416 convenient way to pass the chunk name to the
3417 <verbatim|\\begin{lstlisting}> command, and a more visible way to specify
3418 other <verbatim|lstset> settings.
3420 The arguments to the <verbatim|\\Chunk> command are a name, and then a
3421 comma-separated list of key-value pairs after the manner of
3422 <verbatim|\\lstset>. (In fact within the <LaTeX> <verbatim|\\Chunk> macro
3423 (section <reference|sub:The-chunk-command>) the text <verbatim|name=> is
3424 prefixed to the argument which is then literally passed to
3425 <verbatim|\\lstset>).
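Extracting just the name from a <verbatim|\\Chunk> line can be sketched like this (a simplification of the real pattern, which also captures the option list):

```shell
printf '%s\n' '\Chunk{init, language=awk}' |
awk '{
  # match "\Chunk{" plus the name, then cut off the 7-character prefix
  if (match($0, /^\\Chunk[{] *[^ ,}]*/))
    print "name: " substr($0, 8, RLENGTH - 7);
}'
```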
3427 <\nf-chunk|recognize-chunk>
3428 <item>/^\\\\Chunk{/ {
3430 <item> \ if (match($0, "^\\\\\\\\Chunk{ *([^ ,}]*),?(.*)}", line)) {
3432 <item> \ \ \ next_chunk_name = line[1];
3434 <item> \ \ \ get_tex_chunk_args(line[2], next_chunk_opts);
3443 We also make a basic attempt to parse the name out of the
3444 <verbatim|\\begin{lstlisting}[name=chunk-name]> text, otherwise we fall back to
3445 the name found in the previous chunk command. This attempt is very basic
3446 and doesn't support commas or spaces or square brackets as part of the
3447 chunkname. We also recognize <verbatim|\\begin{Chunk}> which is convenient
3448 for some users<\footnote>
3449 but not yet supported in the <LaTeX> macros
3452 <\nf-chunk|recognize-chunk>
3453 <item>/^\\\\begin{lstlisting}\|^\\\\begin{Chunk}/ {
3455 <item> \ if (match($0, "}.*[[,] *name= *{? *([^], }]*)", line)) {
3457 <item> \ \ \ new_chunk(line[1]);
3461 <item> \ \ \ new_chunk(next_chunk_name, next_chunk_opts);
3465 <item> \ chunking=1;
3472 <section|Chunk Body>
3474 <subsection|<TeXmacs>>
3476 A chunk body in <TeXmacs> ends with <verbatim|\|________>... if it is the
3477 final chunklet of a chunk, or if there are further chunklets it ends with
3478 <verbatim|\|\\/\\/\\/>... which is a depiction of a jagged line of torn
3481 <\nf-chunk|recognize-chunk>
3482 <item>/^ *\\\|____________*/ && texmacs_chunking {
3484 <item> \ active_chunk="";
3486 <item> \ texmacs_chunking=0;
3488 <item> \ chunking=0;
3492 <item>/^ *\\\|\\/\\\\/ && texmacs_chunking {
3494 <item> \ texmacs_chunking=0;
3496 <item> \ chunking=0;
3498 <item> \ active_chunk="";
3503 It has been observed that not every line of output while a <TeXmacs>
3504 chunk is active is a chunk line. This may no longer be true, but we set a
3505 variable <verbatim|texmacs_chunk> if the current line is a chunk line.
3507 Initially we set this to zero...
3509 <\nf-chunk|recognize-chunk>
3510 <item>texmacs_chunk=0;
3513 ...and then we look to see if the current line is a chunk line.
3515 <TeXmacs> lines look like this: <verbatim| \ 3 \| main() {> so we detect
3516 the lines by leading white space, digits, more white space and a vertical
3517 bar followed by at least one space.
3519 If we find such a line, we remove this line-header and set
3520 <verbatim|texmacs_chunk=1> as well as <verbatim|chunking=1>
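Stripping the line-header can be sketched in isolation (the pattern is the one just described):

```shell
printf '%s\n' '  3 | main() {' |
awk '{ gsub(/^ *[1-9][0-9]* *[|] /, ""); print }'
```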
3522 <\nf-chunk|recognize-chunk>
3523 <item>/^ *[1-9][0-9]* *\\\| / {
3525 <item> \ if (texmacs_chunking) {
3527 <item> \ \ \ chunking=1;
3529 <item> \ \ \ texmacs_chunk=1;
3531 <item> \ \ \ gsub("^ *[1-9][0-9]* *\\\\\| ", "")
3538 When <TeXmacs> chunking, lines that commence with <verbatim|\\/> or
3539 <verbatim|__> are not chunk content but visual framing, and are skipped.
3541 <\nf-chunk|recognize-chunk>
3542 <item>/^ *\\.\\/\\\\/ && texmacs_chunking {
3548 <item>/^ *__*$/ && texmacs_chunking {
3555 Any other line when <TeXmacs> chunking is considered to be a line-wrapped
3558 <\nf-chunk|recognize-chunk>
3559 <item>texmacs_chunking {
3561 <item> \ if (! texmacs_chunk) {
3563 <item> \ \ \ # must be a texmacs continued line
3565 <item> \ \ \ chunking=1;
3567 <item> \ \ \ texmacs_chunk=1;
3574 This final chunklet seems bogus and probably stops <LyX> working.
3576 <\nf-chunk|recognize-chunk>
3577 <item>! texmacs_chunk {
3579 <item># \ texmacs_chunking=0;
3581 <item> \ chunking=0;
3588 We recognize notangle style chunks too:
3590 <\nf-chunk|recognize-chunk>
3591 <item>/^[\<less\>]\<less\>.*[\<gtr\>]\<gtr\>=/ {
3593 <item> \ if (match($0, "^[\<less\>]\<less\>(.*)[\<gtr\>]\<gtr\>= *$",
3596 <item> \ \ \ chunking=1;
3598 <item> \ \ \ notangle_mode=1;
3600 <item> \ \ \ new_chunk(line[1]);
3611 Likewise, we need to recognize when a chunk ends.
3613 <subsection|lstlistings>
3615 The <verbatim|e> in <verbatim|[e]nd{lislisting}> is surrounded by square
3616 brackets so that when this document is processed, this chunk doesn't
3617 terminate early when the lstlistings package recognizes its own
3618 end-string!<\footnote>
3619 This doesn't make sense as the regex is anchored with ^, which this line
3620 does not begin with!
3623 <\nf-chunk|recognize-chunk>
3624 <item>/^\\\\[e]nd{lstlisting}\|^\\\\[e]nd{Chunk}/ {
3626 <item> \ chunking=0;
3628 <item> \ active_chunk="";
3637 <\nf-chunk|recognize-chunk>
3640 <item> \ chunking=0;
3642 <item> \ active_chunk="";
3647 All other recognizers are only of effect if we are chunking; there's no
3648 point in looking at lines if they aren't part of a chunk, so we just ignore
3649 them as efficiently as we can.
3651 <\nf-chunk|recognize-chunk>
3652 <item>! chunking { next; }
3655 <section|Chunk contents>
3657 Chunk contents are any lines read while <verbatim|chunking> is true. Some
3658 chunk contents are special in that they refer to other chunks, and will be
3659 replaced by the contents of these chunks when the file is generated.
3661 <label|sub:ORS-chunk-text>We add the output record separator <verbatim|ORS>
3662 to the line now, because we will set <verbatim|ORS> to the empty string
3663 when we generate the output<\footnote>
3664 So that we can print partial lines using <verbatim|print> instead of
3665 <verbatim|printf>. <todo|This doesn't make sense>
3668 <\nf-chunk|recognize-chunk>
3669 <item>length(active_chunk) {
3671 <item> \ <nf-ref|process-chunk-tabs|>
3673 <item> \ <nf-ref|process-chunk|>
3678 If a chunk just consisted of plain text, we could handle the chunk like
3681 <\nf-chunk|process-chunk-simple>
3682 <item>chunk_line(active_chunk, $0 ORS);
3685 but in fact a chunk can include references to other chunks. Chunk includes
3686 are traditionally written as <verbatim|\<less\>\<less\>chunk-name\<gtr\>\<gtr\>>
3687 but we support other variations, some of which are more suitable for
3688 particular editing systems.
3690 However, we also process tabs at this point. A tab in the input can be
3691 replaced by a number of spaces defined by the <verbatim|tabs> variable,
3692 set by the <verbatim|-T> option. Of course this is poor tab behaviour; we
3693 should probably have the option to use proper counted tab-stops and process this
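A minimal sketch of the -T expansion, assuming <verbatim|tabs> has been set to four spaces:

```shell
printf '\tindented\n' |
awk -v tabs='    ' '{ gsub(/\t/, tabs); print }'
```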
3696 <\nf-chunk|process-chunk-tabs>
3697 <item>if (length(tabs)) {
3699 <item> \ gsub("\\t", tabs);
3704 <subsection|lstlistings><label|sub:lst-listings-includes>
3706 If <verbatim|\\lstset{escapeinside={=\<less\>}{\<gtr\>}}> is set, then we
3707 can use <verbatim|<nf-ref|chunk-name|>> in listings. The sequence
3708 <verbatim|=\<less\>> was chosen because:
3711 <item>it is a better mnemonic than <verbatim|\<less\>\<less\>chunk-name\<gtr\>\<gtr\>>
3712 in that the <verbatim|=> sign signifies equivalence or substitutability.
3714 <item>and because <verbatim|=\<less\>> is not valid in C or any language
3717 <item>and also because lstlistings doesn't like <verbatim|\<gtr\>\<gtr\>>
3718 as an end delimiter for the <em|texcl> escape, so we must make do with a
3719 single <verbatim|\<gtr\>> which is better complemented by
3720 <verbatim|=\<less\>> than by <verbatim|\<less\>\<less\>>.
3723 Unfortunately the <verbatim|=\<less\>...\<gtr\>> that we use re-enters a
3724 <LaTeX> parsing mode in which some characters are special, e.g. <verbatim|#
3725 \\> and so these cause trouble if used in arguments to
3726 <verbatim|\\chunkref>. At some point I must fix the <LaTeX> command
3727 <verbatim|\\chunkref> so that it can accept these literally, but until
3728 then, when writing chunkref arguments that need these characters, I must
3729 use the forms <verbatim|\\textbackslash{}> and <verbatim|\\#>; so I also
3730 define a hacky chunk <verbatim|delatex> to be used further on whose purpose
3731 it is to remove these from any arguments parsed by fangle.
3736 <item>gsub("\\\\\\\\#", "#", ${text});
3738 <item>gsub("\\\\\\\\textbackslash{}", "\\\\", ${text});
3740 <item>gsub("\\\\\\\\\\\\^", "^", ${text});
3741 </nf-chunk||<tuple|text>>
3743 As each chunk line may contain more than one chunk include, we will split
3744 out chunk includes in an iterative fashion<\footnote>
3745 Contrary to our use of split when substituting parameters in chapter
3746 <reference|Here-we-split>
3749 First, as long as the chunk contains a <verbatim|\\chunkref> command we
3750 take as much as we can up to the first <verbatim|\\chunkref> command.
3752 <TeXmacs> text output uses <math|\<langle\>>...<math|\<rangle\>> which
3753 comes out as unicode sequences <verbatim|0xC2> <verbatim|0xAB> ...
3754 <verbatim|0xC2> <verbatim|0xBB>. Modern awk will interpret
3755 <verbatim|[^\\xC2\\xBB]> as a single unicode character if <verbatim|LANG>
3756 is set correctly to the sub-type <verbatim|UTF-8>, e.g.
3757 <verbatim|LANG=en_GB.UTF-8>, otherwise <verbatim|[^\\xC2\\xBB]> will be
3758 treated as a two character negated match <emdash> but this should not
3759 interfere with the function.
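The iterative split can be sketched with the notangle-style pattern alone (fangle's real loop also matches the =&lt; and «…» forms):

```shell
printf '%s\n' 'before <<helper>> after' |
awk '{
  while (match($0, /<<[a-zA-Z_][-a-zA-Z0-9_]*>>/)) {
    print "literal: [" substr($0, 1, RSTART - 1) "]";
    print "include: [" substr($0, RSTART + 2, RLENGTH - 4) "]";
    $0 = substr($0, RSTART + RLENGTH);   # continue after the include
  }
  print "literal: [" $0 "]";             # trailing literal text
}'
```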
3761 <\nf-chunk|process-chunk>
3766 <item>while(match(chunk,"(\\xC2\\xAB)([^\\xC2\\xBB]*)
3767 [^\\xC2\\xBB]*\\xC2\\xBB", line) \|\|
3769 <item> \ \ \ \ \ match(chunk,\
3771 <item> \ \ \ \ \ \ \ \ \ \ \ "([=]\<less\>\\\\\\\\chunkref{([^}\<gtr\>]*)}(\\\\(.*\\\\)\|)\<gtr\>\|\<less\>\<less\>([a-zA-Z_][-a-zA-Z0-9_]*)\<gtr\>\<gtr\>)",\
3773 <item> \ \ \ \ \ \ \ \ \ \ \ line)\\
3777 <item> \ chunklet = substr(chunk, 1, RSTART - 1);
3780 We keep track of the indent count, by counting the number of literal
3781 characters found. We can then preserve this indent on each output line when
3782 multi-line chunks are expanded.
3784 We then process this first part literal text, and set the chunk which is
3785 still to be processed to be the text after the <verbatim|\\chunkref>
3786 command, which we will process next as we continue around the loop.
3788 <\nf-chunk|process-chunk>
3789 <item> \ indent += length(chunklet);
3791 <item> \ chunk_line(active_chunk, chunklet);
3793 <item> \ chunk = substr(chunk, RSTART + RLENGTH);
3796 We then consider the type of chunk command we have found, whether it is
3797 the fangle style command beginning with <verbatim|=\<less\>> or the older
3798 notangle style beginning with <verbatim|\<less\>\<less\>>.
3800 Fangle chunks may have parameters contained within square brackets. These
3801 will be matched in <verbatim|line[3]> and are considered at this stage of
3802 processing to be part of the name of the chunk to be included.
3804 <\nf-chunk|process-chunk>
3805 <item> \ if (substr(line[1], 1, 1) == "=") {
3807 <item> \ \ \ # chunk name up to }
3809 <item> \ \ \ \ \ \ \ <nf-ref|delatex|<tuple|line[3]>>
3811 <item> \ \ \ chunk_include(active_chunk, line[2] line[3], indent);
3813 <item> \ } else if (substr(line[1], 1, 1) == "\<less\>") {
3815 <item> \ \ \ chunk_include(active_chunk, line[4], indent);
3817 <item> \ } else if (line[1] == "\\xC2\\xAB") {
3819 <item> \ \ \ chunk_include(active_chunk, line[2], indent);
3823 <item> \ \ \ error("Unknown chunk fragment: " line[1]);
3830 The loop will continue until there are no more chunkref statements in the
3831 text, at which point we process the final part of the chunk.
3833 <\nf-chunk|process-chunk>
3836 <item>chunk_line(active_chunk, chunk);
3839 <label|lone-newline>We add the newline character as a chunklet on its own,
3840 to make it easier to detect new lines and thus manage indentation when
3841 processing the output.
3843 <\nf-chunk|process-chunk>
3844 <item>chunk_line(active_chunk, "\\n");
3849 We will also permit a chunk-part number to follow in square brackets, so
3850 that <verbatim|<nf-ref|chunk-name[1]|>> will refer to the first part only.
3851 This can make it easy to include a C function prototype in a header file,
3852 if the first part of the chunk is just the function prototype without the
3853 trailing semi-colon. The header file would include the prototype with the
3854 trailing semi-colon, like this:
3856 <verbatim|<nf-ref|chunk-name[1]|>>
3858 This is handled in section <reference|sub:Chunk-parts>
3860 We should perhaps introduce a notion of language specific chunk options; so
3861 that perhaps we could specify:
3863 <verbatim|=\<less\>\\chunkref{chunk-name[function-declaration]}>
3865 which applies a transform <verbatim|function-declaration> to the chunk ---
3866 which in this case would extract a function prototype from a function.
3869 <chapter|Processing Options>
3871 At the start, we set the default options.
3873 <\nf-chunk|default-options>
3878 <item>notangle_mode=0;
3885 Then we use getopt in the standard way, and null out ARGV afterwards in the
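The reason for nulling ARGV can be seen in a standalone sketch: blanked entries are not treated as input filenames by awk later on:

```shell
# Sketch of option scanning: consume "-d" and blank its ARGV slot.
awk 'BEGIN {
  for (i = 1; i < ARGC; i++)
    if (ARGV[i] == "-d") { debug = 1; ARGV[i] = "" }
  print "debug=" debug + 0;
}' -d
```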
3888 <\nf-chunk|read-options>
3889 <item>Optind = 1 \ \ \ # skip ARGV[0]
3891 <item>while(getopt(ARGC, ARGV, "R:LdT:hr")!=-1) {
3893 <item> \ <nf-ref|handle-options|>
3897 <item>for (i=1; i\<less\>Optind; i++) { ARGV[i]=""; }
3900 This is how we handle our options:
3902 <\nf-chunk|handle-options>
3903 <item>if (Optopt == "R") root = Optarg;
3905 <item>else if (Optopt == "r") root="";
3907 <item>else if (Optopt == "L") linenos = 1;
3909 <item>else if (Optopt == "d") debug = 1;
3911 <item>else if (Optopt == "T") tabs = indent_string(Optarg+0);
3913 <item>else if (Optopt == "h") help();
3915 <item>else if (Optopt == "?") help();
3918 We do all of this at the beginning of the program
3923 <item> \ <nf-ref|constants|>
3925 <item> \ <nf-ref|mode-definitions|>
3927 <item> \ <nf-ref|default-options|>
3931 <item> \ <nf-ref|read-options|>
3936 And have a simple help function
3939 <item>function help() {
3941 <item> \ print "Usage:"
3943 <item> \ print " \ fangle [-L] -R\<less\>rootname\<gtr\> [source.tex
3946 <item> \ print " \ fangle -r [source.tex ...]"
3948 <item> \ print " \ If the filename, source.tex is not specified then
3953 <item> \ print "-L causes the C statement: #line \<less\>lineno\<gtr\>
3954 \\"filename\\" to be issued"
3956 <item> \ print "-R causes the named root to be written to stdout"
3958 <item> \ print "-r lists all roots in the file (even those used
3966 <chapter|Generating the Output>
3968 We generate output by calling output_chunk, or listing the chunk names.
3970 <\nf-chunk|generate-output>
3971 <item>if (length(root)) output_chunk(root);
3973 <item>else output_chunk_names();
3976 We also have some other output debugging:
3978 <\nf-chunk|debug-output>
3981 <item> \ print "------ chunk names "
3983 <item> \ output_chunk_names();
3985 <item> \ print "====== chunks"
3987 <item> \ output_chunks();
3989 <item> \ print "++++++ debug"
3991 <item> \ for (a in chunks) {
3993 <item> \ \ \ print a "=" chunks[a];
4000 We do both of these at the end. We also set <verbatim|ORS=""> because each
4001 chunklet is not necessarily a complete line, and we already added
4002 <verbatim|ORS> to each input line in section
4003 <reference|sub:ORS-chunk-text>.
4008 <item> \ <nf-ref|debug-output|>
4012 <item> \ <nf-ref|generate-output|>
4017 We write chunk names like this. If we seem to be running in notangle
4018 compatibility mode, then we enclose the name like this
4019 <verbatim|\<less\>\<less\>name\<gtr\>\<gtr\>> the same way notangle does:
4021 <\nf-chunk|output_chunk_names()>
4022 <item>function output_chunk_names( \ \ c, prefix, suffix)\
4026 <item> \ if (notangle_mode) {
4028 <item> \ \ \ prefix="\<less\>\<less\>";
4030 <item> \ \ \ suffix="\<gtr\>\<gtr\>";
4034 <item> \ for (c in chunk_names) {
4036 <item> \ \ \ print prefix c suffix "\\n";
4043 This function would write out all chunks:
4045 <\nf-chunk|output_chunks()>
4046 <item>function output_chunks( \ a)\
4050 <item> \ for (a in chunk_names) {
4052 <item> \ \ \ output_chunk(a);
4060 <item>function output_chunk(chunk) {
4062 <item> \ newline = 1;
4064 <item> \ lineno_needed = linenos;
4068 <item> \ write_chunk(chunk);
4075 <section|Assembling the Chunks>
4077 <verbatim|chunk_path> holds a string consisting of the names of all the
4078 chunks that resulted in this chunk being output. It should probably also
4079 contain the source line numbers at which each inclusion occurred.
4081 We first initialize the mode tracker for this chunk.
4083 <\nf-chunk|write_chunk()>
4084 <item>function write_chunk(chunk_name) {
4086 <item> \ <nf-ref|awk-delete-array|<tuple|context>>
4088 <item> \ return write_chunk_r(chunk_name, context);
4094 <item>function write_chunk_r(chunk_name, context, indent, tail,
4096 <item> \ # optional vars
4098 <item> \ <with|font-shape|italic|chunk_path>, chunk_args,\
4100 <item> \ # local vars
4102 <item> \ context_origin,
4104 <item> \ chunk_params, part, max_part, part_line, frag, max_frag, text,\
4106 <item> \ chunklet, only_part, call_chunk_args, new_context)
4110 <item> \ if (debug) debug_log("write_chunk_r(" chunk_name ")");
4113 <subsection|Chunk Parts><label|sub:Chunk-parts>
4115 As mentioned in section <reference|sub:lst-listings-includes>, a chunk name
4116 may contain a part specifier in square brackets, limiting the parts that
4119 <\nf-chunk|write_chunk()>
4120 <item> \ if (match(chunk_name, "^(.*)\\\\[([0-9]*)\\\\]$",
4121 chunk_name_parts)) {
4123 <item> \ \ \ chunk_name = chunk_name_parts[1];
4125 <item> \ \ \ only_part = chunk_name_parts[2];
4130 We then create a mode tracker
4132 <\nf-chunk|write_chunk()>
4133 <item> \ context_origin = context[""];
4135 <item> \ new_context = push_mode_tracker(context, chunks[chunk_name,
4139 We extract into <verbatim|chunk_params> the names of the parameters that
4140 this chunk accepts, whose values were (optionally) passed in
4141 <verbatim|chunk_args>.
4143 <\nf-chunk|write_chunk()>
4144 <item> \ split(chunks[chunk_name, "params"], chunk_params, " *; *");
4147 To assemble a chunk, we write out each part.
4149 <\nf-chunk|write_chunk()>
4150 <item> \ if (! (chunk_name in chunk_names)) {
4152 <item> \ \ \ error(sprintf(_"The root module
4153 \<less\>\<less\>%s\<gtr\>\<gtr\> was not defined.\\nUsed by: %s",\\
4155 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ chunk_name, chunk_path));
4161 <item> \ max_part = chunks[chunk_name, "part"];
4163 <item> \ for(part = 1; part \<less\>= max_part; part++) {
4165 <item> \ \ \ if (! only_part \|\| part == only_part) {
4167 <item> \ \ \ \ \ <nf-ref|write-part|>
4173 <item> \ if (! pop_mode_tracker(context, context_origin)) {
4175 <item> \ \ \ dump_mode_tracker(context);
4177 <item> \ \ \ error(sprintf(_"Module %s did not close context
4178 properly.\\nUsed by: %s\\n", chunk_name, chunk_path));
4185 A part can either be a chunklet of lines, or an include of another chunk.
4187 Chunks may also have parameters, specified in LaTeX style with braces after
4188 the chunk name --- looking like this in the document: chunkname{param1,
4189 param2}. Arguments are passed in square brackets:
4190 <verbatim|\\chunkref{chunkname}[arg1, arg2]>.
4192 Before we process each part, we check that the source position hasn't
4193 changed unexpectedly, so that we can know if we need to output a new
4194 file-line directive.
4196 <\nf-chunk|write-part>
4197 <item><nf-ref|check-source-jump|>
4201 <item>chunklet = chunks[chunk_name, "part", part];
4203 <item>if (chunks[chunk_name, "part", part, "type"] == part_type_chunk) {
4205 <item> \ <nf-ref|write-included-chunk|>
4207 <item>} else if (chunklet SUBSEP "line" in chunks) {
4209 <item> \ <nf-ref|write-chunklets|>
4213 <item> \ # empty last chunklet
4218 To write an included chunk, we must detect any optional chunk arguments
4219 in parentheses. Then we recurse, calling <verbatim|write_chunk_r()>.
4221 <\nf-chunk|write-included-chunk>
4222 <item>if (match(chunklet, "^([^\\\\[\\\\(]*)\\\\((.*)\\\\)$",
4225 <item> \ chunklet = chunklet_parts[1];
4229 <item>gsub(sprintf("%c",11), "", chunklet);
4231 <item>gsub(sprintf("%c",11), "", chunklet_parts[2]);
4233 <item> \ parse_chunk_args("c-like", chunklet_parts[2], call_chunk_args,
4236 <item> \ for (c in call_chunk_args) {
4238 <item> \ \ \ call_chunk_args[c] = expand_chunk_args(call_chunk_args[c],
4239 chunk_params, chunk_args);
4245 <item> \ split("", call_chunk_args);
4251 <item>write_chunk_r(chunklet, context,
4253 <item> \ \ \ \ \ \ \ \ \ \ \ chunks[chunk_name, "part", part, "indent"]
4256 <item> \ \ \ \ \ \ \ \ \ \ \ chunks[chunk_name, "part", part, "tail"],
4258 <item> \ \ \ \ \ \ \ \ \ \ \ chunk_path "\\n \ \ \ \ \ \ \ \ "
4261 <item> \ \ \ \ \ \ \ \ \ \ \ call_chunk_args);
4264 Before we output a chunklet of lines, we first emit the file and line
4265 number if we have one, and if it is safe to do so.
4267 Chunklets are generally broken up by includes, so the start of a chunklet
4268 is a good place to do this. Then we output each line of the chunklet.
4270 When it is not safe, such as in the middle of a multi-line macro
4271 definition, <verbatim|lineno_suppressed> is set to true, and in such a case
4272 we note that we want to emit the line statement when it is next safe.
4274 <\nf-chunk|write-chunklets>
4275 <item>max_frag = chunks[chunklet, "line"];
4277 <item>for(frag = 1; frag \<less\>= max_frag; frag++) {
4279 <item> \ <nf-ref|write-file-line|>
4282 We then extract the chunklet text and expand any arguments.
4284 <\nf-chunk|write-chunklets>
4287 <item> \ text = chunks[chunklet, frag];
4291 <item> \ /* check params */
4293 <item> \ text = expand_chunk_args(text, chunk_params, chunk_args);
4296 If the text is a single newline (which we keep separate; see
4297 <reference|lone-newline>) then we increment the line number. In the case
4298 where this is the last line of a chunk and it is not a top-level chunk we
4299 replace the newline with an empty string --- because the chunk that
4300 included this one will already have a newline at the end of the line
4301 containing the include.
4303 We also note by <verbatim|newline = 1> that we have started a new line, so
4304 that indentation can be managed with the following piece of text.
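The newline/indent interplay can be sketched with made-up fragments (lone newlines stored as their own chunklets, per the note above):

```shell
awk 'BEGIN {
  ORS = ""; indent = "    ";
  # hypothetical chunklet stream: text, newline, text, newline
  frags[1] = "int x;"; frags[2] = "\n";
  frags[3] = "int y;"; frags[4] = "\n";
  newline = 1;
  for (i = 1; i <= 4; i++) {
    text = frags[i];
    if (text == "\n") newline = 1;
    else { if (newline) text = indent text; newline = 0 }
    print text;   # indent is only prefixed at the start of a line
  }
}'
```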
4306 <\nf-chunk|write-chunklets>
4309 <item> if (text == "\\n") {
4311 <item> \ \ \ lineno++;
4313 <item> \ \ \ if (part == max_part && frag == max_frag &&
4314 length(chunk_path)) {
4316 <item> \ \ \ \ \ text = "";
4318 <item> \ \ \ \ \ break;
4320 <item> \ \ \ } else {
4322 <item> \ \ \ \ \ newline = 1;
4327 If this text does not represent a newline, but we see that we are the first
4328 piece of text on a newline, then we prefix our text with the current
4332 <verbatim|newline> is a global output-state variable, but the
4333 <verbatim|indent> is not.
4336 <\nf-chunk|write-chunklets>
4337 <item> \ } else if (length(text) \|\| length(tail)) {
4339 <item> \ \ \ if (newline) text = indent text;
4341 <item> \ \ \ newline = 0;
4348 Tail will soon no longer be relevant once mode-detection is in place.
4350 <\nf-chunk|write-chunklets>
4351 <item> \ text = text tail;
4353 <item> \ mode_tracker(context, text);
4355 <item> \ print untab(transform_escape(context, text, new_context));
4358 If a line ends in a backslash --- suggesting continuation --- then we
4359 suppress outputting file-line as it would probably break the continued
4362 <\nf-chunk|write-chunklets>
4363 <item> \ if (linenos) {
4365 <item> \ \ \ lineno_suppressed = substr(lastline, length(lastline)) ==
4373 Of course there is no point in actually outputting the source filename
4374 and line number (file-line) if they don't say anything new! We only need
4375 to emit them if they aren't what is expected, or if we were not able to
4376 emit one when they had changed.
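The comparison driving the directive can be sketched as follows (the names mirror the chunks above; the values here are made up):

```shell
awk 'BEGIN {
  lineno = 10;  filename = "doc.tm";    # where the output has got to
  a_lineno = 42; a_filename = "doc.tm"; # where the source actually is
  if (a_filename != filename || a_lineno != lineno)
    print "#line " a_lineno " \"" a_filename "\"";
}'
```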
4378 <\nf-chunk|write-file-line>
4379 <item>if (newline && lineno_needed && ! lineno_suppressed) {
4381 <item> \ filename = a_filename;
4383 <item> \ lineno = a_lineno;
4385 <item> \ print "#line " lineno " \\"" filename "\\"\\n"
4387 <item> \ lineno_needed = 0;
4392 We check if a new file-line is needed by checking if the source line
4393 matches what we (or a compiler) would expect.
4395 <\nf-chunk|check-source-jump>
4396 <item>if (linenos && (chunk_name SUBSEP "part" SUBSEP part SUBSEP
4397 "FILENAME" in chunks)) {
4399 <item> \ a_filename = chunks[chunk_name, "part", part, "FILENAME"];
4401 <item> \ a_lineno = chunks[chunk_name, "part", part, "LINENO"];
4403 <item> \ if (a_filename != filename \|\| a_lineno != lineno) {
4405 <item> \ \ \ lineno_needed++;
4412 <chapter|Storing Chunks>
4414 Awk has pretty limited data structures, so we will use two main hashes.
4415 Uninterrupted sequences of a chunk will be stored in chunklets and the
4416 chunklets used in a chunk will be stored in <verbatim|chunks>.
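Awk fakes multi-dimensional arrays by joining the indices with SUBSEP, which is how chunk parts and chunklets are keyed; a small sketch of the idea:

```shell
awk 'BEGIN {
  chunks["main", "part"] = 2;        # number of parts in chunk "main"
  chunks["main", "part", 1] = "body";
  # membership tests must spell out SUBSEP explicitly
  if (("main" SUBSEP "part") in chunks)
    print "main has " chunks["main", "part"] " parts";
}'
```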
4418 <\nf-chunk|constants>
4419 <item>part_type_chunk=1;
4424 The params mentioned are not chunk parameters for parameterized chunks, as
4425 mentioned in <reference|Chunk Arguments>, but the lstlistings style
4426 parameters used in the <verbatim|\\Chunk> command<\footnote>
4427 The <verbatim|params> parameter is used to hold the parameters for
4428 parameterized chunks
4431 <\nf-chunk|chunk-storage-functions>
4432 <item>function new_chunk(chunk_name, opts, args,
4434 <item> \ # local vars
4436 <item> \ p, append )
4440 <item> \ # HACK WHILE WE CHANGE TO ( ) for PARAM CHUNKS
4442 <item> \ gsub("\\\\(\\\\)$", "", chunk_name);
4444 <item> \ if (! (chunk_name in chunk_names)) {
4446 <item> \ \ \ if (debug) print "New chunk " chunk_name;
4448 <item> \ \ \ chunk_names[chunk_name];
4450 <item> \ \ \ for (p in opts) {
4452 <item> \ \ \ \ \ chunks[chunk_name, p] = opts[p];
4454 <item> \ \ \ \ \ if (debug) print "chunks[" chunk_name "," p "] = "
4459 <item> \ \ \ for (p in args) {
4461 <item> \ \ \ \ \ chunks[chunk_name, "params", p] = args[p];
4465 <item> \ \ \ if ("append" in opts) {
4467 <item> \ \ \ \ \ append=opts["append"];
4469 <item> \ \ \ \ \ if (! (append in chunk_names)) {
4471 <item> \ \ \ \ \ \ \ warning("Chunk " chunk_name " is appended to chunk "
4472 append " which is not defined yet");
4474 <item> \ \ \ \ \ \ \ new_chunk(append);
4478 <item> \ \ \ \ \ chunk_include(append, chunk_name);
4480 <item> \ \ \ \ \ chunk_line(append, ORS);
4486 <item> \ active_chunk = chunk_name;
4488 <item> \ prime_chunk(chunk_name);
4493 <\nf-chunk|chunk-storage-functions>
4496 <item>function prime_chunk(chunk_name)
4500 <item> \ chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = \\
4502 <item> \ \ \ \ \ \ \ \ chunk_name SUBSEP "chunklet" SUBSEP ""
4503 ++chunks[chunk_name, "chunklet"];
4505 <item> \ chunks[chunk_name, "part", chunks[chunk_name, "part"],
4506 "FILENAME"] = FILENAME;
4508 <item> \ chunks[chunk_name, "part", chunks[chunk_name, "part"], "LINENO"]
4515 <item>function chunk_line(chunk_name, line){
4517 <item> \ chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"],
4519 <item> \ \ \ \ \ \ \ \ ++chunks[chunk_name, "chunklet",
4520 chunks[chunk_name, "chunklet"], "line"] \ ] = line;
4527 Chunk include represents a <em|chunkref> statement, and stores the
4528 requirement to include another chunk. The parameter indent represents the
4529 quantity of literal text characters that preceded this <em|chunkref>
4530 statement and therefore by how much additional lines of the included chunk
4533 <\nf-chunk|chunk-storage-functions>
4534 <item>function chunk_include(chunk_name, chunk_ref, indent, tail)
4538 <item> \ chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] =
4541 <item> \ chunks[chunk_name, "part", chunks[chunk_name, "part"], "type" ]
4544 <item> \ chunks[chunk_name, "part", chunks[chunk_name, "part"], "indent"
4545 ] = indent_string(indent);
4547 <item> \ chunks[chunk_name, "part", chunks[chunk_name, "part"], "tail" ]
4550 <item> \ prime_chunk(chunk_name);
4557 The indent is calculated by indent_string, which may in future convert some
4558 spaces into tab characters. This function works by generating a printf
4559 padded format string, like <verbatim|%22s> for an indent of 22, and then
4560 printing an empty string using that format.
4562 <\nf-chunk|chunk-storage-functions>
4563 <item>function indent_string(indent) {
4565 <item> \ return sprintf("%" indent "s", "");
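A quick check of the padding idiom, assuming any POSIX awk: an indent of 4 builds the format string `%4s`, and printing the empty string with it yields four spaces.

```shell
# indent_string pads with spaces via a computed printf format, e.g. "%4s"
awk 'function indent_string(indent) {
  return sprintf("%" indent "s", "")
}
BEGIN { printf("[%s]\n", indent_string(4)) }'
```

This prints `[    ]` (four spaces between the brackets).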
4570 <chapter|getopt><label|cha:getopt>
4572 I use Arnold Robbins's public domain getopt (1993 revision). This is probably
4573 the same one that is covered in chapter 12 of “Edition 3 of GAWK:
4574 Effective AWK Programming: A User's Guide for GNU Awk”, but as that is
4575 licensed under the GNU Free Documentation License, Version 1.3, which
4576 conflicts with the GPL3, I can't use it from there (or its accompanying
4577 explanations), so I do my best to explain how it works here.
4579 The getopt.awk header is:
4581 <\nf-chunk|getopt.awk-header>
4582 <item># getopt.awk --- do C library getopt(3) function in awk
4586 <item># Arnold Robbins, arnold@skeeve.com, Public Domain
4590 <item># Initial version: March, 1991
4592 <item># Revised: May, 1993
4597 The provided explanation is:
4599 <\nf-chunk|getopt.awk-notes>
4600 <item># External variables:
4602 <item># \ \ \ Optind -- index in ARGV of first nonoption argument
4604 <item># \ \ \ Optarg -- string value of argument to current option
4606 <item># \ \ \ Opterr -- if nonzero, print our own diagnostic
4608 <item># \ \ \ Optopt -- current option letter
4614 <item># \ \ \ -1 \ \ \ \ at end of options
4616 <item># \ \ \ ? \ \ \ \ \ for unrecognized option
4618 <item># \ \ \ \<less\>c\<gtr\> \ \ \ a character representing the current
4623 <item># Private Data:
4625 <item># \ \ \ _opti \ -- index in multi-flag option, e.g., -abc
4630 The function follows. The final two parameters, <verbatim|thisopt> and
4631 <verbatim|i>, are local variables and not parameters --- as indicated by the
4632 multiple spaces preceding them. Awk doesn't care; the multiple spaces are a
4633 convention to help us humans.
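The convention works because awk gives a function exactly as many arguments as the caller passes; the surplus "parameters" start out empty and are private to the call. A self-contained demonstration:

```shell
# Extra "parameters" act as locals: tmp is fresh per call and
# invisible at the global scope after the function returns.
awk 'function add(a, b,    tmp) { tmp = a + b; return tmp }
BEGIN {
  print add(2, 3)
  print (tmp == "" ? "tmp-is-empty" : tmp)
}'
```

This prints `5` and then `tmp-is-empty`, confirming that `tmp` never leaked out.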
4635 <\nf-chunk|getopt.awk-getopt()>
4636 <item>function getopt(argc, argv, options, \ \ \ thisopt, i)
4640 <item> \ \ \ if (length(options) == 0) \ \ \ # no options given
4642 <item> \ \ \ \ \ \ \ return -1
4644 <item> \ \ \ if (argv[Optind] == "--") { \ # all done
4646 <item> \ \ \ \ \ \ \ Optind++
4648 <item> \ \ \ \ \ \ \ _opti = 0
4650 <item> \ \ \ \ \ \ \ return -1
4652 <item> \ \ \ } else if (argv[Optind] !~ /^-[^: \\t\\n\\f\\r\\v\\b]/) {
4654 <item> \ \ \ \ \ \ \ _opti = 0
4656 <item> \ \ \ \ \ \ \ return -1
4660 <item> \ \ \ if (_opti == 0)
4662 <item> \ \ \ \ \ \ \ _opti = 2
4664 <item> \ \ \ thisopt = substr(argv[Optind], _opti, 1)
4666 <item> \ \ \ Optopt = thisopt
4668 <item> \ \ \ i = index(options, thisopt)
4670 <item> \ \ \ if (i == 0) {
4672 <item> \ \ \ \ \ \ \ if (Opterr)
4674 <item> \ \ \ \ \ \ \ \ \ \ \ printf("%c -- invalid option\\n",
4676 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ thisopt)
4677 \<gtr\> "/dev/stderr"
4679 <item> \ \ \ \ \ \ \ if (_opti \<gtr\>= length(argv[Optind])) {
4681 <item> \ \ \ \ \ \ \ \ \ \ \ Optind++
4683 <item> \ \ \ \ \ \ \ \ \ \ \ _opti = 0
4685 <item> \ \ \ \ \ \ \ } else
4687 <item> \ \ \ \ \ \ \ \ \ \ \ _opti++
4689 <item> \ \ \ \ \ \ \ return "?"
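Note the idiom in the error branch above: awk treats the name `/dev/stderr` as an output file, so diagnostics can be redirected away from stdout. A self-contained sketch of the same redirection (using `%s` rather than `%c` for portability across awk implementations):

```shell
# awk opens "/dev/stderr" like any output file; 2>&1 merely makes the
# diagnostic visible on stdout for this demonstration
awk 'BEGIN { printf("%s -- invalid option\n", "x") > "/dev/stderr" }' 2>&1
```

This prints `x -- invalid option` on standard error.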
4694 At this point, the option has been found, and we need to know whether it
4695 takes an argument.
4697 <\nf-chunk|getopt.awk-getopt()>
4698 <item> \ \ \ if (substr(options, i + 1, 1) == ":") {
4700 <item> \ \ \ \ \ \ \ # get option argument
4702 <item> \ \ \ \ \ \ \ if (length(substr(argv[Optind], _opti + 1)) \<gtr\>
4705 <item> \ \ \ \ \ \ \ \ \ \ \ Optarg = substr(argv[Optind], _opti + 1)
4707 <item> \ \ \ \ \ \ \ else
4709 <item> \ \ \ \ \ \ \ \ \ \ \ Optarg = argv[++Optind]
4711 <item> \ \ \ \ \ \ \ _opti = 0
4715 <item> \ \ \ \ \ \ \ Optarg = ""
4717 <item> \ \ \ if (_opti == 0 \|\| _opti \<gtr\>= length(argv[Optind])) {
4719 <item> \ \ \ \ \ \ \ Optind++
4721 <item> \ \ \ \ \ \ \ _opti = 0
4725 <item> \ \ \ \ \ \ \ _opti++
4727 <item> \ \ \ return thisopt
4732 A test program is built in, too:
4734 <\nf-chunk|getopt.awk-begin>
4737 <item> \ \ \ Opterr = 1 \ \ \ # default is to diagnose
4739 <item> \ \ \ Optind = 1 \ \ \ # skip ARGV[0]
4741 <item> \ \ \ # test program
4743 <item> \ \ \ if (_getopt_test) {
4745 <item> \ \ \ \ \ \ \ while ((_go_c = getopt(ARGC, ARGV, "ab:cd")) != -1)
4747 <item> \ \ \ \ \ \ \ \ \ \ \ printf("c = \<less\>%c\<gtr\>, optarg =
4748 \<less\>%s\<gtr\>\\n",
4750 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ _go_c,
4753 <item> \ \ \ \ \ \ \ printf("non-option arguments:\\n")
4755 <item> \ \ \ \ \ \ \ for (; Optind \<less\> ARGC; Optind++)
4757 <item> \ \ \ \ \ \ \ \ \ \ \ printf("\\tARGV[%d] = \<less\>%s\<gtr\>\\n",
4759 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Optind,
4767 The entire getopt.awk is made out of these chunks, in order:
4769 <\nf-chunk|getopt.awk>
4770 <item><nf-ref|getopt.awk-header|>
4774 <item><nf-ref|getopt.awk-notes|>
4776 <item><nf-ref|getopt.awk-getopt()|>
4778 <item><nf-ref|getopt.awk-begin|>
4781 Although we only want the header and function:
4784 <item># try: locate getopt.awk for the full original file
4786 <item># as part of your standard awk installation
4788 <item><nf-ref|getopt.awk-header|>
4792 <item><nf-ref|getopt.awk-getopt()|>
4795 <chapter|Fangle LaTeX source code><label|latex-source>
4797 <section|fangle module>
4799 Here we define a <LyX> <verbatim|.module> file that makes it convenient to
4800 use <LyX> for writing such literate programs.
4802 This file <verbatim|./fangle.module> can be installed in your personal
4803 <verbatim|.lyx/layouts> folder. You will need to run Tools\<gtr\>Reconfigure
4804 so that <LyX> notices it. It adds a new format Chunk, which should precede
4805 every listing and contain the chunk name.
4807 <\nf-chunk|./fangle.module>
4808 <item>#\\DeclareLyXModule{Fangle Literate Listings}
4810 <item>#DescriptionBegin
4812 <item># \ Fangle literate listings allow one to write
4814 <item># \ \ literate programs after the fashion of noweb, but without
4817 <item># \ \ to use noweave to generate the documentation. Instead the
4820 <item># \ \ package is extended in conjunction with the noweb package to
4823 <item># \ \ to code formatting directly as latex.
4825 <item># \ The fangle awk script
4827 <item>#DescriptionEnd
4831 <item><nf-ref|gpl3-copyright.hashed|>
4841 <item><nf-ref|./fangle.sty|>
4847 <item><nf-ref|chunkstyle|>
4851 <item><nf-ref|chunkref|>
4852 </nf-chunk|lyx-module|>
4854 Because <LyX> modules are not yet a language supported by fangle or
4855 lstlistings, we resort to this fake awk chunk below in order to have each
4856 line of the GPL3 license commence with a #.
4858 <\nf-chunk|gpl3-copyright.hashed>
4859 <item>#<nf-ref|gpl3-copyright|>
4864 <subsection|The Chunk style>
4866 The purpose of the <name|chunk> style is to make it easier for <LyX> users
4867 to provide the name to <verbatim|lstlistings>. Normally this requires
4868 right-clicking on the listing, choosing settings, advanced, and then typing
4869 <verbatim|name=chunk-name>. This has the further disadvantage that the name
4870 (and other options) are not generally visible during document editing.
4872 The chunk style is defined as a <LaTeX> command, so that all text on the
4873 same line is passed to the <verbatim|LaTeX> command <verbatim|Chunk>. This
4874 makes it easy to parse using <verbatim|fangle>, and easy to pass these
4875 options on to the listings package. The first word in a chunk section
4876 should be the chunk name, and will have <verbatim|name=> prepended to it.
4877 Any other words are accepted arguments to <verbatim|lstset>.
4879 We set PassThru to 1 because the user is actually entering raw latex.
4881 <\nf-chunk|chunkstyle>
4884 <item> \ LatexType \ \ \ \ \ \ \ \ \ \ \ \ Command
4886 <item> \ LatexName \ \ \ \ \ \ \ \ \ \ \ \ Chunk
4888 <item> \ Margin \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ First_Dynamic
4890 <item> \ LeftMargin \ \ \ \ \ \ \ \ \ \ \ Chunk:xxx
4892 <item> \ LabelSep \ \ \ \ \ \ \ \ \ \ \ \ \ xx
4894 <item> \ LabelType \ \ \ \ \ \ \ \ \ \ \ \ Static
4896 <item> \ LabelString \ \ \ \ \ \ \ \ \ \ "Chunk:"
4898 <item> \ Align \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Left
4900 <item> \ PassThru \ \ \ \ \ \ \ \ \ \ \ \ \ 1
4905 To make the label very visible we choose a larger font coloured red.
4907 <\nf-chunk|chunkstyle>
4910 <item> \ \ \ Family \ \ \ \ \ \ \ \ \ \ \ \ \ Sans
4912 <item> \ \ \ Size \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Large
4914 <item> \ \ \ Series \ \ \ \ \ \ \ \ \ \ \ \ \ Bold
4916 <item> \ \ \ Shape \ \ \ \ \ \ \ \ \ \ \ \ \ \ Italic
4918 <item> \ \ \ Color \ \ \ \ \ \ \ \ \ \ \ \ \ \ red
4925 <subsection|The chunkref style>
4927 We also define the Chunkref style which can be used to express cross
4928 references to chunks.
4930 <\nf-chunk|chunkref>
4931 <item>InsetLayout Chunkref
4933 <item> \ LyxType \ \ \ \ \ \ \ \ \ \ \ \ \ \ charstyle
4935 <item> \ LatexType \ \ \ \ \ \ \ \ \ \ \ \ Command
4937 <item> \ LatexName \ \ \ \ \ \ \ \ \ \ \ \ chunkref
4939 <item> \ PassThru \ \ \ \ \ \ \ \ \ \ \ \ \ 1
4941 <item> \ LabelFont \ \ \ \ \ \ \ \ \ \ \ \
4943 <item> \ \ \ Shape \ \ \ \ \ \ \ \ \ \ \ \ \ \ Italic
4945 <item> \ \ \ Color \ \ \ \ \ \ \ \ \ \ \ \ \ \ red
4952 <section|Latex Macros><label|sec:Latex-Macros>
4954 We require the listings, noweb and xargs packages. As noweb defines its
4955 own <verbatim|\\code> environment, we re-define here the one that the <LyX>
4956 logical markup module expects.
4958 <\nf-chunk|./fangle.sty>
4959 <item>\\usepackage{listings}%
4961 <item>\\usepackage{noweb}%
4963 <item>\\usepackage{xargs}%
4965 <item>\\renewcommand{\\code}[1]{\\texttt{#1}}%
4968 We also define a <verbatim|CChunk> macro, for use as:
4969 <verbatim|\\begin{CChunk}> which will need renaming to
4970 <verbatim|\\begin{Chunk}> when I can do this without clashing with
4973 <\nf-chunk|./fangle.sty>
4974 <item>\\lstnewenvironment{Chunk}{\\relax}{\\relax}%
4977 We also define a suitable <verbatim|\\lstset> of parameters that suit the
4978 literate programming style after the fashion of <name|noweave>.
4980 <\nf-chunk|./fangle.sty>
4981 <item>\\lstset{numbers=left, stepnumber=5, numbersep=5pt,
4983 <item> \ \ \ \ \ \ \ breaklines=false,basicstyle=\\ttfamily,
4985 <item> \ \ \ \ \ \ \ numberstyle=\\tiny, language=C}%
4988 We also define a notangle-like mechanism for escaping to <LaTeX> from the
4989 listing, and by which we can refer to other listings. We declare the
4990 <verbatim|=\<less\>...\<gtr\>> sequence to contain <LaTeX> code, and
4991 include another like this chunk: <verbatim|<nf-ref|chunkname|>>. However,
4992 because <verbatim|=\<less\>...\<gtr\>> is already defined to contain
4993 <LaTeX> code for this document --- this is a fangle document after all ---
4994 the code fragment below effectively contains the <LaTeX> code:
4995 <verbatim|}{>. To avoid problems with document generation, I had to declare
4996 an lstlistings property, <verbatim|escapeinside={}>, for this listing only,
4997 which in <LyX> was done by right-clicking the listings inset and choosing
4998 settings-\<gtr\>advanced. Therefore <verbatim|=\<less\>> isn't interpreted
4999 literally here, in a listing where the escape sequence is already defined as
5000 shown... we need to somehow escape this representation...
5002 <\nf-chunk|./fangle.sty>
5003 <item>\\lstset{escapeinside={=\<less\>}{\<gtr\>}}%
5006 Although our macros will contain the <verbatim|@> symbol, they will be
5007 included in a <verbatim|\\makeatletter> section by <LyX>; however we keep
5008 the commented out <verbatim|\\makeatletter> as a reminder. The listings
5009 package likes to centre the titles, but noweb titles are specially
5010 formatted and must be left aligned. The simplest way to do this turned out
5011 to be by removing the definition of <verbatim|\\lst@maketitle>. This may
5012 interact badly if other listings want a regular title or caption. We
5013 remember the old maketitle in case we need it.
5015 <\nf-chunk|./fangle.sty>
5016 <item>%\\makeatletter
5018 <item>%somehow re-defining maketitle gives us a left-aligned title
5020 <item>%which is exactly what our specially formatted title needs!
5022 <item>\\global\\let\\fangle@lst@maketitle\\lst@maketitle%
5024 <item>\\global\\def\\lst@maketitle{}%
5027 <subsection|The chunk command><label|sub:The-chunk-command>
5029 Our chunk command accepts one argument, and calls <verbatim|\\lstset>.
5030 Although <verbatim|\\lstset> will note the name, this is erased when the
5031 next <verbatim|\\lstlisting> starts, so we make a note of this in
5032 <verbatim|\\lst@chunkname> and restore it in the lstlistings Init hook.
5034 <\nf-chunk|./fangle.sty>
5035 <item>\\def\\Chunk#1{%
5037 <item> \ \\lstset{title={\\fanglecaption},name=#1}%
5039 <item> \ \\global\\edef\\lst@chunkname{\\lst@intname}%
5043 <item>\\def\\lst@chunkname{\\empty}%
5046 <subsubsection|Chunk parameters>
5048 Fangle permits parameterized chunks, and requires the parameters to be
5049 specified as listings options. The fangle script uses this, and although we
5050 don't do anything with these in the <LaTeX> code right now, we need to stop
5051 the listings package complaining.
5053 <\nf-chunk|./fangle.sty>
5054 <item>\\lst@Key{params}\\relax{\\def\\fangle@chunk@params{#1}}%
5057 As it is common to define a chunk which then needs appending to another
5058 chunk, and annoying to have to declare a single line chunk to manage the
5059 include, we support an append= option.
5061 <\nf-chunk|./fangle.sty>
5062 <item>\\lst@Key{append}\\relax{\\def\\fangle@chunk@append{#1}}%
5065 <subsection|The noweb styled caption>
5067 We define a public macro <verbatim|\\fanglecaption> which can be set as a
5068 regular title. By means of <verbatim|\\protect>, it expands to
5069 <verbatim|\\fangle@caption> at the appropriate time when the caption is
5070 printed.
5072 <nf-chunk|./fangle.sty|\\def\\fanglecaption{\\protect\\fangle@caption}%||>
5075 22c <math|\<langle\>>some-chunk 19b<math|\<rangle\>><math|\<equiv\>>+
5076 \ \ <math|\<vartriangleleft\>>22b 24d<math|\<vartriangleright\>>
5080 In this example, the current chunk is 22c, and therefore the third chunk
5081 on page 22.
5083 Its name is some-chunk.
5085 The first chunk with this name (19b) occurs as the second chunk on page 19.
5088 The previous chunk (22b) with the same name is the second chunk on page 22.
5091 The next chunk (24d) is the fourth chunk on page 24.
5092 </big-figure|Noweb Heading<label|noweb heading>>
5094 The general noweb output format compactly identifies the current chunk, and
5095 references to the first chunk, and the previous and next chunks that have
5096 the same name.
5098 This means that we need to keep a counter for each chunk-name, that we use
5099 to count chunks of the same name.
5101 <subsection|The chunk counter>
5103 It would be natural to have a counter for each chunk name, but TeX would
5104 soon run out of counters<\footnote>
5105 ...soon did run out of counters and so I had to re-write the LaTeX macros
5106 to share a counter as described here.
5107 </footnote>, so we have one counter which we save at the end of a chunk and
5108 restore at the beginning of a chunk.
5110 <\nf-chunk|./fangle.sty>
5111 <item>\\newcounter{fangle@chunkcounter}%
5114 We construct the name of this variable to store the counter to be the text
5115 <verbatim|lst-chunk-> prefixed onto the chunks own name, and store it in
5116 <verbatim|\\chunkcount>.\
5118 We save the counter like this:
5120 <nf-chunk|save-counter|\\global\\expandafter\\edef\\csname
5121 \\chunkcount\\endcsname{\\arabic{fangle@chunkcounter}}%||>
5123 and restore the counter like this:
5125 <nf-chunk|restore-counter|\\setcounter{fangle@chunkcounter}{\\csname
5126 \\chunkcount\\endcsname}%||>
5128 If there does not already exist a variable whose name is stored in
5129 <verbatim|\\chunkcount>, then we know we are the first chunk with this
5130 name, and then define a counter.\
5132 Although chunks of the same name share a common counter, they must still be
5133 distinguished. We use the internal name of the listing, suffixed by the
5134 counter value. So the first chunk might be <verbatim|something-1> and the
5135 second chunk be <verbatim|something-2>, etc.
5137 We also calculate the name of the previous chunk if we can (before we
5138 increment the chunk counter). If this is the first chunk of that name, then
5139 <verbatim|\\prevchunkname> is set to <verbatim|\\relax> which the noweb
5140 package will interpret as not existing.
5142 <\nf-chunk|./fangle.sty>
5143 <item>\\def\\fangle@caption{%
5145 <item> \ \\edef\\chunkcount{lst-chunk-\\lst@intname}%
5147 <item> \ \\@ifundefined{\\chunkcount}{%
5149 <item> \ \ \ \\expandafter\\gdef\\csname \\chunkcount\\endcsname{0}%
5151 <item> \ \ \ \\setcounter{fangle@chunkcounter}{\\csname
5152 \\chunkcount\\endcsname}%
5154 <item> \ \ \ \\let\\prevchunkname\\relax%
5158 <item> \ \ \ \\setcounter{fangle@chunkcounter}{\\csname
5159 \\chunkcount\\endcsname}%
5161 <item> \ \ \ \\edef\\prevchunkname{\\lst@intname-\\arabic{fangle@chunkcounter}}%
5166 After incrementing the chunk counter, we then define the name of this
5167 chunk, as well as the name of the first chunk.
5169 <\nf-chunk|./fangle.sty>
5170 <item> \ \\addtocounter{fangle@chunkcounter}{1}%
5172 <item> \ \\global\\expandafter\\edef\\csname
5173 \\chunkcount\\endcsname{\\arabic{fangle@chunkcounter}}%
5175 <item> \ \\edef\\chunkname{\\lst@intname-\\arabic{fangle@chunkcounter}}%
5177 <item> \ \\edef\\firstchunkname{\\lst@intname-1}%
5180 We now need to calculate the name of the next chunk. We do this by
5181 temporarily skipping the counter on by one; however there may not actually
5182 be another chunk with this name! We detect this by also defining a label
5183 for each chunk based on the chunkname. If there is a next chunkname then it
5184 will define a label with that name. As labels are persistent, we can at
5185 least tell the second time <LaTeX> is run. If we don't find such a defined
5186 label then we define <verbatim|\\nextchunkname> to <verbatim|\\relax>.
5188 <\nf-chunk|./fangle.sty>
5189 <item> \ \\addtocounter{fangle@chunkcounter}{1}%
5191 <item> \ \\edef\\nextchunkname{\\lst@intname-\\arabic{fangle@chunkcounter}}%
5193 <item> \ \\@ifundefined{r@label-\\nextchunkname}{\\let\\nextchunkname\\relax}{}%
5196 The noweb package requires that we define a <verbatim|\\sublabel> for every
5197 chunk, with a unique name, which is then used to print out its navigation
5198 links.
5200 We also define a regular label for this chunk, as was mentioned above when
5201 we calculated <verbatim|\\nextchunkname>. This requires <LaTeX> to be run
5202 at least twice after new chunk sections are added --- but noweb required
5203 that too.
5205 <\nf-chunk|./fangle.sty>
5206 <item> \ \\sublabel{\\chunkname}%
5208 <item>% define this label for every chunk instance, so we
5210 <item>% can tell when we are the last chunk of this name
5212 <item> \ \\label{label-\\chunkname}%
5215 We also try and add the chunk to the list of listings, but I'm afraid we
5216 don't do very well. We want each chunk name listed once, with all of its
5217 references.
5219 <\nf-chunk|./fangle.sty>
5220 <item> \ \\addcontentsline{lol}{lstlisting}{\\lst@name~[\\protect\\subpageref{\\chunkname}]}%
5223 We then call the noweb output macros in the same way that noweave generates
5224 them, except that we don't need to call <verbatim|\\nwstartdeflinemarkup>
5225 or <verbatim|\\nwenddeflinemarkup> <emdash> and if we do, it messes up the
5226 output.
5228 <\nf-chunk|./fangle.sty>
5229 <item> \ \\nwmargintag{%
5233 <item> \ \ \ \ \ \\nwtagstyle{}%
5235 <item> \ \ \ \ \ \\subpageref{\\chunkname}%
5245 <item> \ \ \ {\\lst@name}%
5249 <item> \ \ \ \ \ \\nwtagstyle{}\\/%
5251 <item> \ \ \ \ \ \\@ifundefined{fangle@chunk@params}{}{%
5253 <item> \ \ \ \ \ \ \ (\\fangle@chunk@params)%
5257 <item> \ \ \ \ \ [\\csname \\chunkcount\\endcsname]~%
5259 <item> \ \ \ \ \ \\subpageref{\\firstchunkname}%
5263 <item> \ \ \ \\@ifundefined{fangle@chunk@append}{}{%
5265 <item> \ \ \ \\ifx{}\\fangle@chunk@append{x}\\else%
5267 <item> \ \ \ \ \ \ \ ,~add~to~\\fangle@chunk@append%
5273 <item>\\global\\def\\fangle@chunk@append{}%
5275 <item>\\lstset{append=x}%
5281 <item> \ \\ifx\\relax\\prevchunkname\\endmoddef\\else\\plusendmoddef\\fi%
5283 <item>% \ \\nwstartdeflinemarkup%
5285 <item> \ \\nwprevnextdefs{\\prevchunkname}{\\nextchunkname}%
5287 <item>% \ \\nwenddeflinemarkup%
5292 Originally this was developed as a <verbatim|listings> aspect, in the Init
5293 hook, but it was found easier to affect the title without using a hook
5294 <emdash> <verbatim|\\lst@AddToHookExe{PreSet}> is still required to set the
5295 listings name to the name passed to the <verbatim|\\Chunk> command, though.
5297 <\nf-chunk|./fangle.sty>
5298 <item>%\\lst@BeginAspect{fangle}
5300 <item>%\\lst@Key{fangle}{true}[t]{\\lstKV@SetIf{#1}{true}}
5302 <item>\\lst@AddToHookExe{PreSet}{\\global\\let\\lst@intname\\lst@chunkname}
5304 <item>\\lst@AddToHook{Init}{}%\\fangle@caption}
5306 <item>%\\lst@EndAspect
5309 <subsection|Cross references>
5311 We define the <verbatim|\\chunkref> command which makes it easy to generate
5312 visual references to different code chunks, e.g.
5314 <block|<tformat|<table|<row|<cell|Macro>|<cell|Appearance>>|<row|<cell|<verbatim|\\chunkref{preamble}>>|<cell|>>|<row|<cell|<verbatim|\\chunkref[3]{preamble}>>|<cell|>>|<row|<cell|<verbatim|\\chunkref{preamble}[arg1,
5317 Chunkref can also be used within a code chunk to include another code
5318 chunk. The third optional parameter to chunkref is a comma separated list
5319 of arguments, which will replace defined parameters in the chunkref.
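The textual effect of that parameter replacement can be pictured as plain `${name}` substitution. This is an illustration of the idea only, not fangle's actual implementation:

```shell
# Replace ${THING} and ${colour} placeholders the way a parameterized
# chunkref's arguments would (illustrative sketch, not fangle's code)
awk 'BEGIN {
  line = "I see a ${THING} of colour ${colour}"
  gsub(/\$[{]THING[}]/, "joe", line)
  gsub(/\$[{]colour[}]/, "red", line)
  print line
}'
```

This prints `I see a joe of colour red`.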
5322 Darn it, if I have: <verbatim|=\<less\>\\chunkref{new-mode-tracker}[{chunks[chunk_name,
5323 "language"]},{mode}]\<gtr\>> the inner braces (inside [ ]) cause _ to
5324 signify subscript even though we have <verbatim|lst@ReplaceIn>
5327 <\nf-chunk|./fangle.sty>
5328 <item>\\def\\chunkref@args#1,{%
5330 <item> \ \\def\\arg{#1}%
5332 <item> \ \\lst@ReplaceIn\\arg\\lst@filenamerpl%
5336 <item> \ \\@ifnextchar){\\relax}{, \\chunkref@args}%
5340 <item>\\newcommand\\chunkref[2][0]{%
5342 <item> \ \\@ifnextchar({\\chunkref@i{#1}{#2}}{\\chunkref@i{#1}{#2}()}%
5346 <item>\\def\\chunkref@i#1#2(#3){%
5348 <item> \ \\def\\zero{0}%
5350 <item> \ \\def\\chunk{#2}%
5352 <item> \ \\def\\chunkno{#1}%
5354 <item> \ \\def\\chunkargs{#3}%
5356 <item> \ \\ifx\\chunkno\\zero%
5358 <item> \ \ \ \\def\\chunkname{#2-1}%
5362 <item> \ \ \ \\def\\chunkname{#2-\\chunkno}%
5366 <item> \ \\let\\lst@arg\\chunk%
5368 <item> \ \\lst@ReplaceIn\\chunk\\lst@filenamerpl%
5370 <item> \ \\LA{%\\moddef{%
5372 <item> \ \ \ {\\chunk}%
5376 <item> \ \ \ \ \ \\nwtagstyle{}\\/%
5378 <item> \ \ \ \ \ \\ifx\\chunkno\\zero%
5380 <item> \ \ \ \ \ \\else%
5382 <item> \ \ \ \ \ [\\chunkno]%
5384 <item> \ \ \ \ \ \\fi%
5386 <item> \ \ \ \ \ \\ifx\\chunkargs\\empty%
5388 <item> \ \ \ \ \ \\else%
5390 <item> \ \ \ \ \ \ \ (\\chunkref@args #3,)%
5392 <item> \ \ \ \ \ \\fi%
5394 <item> \ \ \ \ \ ~\\subpageref{\\chunkname}%
5400 <item> \ \\RA%\\endmoddef%
5405 <subsection|The end>
5407 <\nf-chunk|./fangle.sty>
5410 <item>%\\makeatother
5413 <chapter|Extracting fangle>
5415 <section|Extracting from Lyx>
5417 To extract from <LyX>, you will need to configure <LyX> as explained in
5418 section <reference|Configuring-the-build>.
5420 <label|lyx-build-script>And this lyx-build scrap will extract fangle for
5423 <\nf-chunk|lyx-build>
5430 <item><nf-ref|lyx-build-helper|>
5432 <item>cd $PROJECT_DIR \|\| exit 1
5436 <item>/usr/local/bin/fangle -R./fangle $TEX_SRC \<gtr\> ./fangle
5438 <item>/usr/local/bin/fangle -R./fangle.module $TEX_SRC \<gtr\>
5443 <item>export FANGLE=./fangle
5445 <item>export TMP=${TMP:-/tmp}
5447 <item><nf-ref|test:*|>
5450 With a lyx-build-helper:
5452 <\nf-chunk|lyx-build-helper>
5453 <item>PROJECT_DIR="$LYX_r"
5455 <item>LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
5457 <item>TEX_DIR="$LYX_p"
5459 <item>TEX_SRC="$TEX_DIR/$LYX_i"
5461 <item>TXT_SRC="$TEX_SRC"
5464 <section|Extracting documentation>
5466 <\nf-chunk|./gen-www>
5467 <item>#python -m elyxer --css lyx.css $LYX_SRC \| \\
5469 <item># \ iconv -c -f utf-8 -t ISO-8859-1//TRANSLIT \| \\
5471 <item># \ sed 's/UTF-8"\\(.\\)\<gtr\>/ISO-8859-1"\\1\<gtr\>/' \<gtr\>
5472 www/docs/fangle.html
5476 <item>python -m elyxer --css lyx.css --iso885915 --html --destdirectory
5477 www/docs/fangle.e \\
5479 <item> \ \ \ \ \ \ fangle.lyx \<gtr\> www/docs/fangle.e/fangle.html
5483 <item>( mkdir -p www/docs/fangle && cd www/docs/fangle && \\
5485 <item> \ lyx -e latex ../../../fangle.lyx && \\
5487 <item> \ htlatex ../../../fangle.tex "xhtml,fn-in" && \\
5489 <item> \ sed -i -e 's/\<less\>!--l\\. [0-9][0-9]* *--\<gtr\>//g'
5496 <item>( mkdir -p www/docs/literate && cd www/docs/literate && \\
5498 <item> \ lyx -e latex ../../../literate.lyx && \\
5500 <item> \ htlatex ../../../literate.tex "xhtml,fn-in" && \\
5502 <item> \ sed -i -e 's/\<less\>!--l\\. [0-9][0-9]* *--\<gtr\>$//g'
5508 <section|Extracting from the command line>
5510 First you will need the tex output, then you can extract:
5512 <\nf-chunk|lyx-build-manual>
5513 <item>lyx -e latex fangle.lyx
5515 <item>fangle -R./fangle fangle.tex \<gtr\> ./fangle
5517 <item>fangle -R./fangle.module fangle.tex \<gtr\> ./fangle.module
5531 <item>export SRC="${SRC:-./fangle.tm}"
5533 <item>export FANGLE="${FANGLE:-./fangle}"
5535 <item>export TMP="${TMP:-/tmp}"
5537 <item>export TESTDIR="$TMP/$USER/fangle.tests"
5539 <item>export TXT_SRC="${TXT_SRC:-$TESTDIR/fangle.txt}"
5541 <item>export AWK="${AWK:-awk}"
5543 <item>export RUN_FANGLE="${RUN_FANGLE:-$AWK -f}"
5549 <item> \ ${AWK} -f ${FANGLE} "$@"
5555 <item>mkdir -p "$TESTDIR"
5559 <item>tm -s -c "$SRC" "$TXT_SRC" -q
5563 <item><nf-ref|test:helpers|>
5567 <item> \ <nf-ref|test:run-tests|>
5573 <item># test current fangle
5575 <item>echo Testing current fangle
5581 <item># extract new fangle
5583 <item>echo testing new fangle
5585 <item>fangle -R./fangle "$TXT_SRC" \<gtr\> "$TESTDIR/fangle"
5587 <item>export FANGLE="$TESTDIR/fangle"
5593 <item># Now check that it can extract a fangle that also passes the
5596 <item>echo testing if new fangle can generate itself
5598 <item>fangle -R./fangle "$TXT_SRC" \<gtr\> "$TESTDIR/fangle.new"
5600 <item>passtest diff -bwu "$FANGLE" "$TESTDIR/fangle.new"
5602 <item>export FANGLE="$TESTDIR/fangle.new"
5607 <\nf-chunk|test:run-tests>
5610 <item>fangle -Rpca-test.awk $TXT_SRC \| awk -f - \|\| exit 1
5612 <item><nf-ref|test:cromulence|>
5614 <item><nf-ref|test:escapes|>
5616 <item><nf-ref|test:test-chunk|<tuple|test:example-sh>>
5618 <item><nf-ref|test:test-chunk|<tuple|test:example-makefile>>
5620 <item><nf-ref|test:test-chunk|<tuple|test:q:1>>
5622 <item><nf-ref|test:test-chunk|<tuple|test:make:1>>
5624 <item><nf-ref|test:test-chunk|<tuple|test:make:2>>
5626 <item><nf-ref|test:chunk-params|>
5629 <\nf-chunk|test:helpers>
5634 <item> \ then echo "Passed $TEST"
5636 <item> \ else echo "Failed $TEST"
5638 <item> \ \ \ \ \ \ return 1
5650 <item> \ then echo "Passed $TEST"
5652 <item> \ else echo "Failed $TEST"
5654 <item> \ \ \ \ \ \ return 1
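The surviving fragment suggests a passtest-style wrapper that runs its arguments as a command and reports against <verbatim|$TEST>. A minimal reconstruction under that assumption (the real helper may differ):

```shell
#!/bin/sh
# Assumed shape of the passtest helper: run the given command,
# report pass/fail against $TEST, and propagate failure
passtest() {
  if "$@"
  then echo "Passed $TEST"
  else echo "Failed $TEST"
       return 1
  fi
}

TEST="true-test" passtest true
```

Running it prints `Passed true-test`; substituting `false` for `true` would print `Failed true-test` and return 1.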
5661 This chunk will render a named chunk and compare it to another rendered
5662 chunk.
5664 <\nf-chunk|test:test-chunk>
5665 <item><nf-ref|test:test-chunk-result|<tuple|<nf-arg|chunk>|<nf-arg|chunk>.result>>
5666 </nf-chunk|sh|<tuple|chunk>>
5668 <\nf-chunk|test:test-chunk-result>
5669 <item>TEST="<nf-arg|result>" passtest diff -u --label "EXPECTED:
5670 <nf-arg|result>" \<less\>( fangle -R<nf-arg|result> $TXT_SRC ) \\
5672 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ --label
5673 "ACTUAL: <nf-arg|chunk>" \<less\>( fangle -R<nf-arg|chunk> $TXT_SRC )
5674 </nf-chunk|sh|<tuple|chunk|result>>
5676 <chapter|Chunk Parameters>
5680 <\nf-chunk|test:lyx:chunk-params:sub>
5681 <item>I see a ${THING},
5683 <item>a ${THING} of colour ${colour},\
5685 <item>and looking closer =\<less\>\\chunkref{test:lyx:chunk-params:sub:sub}(${colour})\<gtr\>
5686 </nf-chunk||<tuple|THING|colour>>
5688 <\nf-chunk|test:lyx:chunk-params:sub:sub>
5689 <item>a funny shade of ${colour}
5690 </nf-chunk||<tuple|colour>>
5692 <\nf-chunk|test:lyx:chunk-params:text>
5693 <item>What do you see? "=\<less\>\\chunkref{test:lyx:chunk-params:sub}(joe,
5699 Should generate output:
5701 <\nf-chunk|test:lyx:chunk-params:result>
5702 <item>What do you see? "I see a joe,
5704 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ a joe of colour red,\
5706 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ and looking closer a funny shade
5712 And this chunk will perform the test:
5714 <\nf-chunk|test:chunk-params>
5715 <item><nf-ref|test:test-chunk-result|<tuple|test:lyx:chunk-params:text|test:lyx:chunk-params:result>>
5721 <\nf-chunk|test:chunk-params:sub>
5722 <item>I see a <nf-arg|THING>,
5724 <item>a <nf-arg|THING> of colour <nf-arg|colour>,\
5726 <item>and looking closer <nf-ref|test:chunk-params:sub:sub|<tuple|<nf-arg|colour>>>
5727 </nf-chunk||<tuple|THING|colour>>
5729 <\nf-chunk|test:chunk-params:sub:sub>
5730 <item>a funny shade of <nf-arg|colour>
5731 </nf-chunk||<tuple|colour>>
5733 <\nf-chunk|test:chunk-params:text>
5734 <item>What do you see? "<nf-ref|test:chunk-params:sub|<tuple|joe|red>>"
5739 Should generate output:
5741 <\nf-chunk|test:chunk-params:result>
5742 <item>What do you see? "I see a joe,
5744 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ a joe of colour red,\
5746 <item> \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ and looking closer a funny shade
5752 And this chunk will perform the test:
5754 <\nf-chunk|test:chunk-params>
5755 <item><nf-ref|test:test-chunk-result|<tuple|test:chunk-params:text|test:chunk-params:result>>
5759 <chapter|Compile-log-lyx><label|Compile-log-lyx>
5761 <\nf-chunk|Chunk:./compile-log-lyx>
5764 <item># can't use gtkdialog -i, cos it uses the "source" command which
5765 ubuntu sh doesn't have
5771 <item> \ errors="/tmp/compile.log.$$"
5773 <item># \ if grep '^[^ ]*:\\( In \\\|[0-9][0-9]*: [^ ]*:\\)' \<gtr\>
5776 <item>if grep '^[^ ]*(\\([0-9][0-9]*\\)) *: *\\(error\\\|warning\\)'
5781 <item> \ \ \ sed -i -e 's/^[^ ]*[/\\\\]\\([^/\\\\]*\\)(\\([ 0-9][
5782 0-9]*\\)) *: */\\1:\\2\|\\2\|/' $errors
5784 <item> \ \ \ COMPILE_DIALOG='
5786 <item> \<less\>vbox\<gtr\>
5788 <item> \ \<less\>text\<gtr\>
5790 <item> \ \ \ \<less\>label\<gtr\>Compiler errors:\<less\>/label\<gtr\>
5792 <item> \ \<less\>/text\<gtr\>
5794 <item> \ \<less\>tree exported_column="0"\<gtr\>
5796 <item> \ \ \ \<less\>variable\<gtr\>LINE\<less\>/variable\<gtr\>
5798 <item> \ \ \ \<less\>height\<gtr\>400\<less\>/height\<gtr\>\<less\>width\<gtr\>800\<less\>/width\<gtr\>
5800 <item> \ \ \ \<less\>label\<gtr\>File \| Line \|
5801 Message\<less\>/label\<gtr\>
5803 <item> \ \ \ \<less\>action\<gtr\>'". $SELF ; "'lyxgoto
5804 $LINE\<less\>/action\<gtr\>
5806 <item> \ \ \ \<less\>input\<gtr\>'"cat $errors"'\<less\>/input\<gtr\>
5808 <item> \ \<less\>/tree\<gtr\>
5810 <item> \ \<less\>hbox\<gtr\>
5812 <item> \ \ \<less\>button\<gtr\>\<less\>label\<gtr\>Build\<less\>/label\<gtr\>
5814 <item> \ \ \ \ \<less\>action\<gtr\>lyxclient -c "LYXCMD:build-program"
5815 &\<less\>/action\<gtr\>
5817 <item> \ \ \<less\>/button\<gtr\>
5819 <item> \ \ \<less\>button ok\<gtr\>\<less\>/button\<gtr\>
5821 <item> \ \<less\>/hbox\<gtr\>
5823 <item> \<less\>/vbox\<gtr\>
5827 <item> \ \ \ export COMPILE_DIALOG
5829 <item> \ \ \ ( gtkdialog --program=COMPILE_DIALOG ; rm $errors ) &
5833 <item> \ \ \ rm $errors
5843 <item> \ file="${LINE%:*}"
5845 <item> \ line="${LINE##*:}"
5847 <item> \ extraline=`cat $file \| head -n $line \| tac \| sed
5848 '/^\\\\\\\\begin{lstlisting}/q' \| wc -l`
5850 <item> \ extraline=`expr $extraline - 1`
5852 <item> \ lyxclient -c "LYXCMD:command-sequence server-goto-file-row $file
5853 $line ; char-forward ; repeat $extraline paragraph-down ;
5854 paragraph-up-select"
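The `extraline` computation above counts, from the error line upwards, the lines back to the start of the enclosing lstlisting: `head` keeps everything up to the error line, `tac` reverses it, and `sed '/…/q'` prints (and counts) lines until it hits the `\begin{lstlisting}` marker. The same pipeline run on a throwaway file (GNU `tac` assumed, as in the script itself):

```shell
#!/bin/sh
# Count lines from line 4 back to the nearest \begin{lstlisting} above it
tmp=$(mktemp)
printf '%s\n' 'text' '\begin{lstlisting}' 'code one' 'code two' > "$tmp"
line=4
extraline=$(head -n "$line" "$tmp" | tac | sed '/^\\begin{lstlisting}/q' | wc -l)
rm -f "$tmp"
echo $extraline    # unquoted: strips wc's leading padding
```

This prints `3` (the error line, one intervening line, and the marker itself), which is why the script then subtracts 1 to get the number of paragraph-down movements.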
5862 <item>if test -z "$COMPILE_DIALOG"
5864 <item>then main "$@"\
5874 <associate|info-flag|short>
5875 <associate|page-medium|paper>
5876 <associate|page-screen-height|982016tmpt>
5877 <associate|page-screen-margin|false>
5878 <associate|page-screen-width|1686528tmpt>
5879 <associate|page-show-hf|true>
5880 <associate|preamble|false>
5881 <associate|sfactor|5>