18 Fangle is a tool for fangled literate programming. Newfangled is defined as "New and often needlessly novel" by TheFreeDictionary.com.
19 In this case, fangled means yet another not-so-new1. but improved ^1 method for literate programming.
20 Literate Programming has a long history starting with the great Donald Knuth himself, whose literate programming tools seem to make use of as many escape sequences for semantic markup as TeX (also by Donald Knuth).
21 Norman Ramsey wrote the Noweb set of tools (notangle, noweave and noroots) and helpfully reduced the amount of magic character sequences to pretty much just <<, >> and @, and in doing so brought the wonders of literate programming within my reach.
22 While using the LyX editor for LaTeX editing I had various troubles with the noweb tools, some of which were my fault, some of which were noweb's fault and some of which were LyX's fault.
23 Noweb generally brought literate programming to the masses by removing some of the complexity of the original literate programming, but this would be of no advantage to me if the LyX / LaTeX combination brought more complications in their place.
24 Fangle was thus born (originally called Newfangle) as an awk replacement for notangle, adding some important features, like better integration with LyX and LaTeX (and later TeXmacs), multiple output format conversions, and fixing notangle bugs like indentation when using -L for line numbers.
25 Significantly, fangle is just one program which replaces various programs in Noweb. Noweave is done away with and implemented directly as LaTeX macros, and noroots is implemented as a function of the untangler fangle.
26 Fangle is written in awk for portability reasons, awk being available for most platforms. A Python version2. hasn't anyone implemented awk in Python yet? ^2 was considered for the benefit of LyX, but a Scheme version for TeXmacs will probably materialise first, as TeXmacs macro capabilities help make edit-time and format-time rendering of fangle chunks simple enough for my weak brain.
27 As an extension to many literate-programming styles, Fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Named parameters (or even local variables in the caller's scope) are anticipated, as parameterized chunks — useful though they are — are hard to comprehend in the literate document.
29 Fangle is licensed under the GPL 3 (or later).
30 This doesn't mean that sources generated by fangle must be licensed under the GPL 3.
31 This doesn't mean that you can't use or distribute fangle with sources of an incompatible license, but it means you must make the source of fangle available too.
32 As fangle is currently written in awk, an interpreted language, this should not be too hard.
34 4a <gpl3-copyright[1](), lang=text> ≡
35 ________________________________________________________________________
36 1 | # fangle - fully featured notangle replacement in awk
38 3 | # Copyright (C) 2009-2010 Sam Liddicott <sam@liddicott.com>
40 5 | # This program is free software: you can redistribute it and/or modify
41 6 | # it under the terms of the GNU General Public License as published by
42 7 | # the Free Software Foundation, either version 3 of the License, or
43 8 | # (at your option) any later version.
45 10 | # This program is distributed in the hope that it will be useful,
46 11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
47 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
48 13 | # GNU General Public License for more details.
50 15 | # You should have received a copy of the GNU General Public License
51 16 | # along with this program. If not, see <http://www.gnu.org/licenses/>.
52 |________________________________________________________________________
59 1 Introduction to Literate Programming 11
62 2.2 Extracting roots 13
63 2.3 Formatting the document 13
64 3 Using Fangle with LaTeX 15
65 4 Using Fangle with LyX 17
66 4.1 Installing the LyX module 17
67 4.2 Obtaining a decent mono font 17
71 4.3 Formatting your Lyx document 18
72 4.3.1 Customising the listing appearance 18
73 4.3.2 Global customisations 18
74 4.4 Configuring the build script 19
76 5 Using Fangle with TeXmacs 21
77 6 Fangle with Makefiles 23
78 6.1 A word about makefile formats 23
79 6.2 Extracting Sources 23
80 6.2.1 Converting from LyX to LaTeX 24
81 6.2.2 Converting from TeXmacs 24
82 6.3 Extracting Program Source 25
83 6.4 Extracting Source Files 25
84 6.5 Extracting Documentation 27
85 6.5.1 Formatting TeX 27
86 6.5.1.1 Running pdflatex 27
87 6.5.2 Formatting TeXmacs 28
88 6.5.3 Building the Documentation as a Whole 28
90 6.7 Boot-strapping the extraction 29
91 6.8 Incorporating Makefile.inc into existing projects 30
94 7 Fangle awk source code 33
96 7.2 Catching errors 34
97 8 LaTeX and lstlistings 35
98 8.1 Additional lstlistings parameters 35
99 8.2 Parsing chunk arguments 37
100 8.3 Expanding parameters in the text 38
101 9 Language Modes & Quoting 41
103 9.1.1 Modes to keep code together 41
104 9.1.2 Modes affect included chunks 41
105 9.2 Language Mode Definitions 42
108 9.2.3 Parentheses, Braces and Brackets 45
109 9.2.4 Customizing Standard Modes 45
115 9.4 A non-recursive mode tracker 48
119 9.4.3.1 One happy chunk 52
121 9.5 Escaping and Quoting 52
122 10 Recognizing Chunks 55
124 10.1.1 lstlistings 55
127 10.2.1 lstlistings 57
129 10.3 Chunk contents 58
130 10.3.1 lstlistings 58
131 11 Processing Options 58
132 12 Generating the Output 59
133 12.1 Assembling the Chunks 61
134 12.1.1 Chunk Parts 63
137 15 Fangle LaTeX source code 69
138 15.1 fangle module 71
139 15.1.1 The Chunk style 75
140 15.1.2 The chunkref style 75
142 15.2.1 The chunk command 76
143 15.2.1.1 Chunk parameters 76
144 15.2.2 The noweb styled caption 77
145 15.2.3 The chunk counter 78
146 15.2.4 Cross references 78
148 16 Extracting fangle 81
149 16.1 Extracting from Lyx 82
150 16.2 Extracting documentation 83
151 16.3 Extracting from the command line 83
154 17 Chunk Parameters 84
155 18 Compile-log-lyx 85
157 Chapter 1 Introduction to Literate Programming
158 Todo: Should really follow on from a part-0 explanation of what literate programming is.
159 Chapter 2 Running Fangle
160 Fangle is a replacement for noweb, which consists of notangle, noroots and noweave.
161 Like notangle and noroots, fangle can read multiple named files, or from stdin.
163 The -r option causes fangle to behave like noroots.
164 fangle -r filename.tex
165 will print out the fangle roots of a tex file.
166 Unlike the noroots command, the printed roots are not enclosed in angle brackets e.g. <<name>>, unless at least one of the roots is defined using the notangle notation <<name>>=.
167 Also, unlike noroots, it prints out all roots --- not just those that are not used elsewhere. I find that a root not being used doesn't make it particularly top level — and so-called top level roots could also be included in another root as well.
168 My convention is that top level roots to be extracted begin with ./ and have the form of a filename.
169 Makefile.inc, discussed in 6, can automatically extract all such sources prefixed with ./
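For example, assuming fangle.tex is the TeX export of this document (as used in the extraction examples below), the top-level roots that would be automatically extracted can be listed by filtering the -r output for the ./ prefix:
fangle -r fangle.tex | grep '^\./'
(This is only a sketch; Makefile.inc performs the equivalent filtering with sed, as shown in 6.4.)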
171 notangle's -R and -L options are supported.
172 If you are using LyX or LaTeX, the standard way to extract a file would be:
173 fangle -R./Makefile.inc fangle.tex > ./Makefile.inc
174 If you are using TeXmacs, the standard way to extract a file would similarly be:
175 fangle -R./Makefile.inc fangle.txt > ./Makefile.inc
176 TeXmacs users would obtain the text file with a verbatim export from TeXmacs which can be done on the command line with texmacs -s -c fangle.tm fangle.txt -q
177 Unlike notangle, fangle's -L option, which generates C pre-processor #line style line-number directives, does not break the indentation of the generated file.
178 Also, thanks to mode tracking (described in 9) the -L option does not interrupt (and break) multi-line C macros either.
179 This does mean that sometimes the compiler might calculate the source line wrongly when generating error messages in such cases, but there isn't any other way around it if multi-line macros include other chunks.
180 Future releases will include a mapping file so that line/character references from the C compiler can be converted to the correct part of the source document.
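For example, a C chunk could be extracted together with line-number directives like this (a sketch; ./hello.c stands for whatever C root your document defines):
fangle -L -R./hello.c fangle.tex > ./hello.c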
181 2.3 Formatting the document
182 The noweave replacement is built into the editing and formatting environment for TeXmacs, LyX (which uses LaTeX), and even for raw LaTeX.
183 Use of fangle with TeXmacs, LyX and LaTeX is explained in the next few chapters.
184 Chapter 3 Using Fangle with LaTeX
185 Because the noweave replacement is implemented in LaTeX, there is no processing stage required before running the LaTeX command. Of course, LaTeX may need running two or more times, so that the code chunk references can be fully calculated.
186 The formatting is managed by a set of macros shown in 15, and can be included with:
187 \usepackage{fangle.sty}
188 Norman Ramsey's original noweb.sty package is currently required as it is used for formatting the code chunk captions.
189 The listings.sty package is required, and is used for formatting the code chunks and syntax highlighting.
190 The xargs.sty package is also required, and makes writing LaTeX macros so much more pleasant.
191 To do: Add examples of use of Macros
193 Chapter 4 Using Fangle with LyX
194 LyX uses the same LaTeX macros shown in 15 as part of a LyX module file fangle.module, which automatically includes the macros in the document preamble provided that the fangle LyX module is used in the document.
195 4.1 Installing the LyX module
196 Copy fangle.module to your LyX layouts directory, which for unix users will be ~/.lyx/layouts
197 In order to make the new literate styles available, you will need to reconfigure LyX by clicking Tools->Reconfigure, and then re-start LyX.
198 4.2 Obtaining a decent mono font
199 The syntax high-lighting features of lstlistings make use of bold; however a mono-space tt font is used to typeset the listings. Obtaining a bold tt font can be impossibly difficult and amazingly easy. I spent many hours at it, following complicated instructions from those who had spent many hours over it, and was finally given the simple solution on the lyx mailing list.
201 The simple way was to add this to my preamble:
203 \renewcommand{\ttdefault}{txtt}
206 The next simplest way was to use the AMS poor man's bold, by adding this to the preamble:
208 %\renewcommand{\ttdefault}{txtt}
209 % somehow make \pmb the command for bold; I forget exactly how, and the line above does not work
210 It works, but looks wretched on the dvi viewer.
212 The lstlistings documentation suggests using Luximono.
213 Luximono was installed according to the instructions in Ubuntu Forums thread 11591811. http://ubuntuforums.org/showthread.php?t=1159181 ^1 with tips from miknight2. http://miknight.blogspot.com/2005/11/how-to-install-luxi-mono-font-in.html ^2 stating that sudo updmap --enable MixedMap ul9.map is required. It looks fine in PDF and PS view but still looks rotten in dvi view.
214 4.3 Formatting your Lyx document
215 It is not necessary to base your literate document on any of the original LyX literate classes, so select a regular class for your document type.
216 Add the new module Fangle Literate Listings and also Logical Markup which is very useful.
217 In the drop-down style listbox you should notice a new style defined, called Chunk.
218 When you wish to insert a literate chunk, you enter its plain name in the Chunk style, instead of the old noweb method that uses <<name>>= type tags. In the line (or paragraph) following the chunk name, you insert a listing with: Insert->Program Listing.
219 Inside the white listing box you can type (or paste using shift+ctrl+V) your listing. There is no need to use ctrl+enter at the end of lines as with some older LyX literate techniques --- just press enter as normal.
220 4.3.1 Customising the listing appearance
221 The code is formatted using the lstlistings package. The chunk style doesn't just define the chunk name, but can also define any other chunk options supported by the lstlistings package \lstset command. In fact, what you type in the chunk style is raw LaTeX. If you want to set the chunk language without having to right-click the listing, just add ,language=C after the chunk name. (Currently the language will affect all subsequent listings, so you may need to specify ,language= quite a lot).
222 To do: so fix the bug
224 Of course you can do this by editing the listings box advanced properties by right-clicking on the listings box, but that takes longer, and you can't see at-a-glance what the advanced settings are while editing the document; also advanced settings apply only to that box --- the chunk settings apply through the rest of the document3. It ought to apply only to subsequent chunks of the same name. I'll fix that later ^3.
225 To do: So make sure they only apply to chunks of that name
227 4.3.2 Global customisations
228 As lstlistings is used to set the code chunks, its \lstset command can be used in the preamble to set some document-wide settings.
229 If your source has many words with long sequences of capital letters, then columns=fullflexible may be a good idea, or the capital letters will get crowded. (I think lstlistings ought to use a slightly smaller font for capital letters so that they still fit).
230 The font family \ttfamily looks more normal for code, but has no bold (an alternate typewriter font is used).
231 With \ttfamily, I must also specify columns=fullflexible or the wrong letter spacing is used.
232 In my LaTeX pre-amble I usually specialise my code format with:
234 19a <document-preamble[1](), lang=tex> ≡
235 ________________________________________________________________________
237 2 | numbers=left, stepnumber=1, numbersep=5pt,
238 3 | breaklines=false,
239 4 | basicstyle=\footnotesize\ttfamily,
240 5 | numberstyle=\tiny,
242 7 | columns=fullflexible,
243 8 | numberfirstline=true
245 |________________________________________________________________________
249 4.4 Configuring the build script
250 You can invoke code extraction and building from the L Y X menu option Document->Build Program.
251 First, make sure you don't have a conversion defined for Lyx->Program
252 From the menu Tools->Preferences, add a conversion from Latex(Plain)->Program as:
253 set -x ; fangle -Rlyx-build $$i |
254 env LYX_b=$$b LYX_i=$$i LYX_o=$$o LYX_p=$$p LYX_r=$$r bash
255 (But don't cut-n-paste it from this document or you may be pasting a multi-line string which will break your lyx preferences file).
256 I hope that one day, LyX will set these into the environment when calling the build script.
257 You may also want to consider adding options to this conversion...
258 parselog=/usr/share/lyx/scripts/listerrors
259 ...but if you do you will lose your stderr4. There is some bash plumbing to get a copy of stderr but this footnote is too small ^4.
260 Now, a shell script chunk called lyx-build will be extracted and run whenever you choose the Document->Build Program menu item.
261 This document was originally managed using LyX, and the lyx-build script for this document is shown here for historical reference.
262 lyx -e latex fangle.lyx && \
263 fangle fangle.tex > ./autoboot
264 This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
266 When the lyx-build chunk is executed, the current directory will be a temporary directory, and LYX_SOURCE will refer to the tex file in this temporary directory. This is unfortunate as our makefile wants to run from the project directory where the Lyx file is kept.
267 We can extract the project directory from $$r, and derive the probable Lyx filename from the noweb file that Lyx generated.
269 19b <lyx-build-helper[1](), lang=sh> ≡ 83c⊳
270 ________________________________________________________________________
271 1 | PROJECT_DIR="$LYX_r"
272 2 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
274 4 | TEX_SRC="$TEX_DIR/$LYX_i"
275 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
276 And then we can define a lyx-build fragment similar to the autoboot fragment
278 20a <lyx-build[1](), lang=sh> ≡ 83a⊳
279 ________________________________________________________________________
281 2 | =<\chunkref{lyx-build-helper}>
282 3 | cd $PROJECT_DIR || exit 1
284 5 | #/usr/bin/fangle -filter ./notanglefix-filter \
285 6 | # -R./Makefile.inc "../../noweb-lyx/noweb-lyx3.lyx" \
286 7 | # | sed '/NOWEB_SOURCE=/s/=.*/=samba4-dfs.lyx/' \
287 8 | # > ./Makefile.inc
289 10 | #make -f ./Makefile.inc fangle_sources
290 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
292 Chapter 5 Using Fangle with TeXmacs
293 To do: Write this chapter
295 Chapter 6 Fangle with Makefiles
296 Here we describe a Makefile.inc that you can include in your own Makefiles, or glue as a recursive make to other projects.
297 Makefile.inc will cope with extracting all the other source files from this or any specified literate document and keeping them up to date.
298 It may also be included by a Makefile or Makefile.am defined in a literate document to automatically deal with the extraction of source files and documents during normal builds.
299 Thus, if Makefile.inc is included into a main project makefile, it adds rules for the source files, capable of extracting the source files from the literate document.
300 6.1 A word about makefile formats
301 Whitespace formatting is very important in a Makefile. The first character of each action line must be a TAB.
302 target: pre-requisite
305 This requires that the literate programming environment have the ability to represent a TAB character in a way that fangle will generate an actual TAB character.
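A quick way to confirm that an extracted makefile really does start its action lines with a TAB rather than spaces is to make the control characters visible; a TAB shows up as ^I (a sketch, using the extracted ./Makefile.inc):
cat -t ./Makefile.inc | head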
306 We also adopt a convention that code chunks whose names begin with ./ should always be automatically extracted from the document. Code chunks whose names do not begin with ./ are for internal reference. Such chunks may be extracted directly, but will not be automatically extracted by this Makefile.
307 6.2 Extracting Sources
308 Our makefile has two parts; variables must be defined before the targets that use them.
309 As we progress through this chapter, explaining concepts, we will be adding lines to <Makefile.inc-vars 23b> and <Makefile.inc-targets 24b> which are included in <./Makefile.inc 23a> below.
311 23a <./Makefile.inc[1](), lang=make> ≡
312 ________________________________________________________________________
313 1 | «Makefile.inc-vars 23b»
314 2 | «Makefile.inc-targets 24b»
315 |________________________________________________________________________
318 We first define a placeholder for LITERATE_SOURCE to hold the name of this document. This will normally be passed on the command line.
320 23b <Makefile.inc-vars[1](), lang=> ≡ 24a⊳
321 ________________________________________________________________________
323 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
324 Fangle cannot process LyX or TeXmacs documents directly, so the first stage is to convert these to more suitable text-based formats1. LyX and TeXmacs formats are text-based, but not suitable for fangle ^1.
325 6.2.1 Converting from LyX to LaTeX
326 The first stage will always be to convert the LyX file to a LaTeX file. Fangle must run on a TeX file because the LyX command server-goto-file-line2. The LyX command server-goto-file-line is used to position the LyX cursor at the compiler errors. ^2 requires that the line number provided be a line of the TeX file, and always maps this to the line in the LyX document. We use server-goto-file-line when moving the cursor to error lines during compile failures.
327 The command lyx -e literate fangle.lyx will produce fangle.tex, a TeX file; so we define a make target to be the same as the LyX file but with the .tex extension.
328 The EXTRA_DIST is for automake support so that the TeX files will automatically be distributed with the source, to help those who don't have LyX installed.
330 24a <Makefile.inc-vars[2]() ⇑23b, lang=> +≡ ⊲23b 24c▿
331 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
332 2 | TEX_SOURCE=$(LYX_SOURCE:.lyx=.tex)
333 3 | EXTRA_DIST+=$(TEX_SOURCE)
334 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
335 We then specify that the TeX source is to be generated from the LyX source.
337 24b <Makefile.inc-targets[1](), lang=> ≡ 24d▿
338 ________________________________________________________________________
339 1 | $(TEX_SOURCE): $(LYX_SOURCE)
342 4 | ↦rm -f -- $(TEX_SOURCE)
344 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
345 6.2.2 Converting from TeXmacs
346 Fangle cannot process TeXmacs files directly3. but this is planned when TeXmacs uses xml as its native format ^3, so they must first be converted to text files.
347 The command texmacs -c fangle.tm fangle.txt -q will produce fangle.txt, a text file; so we define a make target to be the same as the TeXmacs file but with the .txt extension.
348 The EXTRA_DIST is for automake support so that the text files will automatically be distributed with the source, to help those who don't have TeXmacs installed.
350 24c <Makefile.inc-vars[3]() ⇑23b, lang=> +≡ ▵24a 25a⊳
351 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
352 4 | TXT_SOURCE=$(LITERATE_SOURCE:.tm=.txt)
353 5 | EXTRA_DIST+=$(TXT_SOURCE)
354 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
355 To do: Add loop around each $< so multiple targets can be specified
358 24d <Makefile.inc-targets[2]() ⇑24b, lang=> +≡ ▵24b 25c⊳
359 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
360 6 | $(TXT_SOURCE): $(LITERATE_SOURCE)
361 7 | ↦texmacs -c $< $(TXT_SOURCE) -q
363 9 | ↦rm -f -- $(TXT_SOURCE)
364 10 | clean: clean_txt
365 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
366 6.3 Extracting Program Source
367 The program source is extracted using fangle, which is designed to operate on text or LaTeX documents4. LaTeX documents are just slightly special text documents ^4.
369 25a <Makefile.inc-vars[4]() ⇑23b, lang=> +≡ ⊲24c 25b▿
370 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
371 6 | FANGLE_SOURCE=$(TEX_SOURCE) $(TXT_SOURCE)
372 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
373 The literate document can result in any number of source files, but not all of these will be changed each time the document is updated. We certainly don't want to update the timestamps of these files and cause the whole source tree to be recompiled just because the literate explanation was revised. We use cpif from the Noweb tools to avoid updating the file if the content has not changed, but should probably write our own.
374 However, if a source file is not updated, then the fangle file will always have a newer time-stamp and the makefile would always re-attempt to extract a newer source file, which would be a waste of time.
375 Because of this, we use a stamp file which is always updated each time the sources are fully extracted from the LaTeX document. If the stamp file is newer than the document, then we can avoid an attempt to re-extract any of the sources. Because this stamp file is only updated when extraction is complete, it is safe for the user to interrupt the build-process mid-extraction.
376 We use echo rather than touch to update the stamp file because the touch command does not work very well over an sshfs mount that I was using.
378 25b <Makefile.inc-vars[5]() ⇑23b, lang=> +≡ ▵25a 26a⊳
379 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
380 7 | FANGLE_SOURCE_STAMP=$(FANGLE_SOURCE).stamp
381 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
383 25c <Makefile.inc-targets[3]() ⇑24b, lang=> +≡ ⊲24d 26b⊳
384 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
385 11 | $(FANGLE_SOURCE_STAMP): $(FANGLE_SOURCE) \
386 12 | ↦ $(FANGLE_SOURCES) ; \
387 13 | ↦echo -n > $(FANGLE_SOURCE_STAMP)
389 15 | ↦rm -f $(FANGLE_SOURCE_STAMP)
390 16 | clean: clean_stamp
391 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
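The effect is the usual make freshness rule; in plain shell terms the check is roughly this (a sketch, with fangle.tex standing for $(FANGLE_SOURCE) and fangle.tex.stamp for the stamp file):
if [ fangle.tex.stamp -nt fangle.tex ]; then
  echo "nothing to re-extract"
fi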
392 6.4 Extracting Source Files
393 We compute FANGLE_SOURCES to hold the names of all the source files defined in the document. We compute this only once, by means of := in the assignment. The sed deletes any << and >> which may surround the root names (for compatibility with Noweb's noroots command).
394 As we use chunk names beginning with ./ to denote top level fragments that should be extracted, we filter out all fragments that do not begin with ./
395 Note 1. FANGLE_PREFIX is set to ./ by default, but whatever it may be overridden to, the prefix is replaced by a literal ./ before extraction so that files will be extracted in the current directory whatever the prefix. This helps namespace or sub-project prefixes like documents: for chunks like documents:docbook/intro.xml
396 To do: This doesn't work though, because it loses the full name and doesn't know what to extract!
399 26a <Makefile.inc-vars[6]() ⇑23b, lang=> +≡ ⊲25b 26e▿
400 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
401 8 | FANGLE_PREFIX:=\.\/
402 9 | FANGLE_SOURCES:=$(shell \
403 10 | fangle -r $(FANGLE_SOURCE) |\
404 11 | sed -e 's/^[<][<]//;s/[>][>]$$//;/^$(FANGLE_PREFIX)/!d' \
405 12 | -e 's/^$(FANGLE_PREFIX)/\.\//' )
406 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
407 The target below, echo_fangle_sources is a helpful debugging target and shows the names of the files that would be extracted.
409 26b <Makefile.inc-targets[4]() ⇑24b, lang=> +≡ ⊲25c 26c▿
410 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
411 17 | .PHONY: echo_fangle_sources
412 18 | echo_fangle_sources: ; @echo $(FANGLE_SOURCES)
413 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
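For example, the files that would be extracted from this document can be listed with (a sketch; fangle.tm stands for whatever the literate source is called):
make -f Makefile.inc echo_fangle_sources LITERATE_SOURCE=fangle.tm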
414 We define a convenient target called fangle_sources so that make -f Makefile.inc fangle_sources will re-extract the source if the literate document has been updated.
416 26c <Makefile.inc-targets[5]() ⇑24b, lang=> +≡ ▵26b 26d▿
417 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
418 19 | .PHONY: fangle_sources
419 20 | fangle_sources: $(FANGLE_SOURCE_STAMP)
420 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
421 And also a convenient target to remove extracted sources.
423 26d <Makefile.inc-targets[6]() ⇑24b, lang=> +≡ ▵26c 27d⊳
424 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
425 21 | .PHONY: clean_fangle_sources
426 22 | clean_fangle_sources: ; \
427 23 | rm -f -- $(FANGLE_SOURCE_STAMP) $(FANGLE_SOURCES)
428 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
429 We now look at the extraction of the source files.
430 This makefile macro if_extension takes 4 arguments: the filename $(1), some extensions to match $(2), a shell command to return if the filename does match the extensions $(3), and a shell command to return if it does not match the extensions $(4).
432 26e <Makefile.inc-vars[7]() ⇑23b, lang=> +≡ ▵26a 26f▿
433 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
434 13 | if_extension=$(if $(findstring $(suffix $(1)),$(2)),$(3),$(4))
435 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
436 For some source files like C files, we want to output the line number and filename of the original LaTeX document from which the source came5. I plan to replace this option with a separate mapping file so as not to pollute the generated source, and also to allow a code pretty-printing reformatter like indent to be able to re-format the file and adjust for changes through comparing the character streams. ^5.
437 To make this easier we define the file extensions for which we want to do this.
439 26f <Makefile.inc-vars[8]() ⇑23b, lang=> +≡ ▵26e 26g▿
440 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
441 14 | C_EXTENSIONS=.c .h
442 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
443 We can then use the if_extension macro to define a macro which expands out to the -L option if fangle is being invoked on a C source file, so that C compile errors will refer to the line number in the TeX document.
445 26g <Makefile.inc-vars[9]() ⇑23b, lang=> +≡ ▵26f 27a⊳
446 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
448 16 | nf_line=-L -T$(TABS)
449 17 | fangle=fangle $(call if_extension,$(2),$(C_EXTENSIONS),$(nf_line)) -R"$(2)" $(1)
450 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
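For illustration, assuming TABS is set to 8 in the omitted line above, $(call fangle,fangle.tex,./hello.c) for a hypothetical C chunk ./hello.c would expand to roughly the first command below, while a non-C chunk such as ./Makefile.inc gets no -L option:
fangle -L -T8 -R"./hello.c" fangle.tex
fangle -R"./Makefile.inc" fangle.tex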
451 We can use a similar trick to define an indent macro which takes just the filename as an argument and can return a pipeline stage calling the indent command. Indent can be turned off with make fangle_sources indent=
453 27a <Makefile.inc-vars[10]() ⇑23b, lang=> +≡ ⊲26g 27b▿
454 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
455 18 | indent_options=-npro -kr -i8 -ts8 -sob -l80 -ss -ncs
456 19 | indent=$(call if_extension,$(1),$(C_EXTENSIONS), | indent $(indent_options))
457 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
458 We now define the pattern for extracting a file. The files are written using noweb's cpif so that the file timestamp will not be touched if the contents haven't changed. This avoids the need to rebuild the entire project because of a typographical change in the documentation, or when only a few (or none) of the C source files have changed.
460 27b <Makefile.inc-vars[11]() ⇑23b, lang=> +≡ ▵27a 27c▿
461 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
462 20 | fangle_extract=@mkdir -p $(dir $(1)) && \
463 21 | $(call fangle,$(2),$(1)) > "$(1).tmp" && \
464 22 | cat "$(1).tmp" $(indent) | cpif "$(1)" \
465 23 | && rm -- "$(1).tmp" || \
466 24 | (echo error newfangling $(1) from $(2) ; exit 1)
467 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
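The property of cpif that matters here is that it only rewrites its target when the content differs, so a no-change extraction leaves the timestamp alone; a quick sketch at the shell (greeting.txt is a throw-away example file):
echo hello | cpif greeting.txt
echo hello | cpif greeting.txt    # same content, so greeting.txt keeps its old timestamp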
468 We define a target which will extract or update all sources. To do this we first define a makefile template that can do this for any source file in the LaTeX document.
470 27c <Makefile.inc-vars[12]() ⇑23b, lang=> +≡ ▵27b 28a⊳
471 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
472 25 | define FANGLE_template
474 27 | ↦$$(call fangle_extract,$(1),$(2))
475 28 | FANGLE_TARGETS+=$(1)
477 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
478 We then enumerate the discovered FANGLE_SOURCES to generate a makefile rule for each one using the makefile template we defined above.
480 27d <Makefile.inc-targets[7]() ⇑24b, lang=> +≡ ⊲26d 27e▿
481 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
482 24 | $(foreach source,$(FANGLE_SOURCES),\
483 25 | $(eval $(call FANGLE_template,$(source),$(FANGLE_SOURCE))) \
485 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
486 These will all be built with FANGLE_SOURCE_STAMP.
487 We also remove the generated sources on a make distclean.
489 27e <Makefile.inc-targets[8]() ⇑24b, lang=> +≡ ▵27d 28b⊳
490 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
491 27 | _distclean: clean_fangle_sources
492 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
493 6.5 Extracting Documentation
494 We then identify the intermediate stages of the documentation and their build and clean targets.
496 6.5.1.1 Running pdflatex
497 We produce a pdf file from the tex file.
499 28a <Makefile.inc-vars[13]() ⇑23b, lang=> +≡ ⊲27c 28c▿
500 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
501 30 | FANGLE_PDF=$(TEX_SOURCE:.tex=.pdf)
502 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
503 We run pdflatex twice to be sure that the contents and aux files are up to date. We certainly are required to run pdflatex at least twice if these files do not exist.
505 28b <Makefile.inc-targets[9]() ⇑24b, lang=> +≡ ⊲27e 28d▿
506 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
507 28 | $(FANGLE_PDF): $(TEX_SOURCE)
508 29 | ↦pdflatex $< && pdflatex $<
511 32 | ↦rm -f -- $(FANGLE_PDF) $(TEX_SOURCE:.tex=.toc) \
512 33 | ↦ $(TEX_SOURCE:.tex=.log) $(TEX_SOURCE:.tex=.aux)
513 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
514 6.5.2 Formatting TeXmacs
515 TeXmacs can produce a PDF file directly.
517 28c <Makefile.inc-vars[14]() ⇑23b, lang=> +≡ ▵28a 28e▿
518 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
519 31 | FANGLE_PDF=$(LITERATE_SOURCE:.tm=.pdf)
520 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
521 To do: Outputting the PDF may not be enough to update the links and page references. I think
522 we need to update twice, generate a pdf, update twice mode and generate a new PDF.
523 Basically the PDF export of TeXmacs is pretty rotten and doesn't work properly from the CLI
526 28d <Makefile.inc-targets[10]() ⇑24b, lang=> +≡ ▵28b 28f▿
527 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
528 34 | $(FANGLE_PDF): $(TEXMACS_SOURCE)
529 35 | ↦texmacs -c $< $(FANGLE_PDF) -q
532 38 | ↦rm -f -- $(FANGLE_PDF)
533 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
534 6.5.3 Building the Documentation as a Whole
535 Currently we only build pdf as a final format, but FANGLE_DOCS may later hold other output formats.
537 28e <Makefile.inc-vars[15]() ⇑23b, lang=> +≡ ▵28c
538 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
539 32 | FANGLE_DOCS=$(FANGLE_PDF)
540 |________________________________________________________________________
543 We also define fangle_docs as a convenient phony target.
545 28f <Makefile.inc-targets[11]() ⇑24b, lang=> +≡ ▵28d 28g▿
546 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
547 39 | .PHONY: fangle_docs
548 40 | fangle_docs: $(FANGLE_DOCS)
549 41 | docs: fangle_docs
550 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
551 And define a convenient clean_fangle_docs which we add to the regular clean target
553 28g <Makefile.inc-targets[12]() ⇑24b, lang=> +≡ ▵28f
554 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
555 42 | .PHONY: clean_fangle_docs
556 43 | clean_fangle_docs: clean_tex clean_pdf
557 44 | clean: clean_fangle_docs
559 46 | distclean_fangle_docs: clean_tex clean_fangle_docs
560 47 | distclean: clean distclean_fangle_docs
561 |________________________________________________________________________
565 If Makefile.inc is included into Makefile, then extracted files can be updated with this command:
568 make -f Makefile.inc fangle_sources
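When Makefile.inc is used on its own rather than included, the literate document is named on the command line, as noted in 6.2 (a sketch; fangle.tm is a stand-in name):
make -f Makefile.inc fangle_sources LITERATE_SOURCE=fangle.tm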
569 6.7 Boot-strapping the extraction
570 As well as having the makefile extract or update the source files as part of its operation, it also seems convenient to have the makefile re-extracted from this document.
571 It would also be convenient for the code that extracts the makefile from this document to be part of this document itself; however, we have to start somewhere, and this unfortunately requires us to type at least a few words by hand to start things off.
572 Therefore we will have a minimal root fragment, which, when extracted, can cope with extracting the rest of the source. This shell script fragment can do that. Its name is * — out of regard for Noweb, but when extracted it might better be called autoupdate.
576 29a <*[1](), lang=sh> ≡
577 ________________________________________________________________________
580 3 | MAKE_SRC="${1:-${NW_LYX:-../../noweb-lyx/noweb-lyx3.lyx}}"
581 4 | MAKE_SRC=`dirname "$MAKE_SRC"`/`basename "$MAKE_SRC" .lyx`
582 5 | NOWEB_SRC="${2:-${NOWEB_SRC:-$MAKE_SRC.lyx}}"
583 6 | lyx -e latex $MAKE_SRC
585 8 | fangle -R./Makefile.inc ${MAKE_SRC}.tex \
586 9 | | sed "/FANGLE_SOURCE=/s/^/#/;T;aNOWEB_SOURCE=$FANGLE_SRC" \
587 10 | | cpif ./Makefile.inc
589 12 | make -f ./Makefile.inc fangle_sources
590 |________________________________________________________________________
593 The general Makefile can be invoked with ./autoboot and can also be included into any automake file to automatically re-generate the source files.
594 The autoboot can be extracted with this command:
595 lyx -e latex fangle.lyx && \
596 fangle fangle.tex > ./autoboot
597 This looks simple enough, but as mentioned, fangle has to be had from somewhere before it can be extracted.
598 On a unix system this will extract fangle.module and the fangle awk script, and run some basic tests.
599 To do: cross-ref to test chapter when it is a chapter all on its own
601 6.8 Incorporating Makefile.inc into existing projects
602 If you are writing a literate module of an existing non-literate program you may find it easier to use a small recursive make instead of directly including Makefile.inc in the project's makefile.
603 This way there is less chance of definitions in Makefile.inc interfering with definitions in the main makefile, or with definitions in other Makefile.inc files from other literate modules of the same project.
604 To do this we add some glue to the project makefile that invokes Makefile.inc in the right way. The glue works by adding a .PHONY target to call the recursive make, and adding this target as an additional pre-requisite to the existing targets.
605 Example: Sub-module of an existing system
606 In this example, we are building module.so as a literate module of a larger project.
607 We will show the sort of glue that can be inserted into the project's Makefile — or more likely — a regular Makefile included in or invoked by the project's Makefile.
609 30a <makefile-glue[1](), lang=> ≡ 30b▿
610 ________________________________________________________________________
611 1 | module_srcdir=modules/module
612 2 | MODULE_SOURCE=module.tm
613 3 | MODULE_STAMP=$(MODULE_SOURCE).stamp
614 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
615 The existing build system may already have a build target for module.o, but we just add another pre-requisite to that. In this case we use module.tm.stamp as a pre-requisite, the stamp file's modified time indicating when all sources were extracted6. If the project's build system does not know how to build the module from the extracted sources, then just add build actions here as normal. ^6.
617 30b <makefile-glue[2]() ⇑30a, lang=make> +≡ ▵30a 30c▿
618 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
619 4 | $(module_srcdir)/module.o: $(module_srcdir)/$(MODULE_STAMP)
620 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
621 The target for this new pre-requisite will be generated by a recursive make using Makefile.inc which will make sure that the source is up to date, before it is built by the main project's makefile.
623 30c <makefile-glue[3]() ⇑30a, lang=> +≡ ▵30b 30d▿
624 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
625 5 | $(module_srcdir)/$(MODULE_STAMP): $(module_srcdir)/$(MODULE_SOURCE)
626 6 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc fangle_sources LITERATE_SOURCE=$(MODULE_SOURCE)
627 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
628 We can do similar glue for the docs, clean and distclean targets. In this example the main project was using a double colon for these targets, so we must use the same in our glue.
630 30d <makefile-glue[4]() ⇑30a, lang=> +≡ ▵30c
631 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
632 7 | docs:: docs_module
633 8 | .PHONY: docs_module
635 10 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc docs LITERATE_SOURCE=$(MODULE_SOURCE)
637 12 | clean:: clean_module
638 13 | .PHONY: clean_module
640 15 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc clean LITERATE_SOURCE=$(MODULE_SOURCE)
642 17 | distclean:: distclean_module
643 18 | .PHONY: distclean_module
644 19 | distclean_module:
645 20 | ↦$(MAKE) -C $(module_srcdir) -f Makefile.inc distclean LITERATE_SOURCE=$(MODULE_SOURCE)
646 |________________________________________________________________________
649 We could do similarly for install targets to install the generated docs.
651 Chapter 7 Fangle awk source code
652 We use the copyright notice from chapter 2.
654 33a <./fangle[1](), lang=awk> ≡ 33b▿
655 ________________________________________________________________________
656 1 | #! /usr/bin/awk -f
657 2 | # «gpl3-copyright 4a»
658 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
659 We also use code from Arnold Robbins' public domain getopt (1993 revision) defined in 73a, and naturally want to attribute this appropriately.
661 33b <./fangle[2]() ⇑33a, lang=> +≡ ▵33a 33c▿
662 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
663 3 | # NOTE: Arnold Robbins public domain getopt for awk is also used:
664 4 | «getopt.awk-header 71a»
665 5 | «getopt.awk-getopt() 71c»
667 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
668 And include the following chunks (which are explained further on) to make up the program:
670 33c <./fangle[3]() ⇑33a, lang=> +≡ ▵33b 36a⊳
671 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
672 7 | «helper-functions 34d»
673 8 | «mode-tracker 52b»
674 9 | «parse_chunk_args 38a»
675 10 | «chunk-storage-functions 69b»
676 11 | «output_chunk_names() 63d»
677 12 | «output_chunks() 63e»
678 13 | «write_chunk() 64a»
679 14 | «expand_chunk_args() 38b»
682 17 | «recognize-chunk 55a»
684 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
686 The portable way to erase an array in awk is to split the empty string into it, so we define a fangle macro that can clear an array, like this:
688 33d <awk-delete-array[1](ARRAY), lang=awk> ≡
689 ________________________________________________________________________
690 1 | split("", ${ARRAY});
691 |________________________________________________________________________
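For example, another awk chunk could clear a hypothetical array called values by including this macro with the array name as a parameter, writing =<\chunkref{awk-delete-array}(values)>, which fangle expands to split("", values); in the extracted source.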
694 For debugging it is sometimes convenient to be able to dump the contents of an array to stderr, and so this macro is also useful.
696 33e <dump-array[1](ARRAY), lang=awk> ≡
697 ________________________________________________________________________
698 1 | print "\nDump: ${ARRAY}\n--------\n" > "/dev/stderr";
699 2 | for (_x in ${ARRAY}) {
700 3 | print _x "=" ${ARRAY}[_x] "\n" > "/dev/stderr";
702 5 | print "========\n" > "/dev/stderr";
703 |________________________________________________________________________
707 Fatal errors are issued with the error function:
709 34a <error()[1](), lang=awk> ≡ 34b▿
710 ________________________________________________________________________
711 1 | function error(message)
713 3 | print "ERROR: " FILENAME ":" FNR " " message > "/dev/stderr";
716 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
717 and likewise for non-fatal warnings:
719 34b <error()[2]() ⇑34a, lang=awk> +≡ ▵34a 34c▿
720 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
721 6 | function warning(message)
723 8 | print "WARNING: " FILENAME ":" FNR " " message > "/dev/stderr";
726 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
727 and debug output too:
729 34c <error()[3]() ⇑34a, lang=awk> +≡ ▵34b
730 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
731 11 | function debug_log(message)
733 13 | print "DEBUG: " FILENAME ":" FNR " " message > "/dev/stderr";
735 |________________________________________________________________________
738 To do: append=helper-functions
741 34d <helper-functions[1](), lang=> ≡
742 ________________________________________________________________________
744 |________________________________________________________________________
747 Chapter 8 LaTeX and lstlistings
748 To do: Split LyX and TeXmacs parts
750 For LyX and LaTeX, the lstlistings package is used to format the lines of code chunks. You may recall from chapter XXX that arguments to a chunk definition are pure LaTeX code. This means that fangle needs to be able to parse LaTeX a little.
751 LaTeX arguments to lstlistings macros are a comma-separated list of key-value pairs, and values containing commas are enclosed in { braces } (which is to be expected for LaTeX).
752 A sample expression is:
753 name=thomas, params={a, b}, something, something-else
754 but we see that this is just a simpler form of this expression:
755 name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
756 We may consider that we need a function that can parse such LaTeX expressions and assign the values to an AWK associative array, perhaps using a recursive parser into a multi-dimensional hash1. as AWK doesn't have nested-hash support ^1, resulting in:
761 a[foo, quux, a] fleeg
764 Yet, also, on reflection it seems that sometimes such nesting is not desirable, as the braces are also used to delimit values that contain commas --- we may consider that
765 name={williamson, freddie}
766 should assign williamson, freddie to name.
767 In fact we are not so interested in the detail so as to be bothered by this, which turns out to be a good thing for two reasons. Firstly TeX has a malleable parser with no strict syntax, and secondly whether or not williamson and freddie should count as two items will be context-dependent anyway.
768 We need to parse this LaTeX for only one reason: we are extending lstlistings to add some additional arguments which will be used to express chunk parameters and other chunk options.
769 8.1 Additional lstlistings parameters
770 Further on we define a \Chunk LaTeX macro whose arguments will consist of the chunk name, optionally followed by a comma and then a comma-separated list of arguments. In fact we will just need to prefix name= to the arguments in order to create valid lstlistings arguments.
771 There will be other arguments supported too:
772 params. As an extension to many literate-programming styles, fangle permits code chunks to take parameters and thus operate somewhat like C pre-processor macros, or like C++ templates. Chunk parameters are declared with a chunk argument called params, which holds a semi-colon separated list of parameters, like this:
773 achunk,language=C,params=name;address
774 addto. A named chunk that this chunk is to be included into. This saves the effort of having to declare another listing of the named chunk merely to include this one.
775 Function get_chunk_args() will accept two parameters, text being the text to parse, and values being an array to receive the parsed values as described above. The optional parameter path is used during recursion to build up the multi-dimensional array path.
777 36a <./fangle[4]() ⇑33a, lang=> +≡ ⊲33c
778 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
779 19 | =<\chunkref{get_chunk_args()}>
780 |________________________________________________________________________
784 36b <get_chunk_args()[1](), lang=> ≡ 36c▿
785 ________________________________________________________________________
786 1 | function get_chunk_args(text, values,
787 2 | # optional parameters
788 3 | path, # hierarchical precursors
791 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
792 The strategy is to parse the name, and then look for a value. If the value begins with a brace {, then we recurse and consume as much of the text as necessary, returning the remaining text when we encounter a leading close-brace }. This being the strategy --- and executed in a loop --- we realise that we must first look for the closing brace (perhaps preceded by white space) in order to terminate the recursion and return the remaining text.
794 36c <get_chunk_args()[2]() ⇑36b, lang=> +≡ ▵36b
795 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
797 7 | split("", next_chunk_args);
798 8 | while(length(text)) {
799 9 | if (match(text, "^ *}(.*)", a)) {
802 12 | =<\chunkref{parse-chunk-args}>
806 |________________________________________________________________________
809 We can see that the text could be inspected with this regex:
811 36d <parse-chunk-args[1](), lang=> ≡ 37a⊳
812 ________________________________________________________________________
813 1 | if (! match(text, " *([^,=]*[^,= ]) *(([,=]) *(([^,}]*) *,* *(.*))|)$", a)) {
816 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
817 and that a will have the following values:
1 name
2 =freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
3 =
4 freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc
5 freddie
6 , foo={bar=baz, quux={quirk, a=fleeg}}, etc
826 a[3] will be either = or , and signify whether the option named in a[1] has a value or not (respectively).
827 If the option does have a value, then if the expression substr(a[4],1,1) returns a brace { it will signify that we need to recurse:
829 37a <parse-chunk-args[2]() ⇑36d, lang=> +≡ ⊲36d
830 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
832 5 | if (a[3] == "=") {
833 6 | if (substr(a[4],1,1) == "{") {
834 7 | text = get_chunk_args(substr(a[4],2), values, path name SUBSEP);
836 9 | values[path name]=a[5];
840 13 | values[path name]="";
843 |________________________________________________________________________
846 We can test this function like this:
848 37b <gca-test.awk[1](), lang=> ≡
849 ________________________________________________________________________
850 1 | =<\chunkref{get_chunk_args()}>
854 5 | print get_chunk_args("name=freddie, foo={bar=baz, quux={quirk, a=fleeg}}, etc", a);
856 7 | print "a[" b "] => " a[b];
859 |________________________________________________________________________
862 which should give this output:
864 37c <gca-test.awk-results[1](), lang=> ≡
865 ________________________________________________________________________
866 1 | a[foo.quux.quirk] =>
867 2 | a[foo.quux.a] => fleeg
868 3 | a[foo.bar] => baz
870 5 | a[name] => freddie
871 |________________________________________________________________________
874 8.2 Parsing chunk arguments
875 Arguments to parameterized chunks are expressed in round brackets as a comma separated list of optional arguments. For example, a chunk that is defined with:
876 \Chunk{achunk, params=name ; address}
878 \chunkref{achunk}(John Jones, jones@example.com)
879 An argument list may be as simple as in \chunkref{pull}(thing, otherthing) or as complex as:
880 \chunkref{pull}(things[x, y], get_other_things(a, "(all)"))
881 --- which for all its commas and quotes and parentheses represents only two parameters: things[x, y] and get_other_things(a, "(all)").
882 If we simply split the parameter list on commas, then the comma in things[x,y] would split it into two separate arguments: things[x and y] --- neither of which makes sense on its own.
883 One way to prevent this would be by refusing to split text between matching delimiters, such as [, ], (, ), {, } and most likely also ", " and ', '. Of course this also makes it impossible to pass such mis-matched code fragments as parameters, but I think that it would be hard for readers to cope with authors who would pass such unbalanced code fragments as chunk parameters2. I know that I couldn't cope with users doing such things, and although the GPL3 license prevents me from actually forbidding anyone from trying, if they want it to work they'll have to write the code themselves and not expect any support from me. ^2.
884 Unfortunately, the full set of matching delimiters may vary from language to language. In certain C++ template contexts, < and > would count as delimiters, and yet in other contexts they would not.
885 This puts me in the unfortunate position of having to parse, somewhat, all programming languages without knowing what they are!
886 However, if this universal mode-tracking is possible, then parsing the arguments would be trivial. Such a mode tracker is described in chapter 9 and is simply used here.
888 38a <parse_chunk_args[1](), lang=> ≡
889 ________________________________________________________________________
890 1 | function parse_chunk_args(language, text, values, mode,
892 3 | c, context, rest)
894 5 | =<\chunkref{new-mode-tracker}(context, language, mode)>
895 6 | rest = mode_tracker(context, text, values);
897 8 | for(c=1; c <= context[0, "values"]; c++) {
898 9 | values[c] = context[0, "values", c];
902 |________________________________________________________________________
905 8.3 Expanding parameters in the text
906 Within the body of the chunk, the parameters are referred to with: ${name} and ${address}. There is a strong case that a LaTeX style notation should be used, like \param{name} which would be expressed in the listing as =<\param{name}> and be rendered as ${name}. Such notation would make me go blind, but I do intend to adopt it.
907 We therefore need a function expand_chunk_args which will take a block of text, a list of permitted parameters, and the arguments which must substitute for the parameters.
908 Here we split the text on ${ which means that all parts except the first will begin with a parameter name which will be terminated by }. The split function will consume the literal ${ in each case.
910 38b <expand_chunk_args()[1](), lang=> ≡
911 ________________________________________________________________________
912 1 | function expand_chunk_args(text, params, args,
913 2 | p, text_array, next_text, v, t, l)
915 4 | if (split(text, text_array, "\\${")) {
916 5 | «substitute-chunk-args 39a»
921 |________________________________________________________________________
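The effect of that split can be checked with a stand-alone awk one-liner from the shell (a sketch; the sample text is made up):
printf '%s\n' 'Hello ${name}, you live at ${address}' | \
  awk '{ n=split($0, parts, "\\${"); for (t=1; t<=n; t++) print t ": " parts[t] }'
The first part carries no parameter reference, and every later part begins with a parameter name terminated by }.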
924 First, we produce an associative array of substitution values indexed by parameter names. This will serve as a cache, allowing us to look up the replacement values as we extract each name.
926 39a <substitute-chunk-args[1](), lang=> ≡ 39b▿
927 ________________________________________________________________________
928 1 | for(p in params) {
929 2 | v[params[p]]=args[p];
931 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
932 We accumulate substituted text in the variable text. As the first part of the split function is the part before the delimiter --- which is ${ in our case --- this part will never contain a parameter reference, so we assign this directly to the result kept in the variable text.
934 39b <substitute-chunk-args[2]() ⇑39a, lang=> +≡ ▵39a 39c▿
935 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
936 4 | text=text_array[1];
937 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
938 We then iterate over the remaining values in the array3. I don't know why I think that it will enumerate the array in order, but it seems to work ^3
939 To do: fix or prove it
940 , and substitute each reference for its argument.
942 39c <substitute-chunk-args[3]() ⇑39a, lang=> +≡ ▵39b
943 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
944 5 | for(t=2; t in text_array; t++) {
945 6 | =<\chunkref{substitute-chunk-arg}>
947 |________________________________________________________________________
950 After the split on ${ a valid parameter reference will consist of a valid parameter name terminated by a close-brace }. A valid parameter name begins with an underscore or a letter, and may contain letters, digits or underscores.
951 A valid-looking reference that is not actually the name of a parameter will be left as it is and not substituted. This is good because there is nothing to substitute anyway, and it avoids clashes when writing code for languages where ${...} is a valid construct --- such constructs will not be interfered with unless the parameter name also matches.
953 39d <substitute-chunk-arg[1](), lang=> ≡
954 ________________________________________________________________________
955 1 | if (match(text_array[t], "^([a-zA-Z_][a-zA-Z0-9_]*)}", l) &&
958 4 | text = text v[l[1]] substr(text_array[t], length(l[1])+2);
960 6 | text = text "${" text_array[t];
962 |________________________________________________________________________
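The name-matching part can be exercised on its own with gawk's three-argument match, which is what the chunk above relies on (a sketch; the input text is made up):
printf '%s\n' 'name} and some trailing text' | \
  gawk '{ if (match($0, "^([a-zA-Z_][a-zA-Z0-9_]*)}", l)) print "parameter name: " l[1] }'
which prints parameter name: name, while input such as 2bad} would not match at all.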
965 Chapter 9 Language Modes & Quoting
967 lstlistings and fangle both recognize source languages, and perform some basic parsing. lstlistings can detect strings and comments within a language definition and perform suitable rendering, such as italics for comments, and visible-spaces within strings.
968 Fangle similarly can recognize strings, and comments, etc, within a language, so that any chunks included with \chunkref can be suitably escaped or quoted.
969 9.1.1 Modes to keep code together
970 As an example, in the C language there are a few parse modes, affecting the interpretation of characters.
971 One parse mode is the string mode. String mode is commenced by an un-escaped quotation mark " and terminated by the same. Within string mode, only one additional mode can be commenced: the backslash mode \, which is always terminated after the following character.
972 Another mode is [ which is terminated by a ] (unless it occurs in a string).
973 Consider this fragment of C code:
975 things[x, y], get_other_things(a, "(all)")
976 Here [x, y] is parsed in [ mode (part 1), (a, "(all)") in ( mode (part 2), and "(all)" within it in " mode (part 3).
977 Mode nesting prevents the close parenthesis in the quoted string (part 3) from terminating the parenthesis mode (part 2).
978 Each language has a set of modes, the default mode being the null mode. Each mode can lead to other modes.
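The nesting can be visualised with a deliberately simplified, self-contained tracker. This is a toy for illustration only: it knows nothing of fangle's mode tables, ignores the backslash mode, handles just the three modes above, and relies on gawk's split-into-characters behaviour when the separator is the empty string.

  BEGIN {
    text = "things[x, y], get_other_things(a, \"(all)\")";
    open["["] = "]"; open["("] = ")"; open["\""] = "\"";
    depth = 0;
    n = split(text, ch, "");               # gawk: one character per element
    for (i = 1; i <= n; i++) {
      c = ch[i];
      if (depth > 0 && c == closer[depth]) {
        printf("%d: leave %s mode\n", depth, opener[depth]);
        depth--;
      } else if (!(depth > 0 && opener[depth] == "\"") && (c in open)) {
        # do not open new modes while inside a string
        depth++; opener[depth] = c; closer[depth] = open[c];
        printf("%d: enter %s mode\n", depth, c);
      }
    }
  }

The close parenthesis inside "(all)" is skipped because the tracker is in " mode at that point, which is the behaviour the real mode tracker of section 9.4 derives from its mode tables.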
979 9.1.2 Modes affect included chunks
980 For instance, consider this chunk with language=perl:
982 41a <example-perl[1](), lang=perl> ≡
983 ________________________________________________________________________
984 print "hello world $0\n";
985 |________________________________________________________________________
988 If it were included in a chunk with language=sh, like this:
990 41b <example-sh[1](), lang=sh> ≡
991 ________________________________________________________________________
992 perl -e "=<\chunkref{example-perl}>"
993 |________________________________________________________________________
996 fangle would want to generate output like this:
997 perl -e "print \"hello world \$0\\n\";"
998 See that the double quote ", back-slash \ and $ have been quoted with a back-slash to protect them from shell interpretation.
999 If that were then included in a chunk with language=make, like this:
1001 42a <example-makefile[1](), lang=make> ≡
1002 ________________________________________________________________________
1004 2 | =<\chunkref{example-sh}>
1005 |________________________________________________________________________
1008 We would need the output to look like this --- note the $$:
1010 perl -e "print \"hello world \$$0\\n\";"
1011 In order to make this work, we need to define a mode-tracker supporting each language, that can detect the various quoting modes, and provide a transformation that must be applied to any included text so that included text will be interpreted correctly after any interpolation that it may be subject to at run-time.
1012 For example, the sed transformation for text to be inserted into shell double-quoted strings would be something like:
1013 s/\\/\\\\/g;s/\$/\\$/g;s/"/\\"/g;
1014 which protects \ $ ".
1015 To do: I don't think this example is true
1016 The mode tracker must also track nested mode-changes, as in this sh example.
1017 echo "hello `id ...`"
1019 Any characters inserted in place of the ... (inside the back-ticks) would need to be escaped, including ` | * among others. First they would need escaping for the back-ticks `, and then for the double-quotes ".
1021 Escaping need not occur if the format and mode of the included chunk matches that of the including chunk.
1022 As each chunk is output, a new mode tracker for that language is initialized in its normal state. As text is output for that chunk, the output mode is tracked. When a new chunk is included, a transformation appropriate to that mode is selected and pushed onto a stack of transformations. Any text to be output is first passed through this stack of transformations.
1023 It remains to consider whether the chunk-include function should return its generated text so that the caller can apply any transformations (and formatting), or whether it should apply the stack of transformations itself.
1024 Note that the transformed text should have the property of not being able to change the mode in the current chunk.
1025 To do: Note chunk parameters should probably also be transformed
1027 9.2 Language Mode Definitions
1028 All modes are stored in a single multi-dimensional hash. The first index is the language, the second index is the mode-identifier, and the third index is one of terminators, submodes or delimiters (submodes and delimiters being optional for a mode).
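For instance, the following stand-alone fragment (runnable in gawk, with the values borrowed from the definitions that follow) shows how the hash is addressed:

  BEGIN {
    modes["c-like", "",   "submodes"]    = "\\\\|\"|'|{|\\(|\\[";
    modes["c-like", "\"", "terminators"] = "\"";
    # look up the terminator of double-quote mode in the c-like language
    if (("c-like", "\"", "terminators") in modes)
      print modes["c-like", "\"", "terminators"];    # prints: "
  }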
1029 A useful set of mode definitions for a nameless general C-type language is shown here. (Don't be confused by the double backslash escaping needed in awk. One set of escaping is for the string, and the second set of escaping is for the regex).
1030 To do: TODO: Add =<\mode{}> command which will allow us to signify that a string is
1031 regex and thus fangle will quote it for us.
1033 Submodes are entered by the characters " ' { ( [ /*
1035 43a <common-mode-definitions[1](language), lang=> ≡ 43b▿
1036 ________________________________________________________________________
1037 1 | modes[${language}, "", "submodes"]="\\\\|\"|'|{|\\(|\\[";
1038 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1039 In the default mode, a comma surrounded by un-important white space is a delimiter of language items1. whatever a language item might be ^1.
1041 43b <common-mode-definitions[2](language) ⇑43a, lang=> +≡ ▵43a 44a▿
1042 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1043 2 | modes[${language}, "", "delimiters"]=" *, *";
1044 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1045 and should pass this test:
1046 To do: Why do the tests run in "(" mode and not "" mode
1049 43c <test:mode-definitions[1](), lang=> ≡ 44h⊳
1050 ________________________________________________________________________
1051 1 | parse_chunk_args("c-like", "1,2,3", a, "");
1052 2 | if (a[1] != "1") e++;
1053 3 | if (a[2] != "2") e++;
1054 4 | if (a[3] != "3") e++;
1055 5 | if (length(a) != 3) e++;
1056 6 | =<\chunkref{pca-test.awk:summary}>
1058 8 | parse_chunk_args("c-like", "joe, red", a, "");
1059 9 | if (a[1] != "joe") e++;
1060 10 | if (a[2] != "red") e++;
1061 11 | if (length(a) != 2) e++;
1062 12 | =<\chunkref{pca-test.awk:summary}>
1064 14 | parse_chunk_args("c-like", "${colour}", a, "");
1065 15 | if (a[1] != "${colour}") e++;
1066 16 | if (length(a) != 1) e++;
1067 17 | =<\chunkref{pca-test.awk:summary}>
1068 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1069 Nested modes are identified by a backslash, a double or single quote, various bracket styles or a /* comment.
1070 For each of these sub-modes we must also identify a mode terminator, and any sub-modes or delimiters that may be entered2. Because we are using the sub-mode characters as the mode identifier it means we can't currently have a mode character dependent on its context; i.e. { can't behave differently when it is inside [. ^2.
1072 The backslash mode has no submodes or delimiters, and is terminated by any character. Note that we are not so much interested in evaluating or interpolating content as we are in delineating content. It is no matter that a double backslash (\\) may represent a single backslash while a backslash-newline may represent white space, but it does matter that the newline in a backslash newline should not be able to terminate a C pre-processor statement; and so the newline will be consumed by the backslash however it is to be interpreted.
1074 43d <common-mode-definitions[3](language) ⇑43a, lang=> +≡ ▵43b 44g⊳
1075 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1076 3 | modes[${language}, "\\", "terminators"]=".";
1077 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1079 Common languages support two kinds of string quoting: double quotes and single quotes.
1080 In a string we have one special mode, which is the backslash. This may escape an embedded quote and prevent us thinking that it should terminate the string.
1082 44a <mode:common-string[1](language, quote), lang=> ≡ 44c▿
1083 ________________________________________________________________________
1084 1 | modes[${language}, ${quote}, "submodes"]="\\\\";
1085 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1086 Otherwise, the string will be terminated by the same character that commenced it.
1088 44b <mode:common-string[2](language, quote) ⇑44b, lang=> +≡ ▵44b 44d▿
1089 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1090 2 | modes[${language}, ${quote}, "terminators"]=${quote};
1091 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1092 In C type languages, certain escape sequences exist in strings. We need to define a mechanism to encode any chunks included in this mode using those escape sequences. These are expressed in two parts, s meaning search, and r meaning replace.
1093 The first substitution is to replace a backslash with a double backslash. We do this first as other substitutions may introduce a backslash which we would not then want to escape again here.
1094 Note: Backslashes need double-escaping in the search pattern but not in the replacement string, hence we are replacing a literal \ with a literal \\.
1096 44c <mode:common-string[3](language, quote) ⇑44b, lang=> +≡ ▵44c 44e▿
1097 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1098 3 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\\\";
1099 4 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\\\";
1100 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1101 If the quote character occurs in the text, it should be preceded by a backslash, otherwise it would terminate the string unexpectedly.
1103 44d <mode:common-string[4](language, quote) ⇑44b, lang=> +≡ ▵44d 44f▿
1104 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1105 5 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]=${quote};
1106 6 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\" ${quote};
1107 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1108 Any newlines in the string must be replaced by \n.
1110 44e <mode:common-string[5](language, quote) ⇑44b, lang=> +≡ ▵44e
1111 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1112 7 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\n";
1113 8 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\n";
1114 |________________________________________________________________________
1117 For the common modes, we define this string handling for double and single quotes.
1119 44f <common-mode-definitions[4](language) ⇑43a, lang=> +≡ ⊲44a 45b⊳
1120 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1121 4 | =<\chunkref{mode:common-string}(${language}, "\textbackslash{}"")>
1122 5 | =<\chunkref{mode:common-string}(${language}, "'")>
1123 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1124 Working strings should pass this test:
1126 44g <test:mode-definitions[2]() ⇑43c, lang=> +≡ ⊲43c 47d⊳
1127 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1128 18 | parse_chunk_args("c-like", "say \"I said, \\\"Hello, how are you\\\".\", for me", a, "");
1129 19 | if (a[1] != "say \"I said, \\\"Hello, how are you\\\".\"") e++;
1130 20 | if (a[2] != "for me") e++;
1131 21 | if (length(a) != 2) e++;
1132 22 | =<\chunkref{pca-test.awk:summary}>
1133 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1134 9.2.3 Parentheses, Braces and Brackets
1135 Whereas quotes are closed by the same character that opened them, parentheses, brackets and braces are closed by a different character.
1137 45a <mode:common-brackets[1](language, open, close), lang=> ≡
1138 ________________________________________________________________________
1139 1 | modes[${language}, ${open}, "submodes" ]="\\\\|\"|{|\\(|\\[|'|/\\*";
1140 2 | modes[${language}, ${open}, "delimiters"]=" *, *";
1141 3 | modes[${language}, ${open}, "terminators"]=${close};
1142 |________________________________________________________________________
1145 Note that the open is NOT a regex but the close token IS.
1146 To do: When we can quote regex we won't have to put the slashes in here
1149 45b <common-mode-definitions[5](language) ⇑43a, lang=> +≡ ⊲44g
1150 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1151 6 | =<\chunkref{mode:common-brackets}(${language}, "{", "}")>
1152 7 | =<\chunkref{mode:common-brackets}(${language}, "[", "\textbackslash{}\textbackslash{}]")>
1153 8 | =<\chunkref{mode:common-brackets}(${language}, "(", "\textbackslash{}\textbackslash{})")>
1154 |________________________________________________________________________
1157 9.2.4 Customizing Standard Modes
1159 45c <mode:add-submode[1](language, mode, submode), lang=> ≡
1160 ________________________________________________________________________
1161 1 | modes[${language}, ${mode}, "submodes"] = modes[${language}, ${mode}, "submodes"] "|" ${submode};
1162 |________________________________________________________________________
1166 45d <mode:add-escapes[1](language, mode, search, replace), lang=> ≡
1167 ________________________________________________________________________
1168 1 | escapes[${language}, ${mode}, ++escapes[${language}, ${mode}], "s"]=${search};
1169 2 | escapes[${language}, ${mode}, escapes[${language}, ${mode}], "r"]=${replace};
1170 |________________________________________________________________________
1175 We can define /* comment */ style comments and //comment style comments to be added to any language:
1177 45e <mode:multi-line-comments[1](language), lang=> ≡
1178 ________________________________________________________________________
1179 1 | =<\chunkref{mode:add-submode}(${language}, "", "/\textbackslash{}\textbackslash{}*")>
1180 2 | modes[${language}, "/*", "terminators"]="\\*/";
1181 |________________________________________________________________________
1185 45f <mode:single-line-slash-comments[1](language), lang=> ≡
1186 ________________________________________________________________________
1187 1 | =<\chunkref{mode:add-submode}(${language}, "", "//")>
1188 2 | modes[${language}, "//", "terminators"]="\n";
1189 3 | =<\chunkref{mode:add-escapes}(${language}, "//", "\textbackslash{}n", "\textbackslash{}n//")>
1190 |________________________________________________________________________
1193 We can also define # comment style comments (as used in awk and shell scripts) in a similar manner.
1194 To do: I'm having to use \# for hash and \textbackslash{} for backslash, and have hacky work-arounds in the parser for now
1197 45g <mode:add-hash-comments[1](language), lang=> ≡
1198 ________________________________________________________________________
1199 1 | =<\chunkref{mode:add-submode}(${language}, "", "\#")>
1200 2 | modes[${language}, "#", "terminators"]="\n";
1201 3 | =<\chunkref{mode:add-escapes}(${language}, "\#", "\textbackslash{}n", "\textbackslash{}n\#")>
1202 |________________________________________________________________________
1205 In C, the # denotes pre-processor directives, which can be multi-line.
1207 46a <mode:add-hash-defines[1](language), lang=> ≡
1208 ________________________________________________________________________
1209 1 | =<\chunkref{mode:add-submode}(${language}, "", "\#")>
1210 2 | modes[${language}, "#", "submodes" ]="\\\\";
1211 3 | modes[${language}, "#", "terminators"]="\n";
1212 4 | =<\chunkref{mode:add-escapes}(${language}, "\#", "\textbackslash{}n", "\textbackslash{}\textbackslash{}\textbackslash{}\textbackslash{}\textbackslash{}n")>
1213 |________________________________________________________________________
1217 46b <mode:quote-dollar-escape[1](language, quote), lang=> ≡
1218 ________________________________________________________________________
1219 1 | escapes[${language}, ${quote}, ++escapes[${language}, ${quote}], "s"]="\\$";
1220 2 | escapes[${language}, ${quote}, escapes[${language}, ${quote}], "r"]="\\$";
1221 |________________________________________________________________________
1224 We can add these definitions to various languages
1226 46c <mode-definitions[1](), lang=> ≡ 47b⊳
1227 ________________________________________________________________________
1228 1 | «common-mode-definitions("c-like") 43a»
1230 3 | «common-mode-definitions("c") 43a»
1231 4 | =<\chunkref{mode:multi-line-comments}("c")>
1232 5 | =<\chunkref{mode:single-line-slash-comments}("c")>
1233 6 | =<\chunkref{mode:add-hash-defines}("c")>
1235 8 | =<\chunkref{common-mode-definitions}("awk")>
1236 9 | =<\chunkref{mode:add-hash-comments}("awk")>
1237 10 | =<\chunkref{mode:add-naked-regex}("awk")>
1238 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1239 The awk definitions should allow a comment block like this:
1241 46d <test:comment-quote[1](), lang=awk> ≡
1242 ________________________________________________________________________
1243 1 | # Comment: =<\chunkref{test:comment-text}>
1244 |________________________________________________________________________
1248 46e <test:comment-text[1](), lang=> ≡
1249 ________________________________________________________________________
1250 1 | Now is the time for
1251 2 | the quick brown fox to bring lemonade
1253 |________________________________________________________________________
1256 to come out like this:
1258 46f <test:comment-quote:result[1](), lang=> ≡
1259 ________________________________________________________________________
1260 1 | # Comment: Now is the time for
1261 2 | #the quick brown fox to bring lemonade
1263 |________________________________________________________________________
1266 The C definition for such a block should have it come out like this:
1268 46g <test:comment-quote:C-result[1](), lang=> ≡
1269 ________________________________________________________________________
1270 1 | # Comment: Now is the time for\
1271 2 | the quick brown fox to bring lemonade\
1273 |________________________________________________________________________
1277 This pattern is incomplete; it is meant to detect naked regular expressions in awk and perl, e.g. /.*$/, but the required capabilities are not present.
1278 Currently it only detects regexes anchored with ^, as used in fangle.
1279 For full regex support, modes need to be named not after their starting character, but some other more fully qualified name.
1281 47a <mode:add-naked-regex[1](language), lang=> ≡
1282 ________________________________________________________________________
1283 1 | =<\chunkref{mode:add-submode}(${language}, "", "/\textbackslash{}\textbackslash{}\^")>
1284 2 | modes[${language}, "/^", "terminators"]="/";
1285 |________________________________________________________________________
1290 47b <mode-definitions[2]() ⇑46d, lang=> +≡ ⊲46d 47c▿
1291 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1292 11 | =<\chunkref{common-mode-definitions}("perl")>
1293 12 | =<\chunkref{mode:multi-line-comments}("perl")>
1294 13 | =<\chunkref{mode:add-hash-comments}("perl")>
1295 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1296 We still need to add s/ and the submode /, and terminate both with /. This is likely to be impossible, as perl regexes can contain perl.
1299 47c <mode-definitions[3]() ⇑46d, lang=> +≡ ▵47b
1300 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1301 14 | =<\chunkref{common-mode-definitions}("sh")>
1302 15 | #<\chunkref{mode:common-string}("sh", "\textbackslash{}"")>
1303 16 | #<\chunkref{mode:common-string}("sh", "'")>
1304 17 | =<\chunkref{mode:add-hash-comments}("sh")>
1305 18 | =<\chunkref{mode:quote-dollar-escape}("sh", "\"")>
1306 |________________________________________________________________________
1310 Also, the parser must return any spare text at the end that has not been processed due to a mode terminator being found.
1312 47d <test:mode-definitions[3]() ⇑43c, lang=> +≡ ⊲44h 47e▿
1313 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1314 23 | rest = parse_chunk_args("c-like", "1, 2, 3) spare", a, "(");
1315 24 | if (a[1] != 1) e++;
1316 25 | if (a[2] != 2) e++;
1317 26 | if (a[3] != 3) e++;
1318 27 | if (length(a) != 3) e++;
1319 28 | if (rest != " spare") e++;
1320 29 | =<\chunkref{pca-test.awk:summary}>
1321 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1322 We must also be able to parse the example given earlier.
1324 47e <test:mode-definitions[4]() ⇑43c, lang=> +≡ ▵47d
1325 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1326 30 | parse_chunk_args("c-like", "things[x, y], get_other_things(a, \"(all)\"), 99", a, "(");
1327 31 | if (a[1] != "things[x, y]") e++;
1328 32 | if (a[2] != "get_other_things(a, \"(all)\")") e++;
1329 33 | if (a[3] != "99") e++;
1330 34 | if (length(a) != 3) e++;
1331 35 | =<\chunkref{pca-test.awk:summary}>
1332 |________________________________________________________________________
1335 9.4 A non-recursive mode tracker
1337 The mode tracker holds its state in a stack based on a numerically indexed hash. This function, when passed an empty hash, will initialize it.
1339 48a <new_mode_tracker()[1](), lang=> ≡
1340 ________________________________________________________________________
1341 1 | function new_mode_tracker(context, language, mode) {
1342 2 | context[""] = 0;
1343 3 | context[0, "language"] = language;
1344 4 | context[0, "mode"] = mode;
1346 |________________________________________________________________________
1349 Because awk functions cannot return an array, we must create the array first and pass it in, so we have a fangle macro to do this:
1351 48b <new-mode-tracker[1](context, language, mode), lang=awk> ≡
1352 ________________________________________________________________________
1353 1 | «awk-delete-array(context) 33d»
1354 2 | new_mode_tracker(${context}, ${language}, ${mode});
1355 |________________________________________________________________________
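The idiom relied on here is that awk passes arrays by reference, so a callee can fill in an array supplied by the caller; a trivial stand-alone example (names illustrative):

  function fill(arr) { arr["greeting"] = "hello"; }
  BEGIN {
    fill(a);
    print a["greeting"];    # prints: hello
  }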
1359 And for tracking modes, we dispatch to a mode-tracker action based on the current language.
1361 48c <mode_tracker[1](), lang=awk> ≡ 48d▿
1362 ________________________________________________________________________
1363 1 | function push_mode_tracker(context, language, mode,
1367 5 | if (! ("" in context)) {
1368 6 | «new-mode-tracker(context, language, mode) 48b»
1370 8 | top = context[""];
1371 9 | if (context[top, "language"] == language && mode=="") mode = context[top, "mode"];
1373 11 | context[top, "language"] = language;
1374 12 | context[top, "mode"] = mode;
1375 13 | context[""] = top;
1378 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1380 48d <mode_tracker[2]() ⇑48c, lang=> +≡ ▵48c 49a▿
1381 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1382 16 | function dump_mode_tracker(context,
1385 19 | for(c=0; c <= context[""]; c++) {
1386 20 | printf(" %2d %s:%s\n", c, context[c, "language"], context[c, "mode"]) > "/dev/stderr";
1387 21 | for(d=1; ( (c, "values", d) in context); d++) {
1388 22 | printf(" %2d %s\n", d, context[c, "values", d]) > "/dev/stderr";
1392 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1394 48e <mode_tracker[3]() ⇑48c, lang=> +≡ ▵48d 53a⊳
1395 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1396 26 | function finalize_mode_tracker(context)
1398 28 | if ( ("" in context) && context[""] != 0) return 0;
1401 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1402 This implies that any chunk must be syntactically whole; for instance, this is fine:
1404 49a <test:whole-chunk[1](), lang=> ≡
1405 ________________________________________________________________________
1407 2 | =<\chunkref{test:say-hello}>
1409 |________________________________________________________________________
1413 49b <test:say-hello[1](), lang=> ≡
1414 ________________________________________________________________________
1416 |________________________________________________________________________
1419 But this is not fine; the chunk <test:hidden-else 49e> is not properly cromulent.
1421 49c <test:partial-chunk[1](), lang=> ≡
1422 ________________________________________________________________________
1424 2 | =<\chunkref{test:hidden-else}>
1426 |________________________________________________________________________
1430 49d <test:hidden-else[1](), lang=> ≡
1431 ________________________________________________________________________
1432 1 | print "I'm fine";
1434 3 | print "I'm not";
1435 |________________________________________________________________________
1438 These tests will check for correct behaviour:
1440 49e <test:cromulence[1](), lang=> ≡
1441 ________________________________________________________________________
1442 1 | echo Cromulence test
1443 2 | passtest $FANGLE -Rtest:whole-chunk $TEX_SRC &>/dev/null || ( echo "Whole chunk failed" && exit 1 )
1444 3 | failtest $FANGLE -Rtest:partial-chunk $TEX_SRC &>/dev/null || ( echo "Partial chunk failed" && exit 1 )
1445 |________________________________________________________________________
1449 We must avoid recursion as a language construct because we intend to employ mode-tracking to track the language mode of emitted code, and the code is emitted from a function which is itself recursive; so instead we implement pseudo-recursion using our own stack based on a hash.
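The hash-as-stack idiom itself is simple. Here is a minimal stand-alone sketch of it (the names are illustrative; the real tracker stores several keys per stack level and starts its indices at 0, but the principle is the same):

  function stack_push(stack, value) { stack[++stack[""]] = value; }
  function stack_pop(stack,    top, v) {
    top = stack[""];
    v = stack[top];
    delete stack[top];
    stack[""] = top - 1;
    return v;
  }
  BEGIN {
    stack_push(s, "outer"); stack_push(s, "inner");
    print stack_pop(s);    # prints: inner
    print stack_pop(s);    # prints: outer
  }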
1451 49f <mode_tracker()[1](), lang=awk> ≡ 50a▿
1452 ________________________________________________________________________
1453 1 | function mode_tracker(context, text, values,
1454 2 | # optional parameters
1456 4 | mode, submodes, language,
1457 5 | cindex, c, a, part, item, name, result, new_values, new_mode,
1458 6 | delimiters, terminators)
1460 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1461 We could be re-commencing with a valid context, so we need to set up the state according to the last context.
1463 49g <mode_tracker()[2]() ⇑49g, lang=> +≡ ▵49g 50d⊳
1464 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1465 8 | cindex = context[""] + 0;
1466 9 | mode = context[cindex, "mode"];
1467 10 | language = context[cindex, "language" ];
1468 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1469 First we construct a single large regex combining the possible sub-modes for the current mode along with the terminators for the current mode.
1471 50a <parse_chunk_args-reset-modes[1](), lang=> ≡ 50c▿
1472 ________________________________________________________________________
1473 1 | submodes=modes[language, mode, "submodes"];
1475 3 | if ((language, mode, "delimiters") in modes) {
1476 4 | delimiters = modes[language, mode, "delimiters"];
1477 5 | if (length(submodes)>0) submodes = submodes "|";
1478 6 | submodes=submodes delimiters;
1479 7 | } else delimiters="";
1480 8 | if ((language, mode, "terminators") in modes) {
1481 9 | terminators = modes[language, mode, "terminators"];
1482 10 | if (length(submodes)>0) submodes = submodes "|";
1483 11 | submodes=submodes terminators;
1484 12 | } else terminators="";
1485 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1486 If we don't find anything to match on --- probably because the language is not supported --- then we return the entire text without matching anything.
1488 50b <parse_chunk_args-reset-modes[2]() ⇑50b, lang=> +≡ ▵50b
1489 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1490 13 | if (! length(submodes)) return text;
1491 |________________________________________________________________________
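As a concrete illustration of what this builds, here is a minimal stand-alone sketch (gawk) of the combined regex for the c-like null mode of section 9.2, which has submodes and delimiters but no terminator:

  BEGIN {
    submodes   = "\\\\|\"|'|{|\\(|\\[";    # from common-mode-definitions
    delimiters = " *, *";
    regex = submodes "|" delimiters;
    print regex;                           # prints: \\|"|'|{|\(|\[| *, *
    if (match("x, y", "(" regex ")", a))
      print "first match: [" a[1] "]";     # prints: first match: [, ]
  }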
1495 50c <mode_tracker()[3]() ⇑49g, lang=> +≡ ⊲50a 50e▿
1496 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1497 11 | =<\chunkref{parse_chunk_args-reset-modes}>
1498 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1499 We then iterate the text (until there is none left) looking for sub-modes or terminators in the regex.
1501 50d <mode_tracker()[4]() ⇑49g, lang=> +≡ ▵50d 50f▿
1502 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1503 12 | while((cindex >= 0) && length(text)) {
1504 13 | if (match(text, "(" submodes ")", a)) {
1505 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1506 A bug that creeps in regularly during development is bad regexes of zero length which result in an infinite loop (as no text is consumed), so I catch that right away with this test.
1508 50e <mode_tracker()[5]() ⇑49g, lang=> +≡ ▵50e 51a▿
1509 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1510 14 | if (RLENGTH<1) {
1511 15 | error(sprintf("Internal error, matched zero length submode, should be impossible - likely regex computation error\n" \
1512 16 | "Language=%s\nmode=%s\nmatch=%s\n", language, mode, submodes));
1514 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1515 part is defined as the text up to the sub-mode or terminator, and this is appended to item --- which is the current text being gathered. If a mode has a delimiter, then item is reset each time a delimiter is found.
1516 ("hello_item, there_item")<wide-overbrace>^item, (he said.)<wide-overbrace>^item
1518 50f <mode_tracker()[6]() ⇑49g, lang=> +≡ ▵50f 51b⊳
1519 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1520 18 | part = substr(text, 1, RSTART -1);
1521 19 | item = item part;
1522 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1523 We must now determine what was matched. If it was a terminator, then we must restore the previous mode.
1525 51a <mode_tracker()[7]() ⇑49g, lang=> +≡ ⊲51a 51c▿
1526 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1527 20 | if (match(a[1], "^" terminators "$")) {
1528 21 | #printf("%2d EXIT MODE [%s] by [%s] [%s]\n", cindex, mode, a[1], text) > "/dev/stderr"
1529 22 | context[cindex, "values", ++context[cindex, "values"]] = item;
1530 23 | delete context[cindex];
1531 24 | context[""] = --cindex;
1532 25 | if (cindex>=0) {
1533 26 | mode = context[cindex, "mode"];
1534 27 | language = context[cindex, "language"];
1535 28 | =<\chunkref{parse_chunk_args-reset-modes}>
1537 30 | item = item a[1];
1538 31 | text = substr(text, 1 + length(part) + length(a[1]));
1540 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1541 If a delimiter was matched, then we must store the current item in the parsed values array, and reset the item.
1543 51b <mode_tracker()[8]() ⇑49g, lang=> +≡ ▵51b 51d▿
1544 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1545 33 | else if (match(a[1], "^" delimiters "$")) {
1546 34 | if (cindex==0) {
1547 35 | context[cindex, "values", ++context[cindex, "values"]] = item;
1550 38 | item = item a[1];
1552 40 | text = substr(text, 1 + length(part) + length(a[1]));
1554 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1555 Otherwise, if a new submode is detected (all submodes have terminators), we must create a nested parse context until we find the terminator for this mode.
1557 51c <mode_tracker()[9]() ⇑49g, lang=> +≡ ▵51c 52a▿
1558 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1559 42 | else if ((language, a[1], "terminators") in modes) {
1560 43 | #check if new_mode is defined
1561 44 | item = item a[1];
1562 45 | #printf("%2d ENTER MODE [%s] in [%s]\n", cindex, a[1], text) > "/dev/stderr"
1563 46 | text = substr(text, 1 + length(part) + length(a[1]));
1564 47 | context[""] = ++cindex;
1565 48 | context[cindex, "mode"] = a[1];
1566 49 | context[cindex, "language"] = language;
1568 51 | =<\chunkref{parse_chunk_args-reset-modes}>
1570 53 | error(sprintf("Submode '%s' set unknown mode in text: %s\nLanguage %s Mode %s\n", a[1], text, language, mode));
1571 54 | text = substr(text, 1 + length(part) + length(a[1]));
1574 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1575 In the final case, we parsed to the end of the string. If the string was entire, then we should have no nested mode context, but if the string was just a fragment we may have a mode context which must be preserved for the next fragment. To do: Consideration ought to be given to sub-mode strings that are split over two fragments.
1577 51d <mode_tracker()[10]() ⇑49g, lang=> +≡ ▵51d
1578 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1580 58 | context[cindex, "values", ++context[cindex, "values"]] = item text;
1586 64 | context["item"] = item;
1588 66 | if (length(item)) context[cindex, "values", ++context[cindex, "values"]] = item;
1591 |________________________________________________________________________
1594 9.4.3.1 One happy chunk
1595 All the mode tracker chunks are referred to here:
1597 52a <mode-tracker[1](), lang=> ≡
1598 ________________________________________________________________________
1599 1 | «new_mode_tracker() 48a»
1600 2 | «mode_tracker() 49g»
1601 |________________________________________________________________________
1605 We can test this function like this:
1607 52b <pca-test.awk[1](), lang=awk> ≡
1608 ________________________________________________________________________
1609 1 | =<\chunkref{error()}>
1610 2 | =<\chunkref{mode-tracker}>
1611 3 | =<\chunkref{parse_chunk_args()}>
1614 6 | =<\chunkref{mode-definitions}>
1616 8 | =<\chunkref{test:mode-definitions}>
1618 |________________________________________________________________________
1622 52c <pca-test.awk:summary[1](), lang=awk> ≡
1623 ________________________________________________________________________
1625 2 | printf "Failed " e
1627 4 | print "a[" b "] => " a[b];
1634 |________________________________________________________________________
1637 which should give this output:
1639 52d <pca-test.awk-results[1](), lang=> ≡
1640 ________________________________________________________________________
1641 1 | a[foo.quux.quirk] =>
1642 2 | a[foo.quux.a] => fleeg
1643 3 | a[foo.bar] => baz
1645 5 | a[name] => freddie
1646 |________________________________________________________________________
1649 9.5 Escaping and Quoting
1650 For the time being, and to get around TeXmacs' inability to export a TAB character, the right arrow ↦ (whose UTF-8 sequence is 0xE2 0x86 0xA6) is used in place of a TAB character.
1653 Another special character is used: the left-arrow ↤, with UTF-8 sequence 0xE2 0x86 0xA4, strips any preceding white space as a way of un-tabbing and removing indent that has been applied --- this is important for bash here documents, and the like. It's a filthy hack.
1654 To do: remove the hack
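A minimal stand-alone demonstration of the effect (the marker bytes are spelled out with gawk's \x escapes, just as in the chunk below; the here-document terminator EOF is only an example):

  BEGIN {
    line = "        \xE2\x86\xA4EOF";
    gsub("[[:space:]]*\xE2\x86\xA4", "", line);
    print line;    # prints: EOF --- the indentation before the marker is consumed
  }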
1657 53a <mode_tracker[4]() ⇑48c, lang=> +≡ ⊲49a 53b▿
1658 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1660 31 | function untab(text) {
1661 32 | gsub("[[:space:]]*\xE2\x86\xA4","", text);
1664 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1665 Each nested mode can optionally define a set of transforms to be applied to any text that is included from another language.
1666 This code performs those transforms:
1668 53b <mode_tracker[5]() ⇑48c, lang=awk> +≡ ▵53a 53c▿
1669 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1670 35 | function transform_escape(s, r, text,
1676 41 | for(c=1; c <= max && (c in s); c++) {
1677 42 | gsub(s[c], r[c], text);
1681 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1682 This function must append, from index c onwards, the escape transforms from the supplied context, and return c plus the number of new transforms.
1684 53c <mode_tracker[6]() ⇑48c, lang=awk> +≡ ▵53b
1685 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1686 46 | function mode_escaper(context, s, r, src,
1689 49 | for(c = context[""]; c >= 0; c--) {
1690 50 | if ( (context[c, "language"], context[c, "mode"]) in escapes) {
1691 51 | cpl = escapes[context[c, "language"], context[c, "mode"]];
1692 52 | for (cp = 1; cp <= cpl; cp ++) {
1694 54 | s[src] = escapes[context[c, "language"], context[c, "mode"], cp, "s"];
1695 55 | r[src] = escapes[context[c, "language"], context[c, "mode"], cp, "r"];
1701 61 | function dump_escaper(c, s, r, cc) {
1702 62 | for(cc=1; cc<=c; cc++) {
1703 63 | printf("%2d s[%s] r[%s]\n", cc, s[cc], r[cc]) > "/dev/stderr"
1706 |________________________________________________________________________
1710 53d <test:escapes[1](), lang=sh> ≡
1711 ________________________________________________________________________
1712 1 | echo escapes test
1713 2 | passtest $FANGLE -Rtest:comment-quote $TEX_SRC &>/dev/null || ( echo "Comment-quote failed" && exit 1 )
1714 |________________________________________________________________________
1717 Chapter 10 Recognizing Chunks
1718 Fangle recognizes noweb chunks, but as we also want better LaTeX integration we will recognize any of these (illustrated below):
1719 • notangle chunks matching the pattern ^<<.*?>>=
1720 • chunks beginning with \begin{lstlisting}, possibly with \Chunk{...} on the previous line
1721 • an older form I have used, beginning with \begin{Chunk}[options] --- also more suitable for plain LaTeX users1. Is there such a thing as plain LaTeX? ^1.
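Purely for illustration (the chunk name some-chunk and its options are hypothetical), the three forms look roughly like this in a source document:

  <<some-chunk>>=
  ...

  \Chunk{some-chunk, language=awk}
  \begin{lstlisting}
  ...
  \end{lstlisting}

  \begin{Chunk}[name=some-chunk]
  ...
  \end{Chunk}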
1723 The variable chunking is used to signify that we are processing a code chunk and not document text. In such a state, input lines will be assigned to the current chunk; otherwise they are ignored.
1724 10.1.1 TeXmacs hackery
1725 We don't handle TeXmacs files natively but instead emit unicode character sequences to mark up the text-export file which we work on.
1726 These hacks detect such sequences and retro-fit in the old TeX parsing.
1728 55a <recognize-chunk[1](), lang=> ≡ 56a⊳
1729 ________________________________________________________________________
1732 2 | # gsub("\n*$","");
1733 3 | # gsub("\n", " ");
1736 6 | /\xE2\x86\xA6/ {
1737 7 | gsub("\\xE2\\x86\\xA6", "\x09");
1740 10 | /\xE2\x80\x98/ {
1741 11 | gsub("\\xE2\\x80\\x98", "`");
1744 14 | /\xE2\x89\xA1/ {
1745 15 | if (match($0, "^ *([^[ ]* |)<([^[ ]*)\\[[0-9]*\\][(](.*)[)].*, lang=([^ ]*)", line)) {
1746 16 | next_chunk_name=line[2];
1747 17 | gsub(",",";",line[3]);
1748 18 | params="params=" line[3];
1749 19 | if ((line[4])) {
1750 20 | params = params ",language=" line[4]
1752 22 | get_chunk_args(params, next_chunk_args);
1753 23 | new_chunk(next_chunk_name, next_chunk_args);
1754 24 | texmacs_chunking = 1;
1756 26 | #print "Unexpected
1763 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1765 Our current scheme is to recognize the new lstlisting chunks, but these may be preceded by a \Chunk command which in L Y X is a more convenient way to pass the chunk name to the \begin{lstlisting} command, and a more visible way to specify other lstset settings.
1766 The arguments to the \Chunk command are a name, and then a comma-separated list of key-value pairs after the manner of \lstset. (In fact, within the LaTeX \Chunk macro (section 15.2.1) the text name= is prefixed to the argument, which is then literally passed to \lstset).
1768 56a <recognize-chunk[2]() ⇑55a, lang=awk> +≡ ⊲55a 56b▿
1769 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1771 34 | if (match($0, "^\\\\Chunk{ *([^ ,}]*),?(.*)}", line)) {
1772 35 | next_chunk_name = line[1];
1773 36 | get_chunk_args(line[2], next_chunk_args);
1777 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1778 We also make a basic attempt to parse the name out of the \begin{lstlisting}[name=chunk-name] text, otherwise we fall back to the name found in the previous chunk command. This attempt is very basic and doesn't support commas or spaces or square brackets as part of the chunkname. We also recognize \begin{Chunk} which is convenient for some users2. but not yet supported in the LaTeX macros ^2.
1780 56b <recognize-chunk[3]() ⇑55a, lang=> +≡ ▵56a 56c▿
1781 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1782 40 | /^\\begin{lstlisting}|^\\begin{Chunk}/ {
1783 41 | if (match($0, "}.*[[,] *name= *{? *([^], }]*)", line)) {
1784 42 | new_chunk(line[1]);
1786 44 | new_chunk(next_chunk_name, next_chunk_args);
1791 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1795 56c <recognize-chunk[4]() ⇑55a, lang=> +≡ ▵56b 57a⊳
1796 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1798 50 | /^ *\|____________*/ && texmacs_chunking {
1799 51 | active_chunk="";
1800 52 | texmacs_chunking=0;
1803 55 | /^ *\|\/\\/ && texmacs_chunking {
1804 56 | texmacs_chunking=0;
1806 58 | active_chunk="";
1808 60 | texmacs_chunk=0;
1809 61 | /^ *[1-9][0-9]* *\| / {
1810 62 | if (texmacs_chunking) {
1812 64 | texmacs_chunk=1;
1813 65 | gsub("^ *[1-9][0-9]* *\\| ", "")
1816 68 | /^ *\.\/\\/ && texmacs_chunking {
1819 71 | /^ *__*$/ && texmacs_chunking {
1823 75 | texmacs_chunking {
1824 76 | if (! texmacs_chunk) {
1825 77 | # must be a texmacs continued line
1827 79 | texmacs_chunk=1;
1830 82 | ! texmacs_chunk {
1831 83 | # texmacs_chunking=0;
1836 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1838 We recognize notangle style chunks too:
1840 57a <recognize-chunk[5]() ⇑55a, lang=awk> +≡ ⊲56c 58a⊳
1841 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1842 88 | /^[<]<.*[>]>=/ {
1843 89 | if (match($0, "^[<]<(.*)[>]>= *$", line)) {
1845 91 | notangle_mode=1;
1846 92 | new_chunk(line[1]);
1850 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1852 Likewise, we need to recognize when a chunk ends.
1854 The e in [e]nd{lstlisting} is surrounded by square brackets so that when this document is processed, this chunk doesn't terminate early when the lstlistings package recognizes its own end-string!3. This doesn't make sense as the regex is anchored with ^, which this line does not begin with! ^3
1856 58a <recognize-chunk[6]() ⇑55a, lang=> +≡ ⊲57a 58b▿
1857 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1858 96 | /^\\[e]nd{lstlisting}|^\\[e]nd{Chunk}/ {
1860 98 | active_chunk="";
1863 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1866 58b <recognize-chunk[7]() ⇑55a, lang=> +≡ ▵58a 58c▿
1867 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1870 103 | active_chunk="";
1872 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1873 All other recognizers only take effect if we are chunking; there's no point in looking at lines if they aren't part of a chunk, so we just ignore them as efficiently as we can.
1875 58c <recognize-chunk[8]() ⇑55a, lang=> +≡ ▵58b 58d▿
1876 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1877 105 | ! chunking { next; }
1878 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1880 Chunk contents are any lines read while chunking is true. Some chunk contents are special in that they refer to other chunks, and will be replaced by the contents of these chunks when the file is generated.
1881 We add the output record separator ORS to the line now, because we will set ORS to the empty string when we generate the output4. So that we can print partial lines using print instead of printf. ^4
1882 To do: This doesn't make sense
1885 58d <recognize-chunk[9]() ⇑55a, lang=> +≡ ▵58c
1886 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1887 106 | length(active_chunk) {
1888 107 | =<\chunkref{process-chunk-tabs}>
1889 108 | =<\chunkref{process-chunk}>
1891 |________________________________________________________________________
1894 If a chunk just consisted of plain text, we could handle the chunk like this:
1896 58e <process-chunk-simple[1](), lang=> ≡
1897 ________________________________________________________________________
1898 1 | chunk_line(active_chunk, $0 ORS);
1899 |________________________________________________________________________
1902 but in fact a chunk can include references to other chunks. Chunk includes are traditionally written as <<chunk-name>> but we support other variations, some of which are more suitable for particular editing systems.
1903 However, we also process tabs at this point: a tab in the input can be replaced by a number of spaces defined by the tabs variable, set by the -T option. Of course this is poor tab behaviour; we should probably have the option to use proper counted tab-stops and process this on output.
1905 59a <process-chunk-tabs[1](), lang=> ≡
1906 ________________________________________________________________________
1907 1 | if (length(tabs)) {
1908 2 | gsub("\t", tabs);
1910 |________________________________________________________________________
1914 If \lstset{escapeinside={=<}{>}} is set, then we can use =<\chunkref{chunk-name}> in listings. The sequence =< was chosen because:
1915 1. it is a better mnemonic than <<chunk-name>> in that the = sign signifies equivalence or substitutability.
1916 2. and because =< is not valid in C or any language I can think of.
1917 3. and also because lstlistings doesn't like >> as an end delimiter for the texcl escape, so we must make do with a single > which is better complemented by =< than by <<.
1918 Unfortunately the =<...> that we use re-enters a LaTeX parsing mode in which some characters are special, e.g. # \ and so these cause trouble if used in arguments to \chunkref. At some point I must fix the LaTeX command \chunkref so that it can accept these literally, but until then, when writing chunkref arguments that need these characters, I must use the forms \textbackslash{} and \#; so I also define a hacky chunk delatex, to be used further on, whose purpose is to remove these from any arguments parsed by fangle.
1920 59b <delatex[1](text), lang=> ≡
1921 ________________________________________________________________________
1923 2 | gsub("\\\\#", "#", ${text});
1924 3 | gsub("\\\\textbackslash{}", "\\", ${text});
1925 4 | gsub("\\\\\\^", "^", ${text});
1926 |________________________________________________________________________
1929 As each chunk line may contain more than one chunk include, we will split out chunk includes in an iterative fashion5. Contrary to our use of split when substituting parameters in chapter ? ^5.
1930 First, as long as the chunk contains a \chunkref command we take as much as we can up to the first \chunkref command.
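Stripped of fangle's details, that loop is the usual awk match-and-consume idiom; a tiny stand-alone sketch with a hypothetical input line:

  BEGIN {
    text = "a =<\\chunkref{x}> b =<\\chunkref{y}> c";
    while (match(text, /=<\\chunkref[{][^}]*}>/)) {
      printf("literal [%s] include [%s]\n",
             substr(text, 1, RSTART - 1), substr(text, RSTART, RLENGTH));
      text = substr(text, RSTART + RLENGTH);
    }
    printf("trailing literal [%s]\n", text);
  }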
1932 59c <process-chunk[1](), lang=> ≡ 60a⊳
1933 ________________________________________________________________________
1936 3 | while(match(chunk,"(\xC2\xAB)([^\xC2]*) [^\xC2]*\xC2\xBB", line) ||
1938 5 | "([=]<\\\\chunkref{([^}>]*)}(\\(.*\\)|)>|<<([a-zA-Z_][-a-zA-Z0-9_]*)>>)",
1941 8 | chunklet = substr(chunk, 1, RSTART - 1);
1942 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1943 We keep track of the indent count, by counting the number of literal characters found. We can then preserve this indent on each output line when multi-line chunks are expanded.
1944 We then process this first part literal text, and set the chunk which is still to be processed to be the text after the \chunkref command, which we will process next as we continue around the loop.
1946 60a <process-chunk[2]() ⇑59c, lang=> +≡ ⊲59c 60b▿
1947 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1948 9 | indent += length(chunklet);
1949 10 | chunk_line(active_chunk, chunklet);
1950 11 | chunk = substr(chunk, RSTART + RLENGTH);
1951 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1952 We then consider the type of chunk command we have found, whether it is the fangle style command beginning with =< or the older notangle style beginning with <<.
1953 Fangle chunks may have arguments contained within parentheses. These will be matched in line[3] and are considered at this stage of processing to be part of the name of the chunk to be included.
1955 60b <process-chunk[3]() ⇑59c, lang=> +≡ ▵60a 60c▿
1956 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1957 12 | if (substr(line[1], 1, 1) == "=") {
1958 13 | # chunk name up to }
1959 14 | =<\chunkref{delatex}(line[3])>
1960 15 | chunk_include(active_chunk, line[2] line[3], indent);
1961 16 | } else if (substr(line[1], 1, 1) == "<") {
1962 17 | chunk_include(active_chunk, line[4], indent);
1963 18 | } else if (line[1] == "\xC2\xAB") {
1964 19 | chunk_include(active_chunk, line[2], indent);
1966 21 | error("Unknown chunk fragment: " line[1]);
1968 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1969 The loop will continue until there are no more chunkref statements in the text, at which point we process the final part of the chunk.
1971 60c <process-chunk[4]() ⇑59c, lang=> +≡ ▵60b 60d▿
1972 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1974 24 | chunk_line(active_chunk, chunk);
1975 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1976 We add the newline character as a chunklet on its own, to make it easier to detect new lines and thus manage indentation when processing the output.
1978 60d <process-chunk[5]() ⇑59c, lang=> +≡ ▵60c
1979 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
1980 25 | chunk_line(active_chunk, "\n");
1981 |________________________________________________________________________
1984 We will also permit a chunk-part number to follow in square brackets, so that =<\chunkref{chunk-name[1]}> will refer to the first part only. This can make it easy to include a C function prototype in a header file, if the first part of the chunk is just the function prototype without the trailing semi-colon. The header file would include the prototype with the trailing semi-colon, like this:
1985 =<\chunkref{chunk-name[1]}>
1986 This is handled in section 12.1.1
1987 We should perhaps introduce a notion of language specific chunk options; so that perhaps we could specify:
1988 =<\chunkref{chunk-name[function-declaration]}>
1989 which applies a transform function-declaration to the chunk --- which in this case would extract a function prototype from a function.
1992 Chapter 11 Processing Options
1993 At the start, first we set the default options.
1995 61a <default-options[1](), lang=> ≡
1996 ________________________________________________________________________
1999 3 | notangle_mode=0;
2002 |________________________________________________________________________
2005 Then we use getopt in the standard way, and null out ARGV afterwards in the normal AWK fashion.
2007 61b <read-options[1](), lang=> ≡
2008 ________________________________________________________________________
2009 1 | Optind = 1 # skip ARGV[0]
2010 2 | while(getopt(ARGC, ARGV, "R:LdT:hr")!=-1) {
2011 3 | =<\chunkref{handle-options}>
2013 5 | for (i=1; i<Optind; i++) { ARGV[i]=""; }
2014 |________________________________________________________________________
2017 This is how we handle our options:
2019 61c <handle-options[1](), lang=> ≡
2020 ________________________________________________________________________
2021 1 | if (Optopt == "R") root = Optarg;
2022 2 | else if (Optopt == "r") root="";
2023 3 | else if (Optopt == "L") linenos = 1;
2024 4 | else if (Optopt == "d") debug = 1;
2025 5 | else if (Optopt == "T") tabs = indent_string(Optarg+0);
2026 6 | else if (Optopt == "h") help();
2027 7 | else if (Optopt == "?") help();
2028 |________________________________________________________________________
2031 We do all of this at the beginning of the program
2033 61d <begin[1](), lang=> ≡
2034 ________________________________________________________________________
2036 2 | =<\chunkref{constants}>
2037 3 | =<\chunkref{mode-definitions}>
2038 4 | =<\chunkref{default-options}>
2040 6 | =<\chunkref{read-options}>
2042 |________________________________________________________________________
2045 And have a simple help function
2047 61e <help()[1](), lang=> ≡
2048 ________________________________________________________________________
2049 1 | function help() {
2051 3 | print " fangle [-L] -R<rootname> [source.tex ...]"
2052 4 | print " fangle -r [source.tex ...]"
2053 5 | print " If the filename, source.tex is not specified then stdin is used"
2055 7 | print "-L causes the C statement: #line <lineno> \"filename\" to be issued"
2056 8 | print "-R causes the named root to be written to stdout"
2057 9 | print "-r lists all roots in the file (even those used elsewhere)"
2060 |________________________________________________________________________
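For example (file and chunk names hypothetical), typical invocations corresponding to these options might be:

  fangle -Rprogram.c book.tex > program.c       # extract one root chunk
  fangle -L -Rprogram.c book.tex > program.c    # the same, with #line directives
  fangle -r book.tex                            # list all root chunks
  fangle -T 8 -Rprogram.c book.tex              # expand tabs to 8 spaces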
2063 Chapter 12 Generating the Output
2064 We generate output by calling output_chunk, or listing the chunk names.
2066 63a <generate-output[1](), lang=> ≡
2067 ________________________________________________________________________
2068 1 | if (length(root)) output_chunk(root);
2069 2 | else output_chunk_names();
2070 |________________________________________________________________________
2073 We also have some other output debugging:
2075 63b <debug-output[1](), lang=> ≡
2076 ________________________________________________________________________
2078 2 | print "------ chunk names "
2079 3 | output_chunk_names();
2080 4 | print "====== chunks"
2081 5 | output_chunks();
2082 6 | print "++++++ debug"
2083 7 | for (a in chunks) {
2084 8 | print a "=" chunks[a];
2087 |________________________________________________________________________
2090 We do both of these at the end. We also set ORS="" because each chunklet is not necessarily a complete line, and we already added ORS to each input line in section 10.3.
2092 63c <end[1](), lang=> ≡
2093 ________________________________________________________________________
2095 2 | =<\chunkref{debug-output}>
2097 4 | =<\chunkref{generate-output}>
2099 |________________________________________________________________________
2102 We write chunk names like this. If we seem to be running in notangle compatibility mode, then we enclose the name in << and >>, the same way notangle does:
2104 63d <output_chunk_names()[1](), lang=> ≡
2105 ________________________________________________________________________
2106 1 | function output_chunk_names( c, prefix, suffix)
2108 3 | if (notangle_mode) {
2112 7 | for (c in chunk_names) {
2113 8 | print prefix c suffix "\n";
2116 |________________________________________________________________________
2119 This function would write out all chunks
2121 63e <output_chunks()[1](), lang=> ≡
2122 ________________________________________________________________________
2123 1 | function output_chunks( a)
2125 3 | for (a in chunk_names) {
2126 4 | output_chunk(a);
2130 8 | function output_chunk(chunk) {
2132 10 | lineno_needed = linenos;
2134 12 | write_chunk(chunk);
2137 |________________________________________________________________________
2140 12.1 Assembling the Chunks
2141 chunk_path holds a string consisting of the names of all the chunks that resulted in this chunk being output. It should probably also contain the source line numbers at which each inclusion occurred.
2142 We first initialize the mode tracker for this chunk.
2144 64a <write_chunk()[1](), lang=> ≡ 64b▿
2145 ________________________________________________________________________
2146 1 | function write_chunk(chunk_name) {
2147 2 | =<\chunkref{awk-delete-array}(context)>
2148 3 | return write_chunk_r(chunk_name, context);
2151 6 | function write_chunk_r(chunk_name, context, indent, tail,
2153 8 | chunk_path, chunk_args,
2154 9 | s, r, src, new_src,
2156 11 | chunk_params, part, max_part, part_line, frag, max_frag, text,
2157 12 | chunklet, only_part, call_chunk_args, new_context)
2159 14 | if (debug) debug_log("write_chunk_r(", chunk_name, ")");
2160 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2162 As mentioned in section ?, a chunk name may contain a part specifier in square brackets, limiting the parts that should be emitted.
2164 64b <write_chunk()[2]() ⇑64a, lang=> +≡ ▵64a 64c▿
2165 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2166 15 | if (match(chunk_name, "^(.*)\\[([0-9]*)\\]$", chunk_name_parts)) {
2167 16 | chunk_name = chunk_name_parts[1];
2168 17 | only_part = chunk_name_parts[2];
2170 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2171 We then create a mode tracker
2173 64c <write_chunk()[3]() ⇑64a, lang=> +≡ ▵64b 65a⊳
2174 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2175 19 | =<\chunkref{new-mode-tracker}(context, chunks[chunk_name, "language"], "")>
2176 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2177 We extract into chunk_params the names of the parameters that this chunk accepts, whose values were (optionally) passed in chunk_args.
2179 65a <write_chunk()[4]() ⇑64a, lang=> +≡ ⊲64c 65b▿
2180 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2181 20 | split(chunks[chunk_name, "params"], chunk_params, " *; *");
2182 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2183 To assemble a chunk, we write out each part.
2185 65b <write_chunk()[5]() ⇑64a, lang=> +≡ ▵65a
2186 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2187 21 | if (! (chunk_name in chunk_names)) {
2188 22 | error(sprintf(_"The root module <<%s>> was not defined.\nUsed by: %s",\
2189 23 | chunk_name, chunk_path));
2192 26 | max_part = chunks[chunk_name, "part"];
2193 27 | for(part = 1; part <= max_part; part++) {
2194 28 | if (! only_part || part == only_part) {
2195 29 | =<\chunkref{write-part}>
2198 32 | if (! finalize_mode_tracker(context)) {
2199 33 | dump_mode_tracker(context);
2200 34 | error(sprintf(_"Module %s did not close context properly.\nUsed by: %s\n", chunk_name, chunk_path));
2203 |________________________________________________________________________
2206 A part can either be a chunklet of lines, or an include of another chunk.
2207 Chunks may also have parameters, specified in LaTeX style with braces after the chunk name --- looking like this in the document: chunkname{param1, param2}. Arguments are passed in square brackets: \chunkref{chunkname}[arg1, arg2].
2208 Before we process each part, we check that the source position hasn't changed unexpectedly, so that we can know if we need to output a new file-line directive.
2210 65c <write-part[1](), lang=> ≡
2211 ________________________________________________________________________
2212 1 | =<\chunkref{check-source-jump}>
2214 3 | chunklet = chunks[chunk_name, "part", part];
2215 4 | if (chunks[chunk_name, "part", part, "type"] == part_type_chunk) {
2216 5 | =<\chunkref{write-included-chunk}>
2217 6 | } else if (chunklet SUBSEP "line" in chunks) {
2218 7 | =<\chunkref{write-chunklets}>
2220 9 | # empty last chunklet
2222 |________________________________________________________________________
2225 To write an included chunk, we must detect any optional chunk arguments in parentheses. Then we recurse, calling write_chunk().
2227 65d <write-included-chunk[1](), lang=> ≡
2228 ________________________________________________________________________
2229 1 | if (match(chunklet, "^([^\\[\\(]*)\\((.*)\\)$", chunklet_parts)) {
2230 2 | chunklet = chunklet_parts[1];
2231 3 | parse_chunk_args("c-like", chunklet_parts[2], call_chunk_args, "(");
2232 4 | for (c in call_chunk_args) {
2233 5 | call_chunk_args[c] = expand_chunk_args(call_chunk_args[c], chunk_params, chunk_args);
2236 8 | split("", call_chunk_args);
2238 10 | # update the transforms arrays
2239 11 | new_src = mode_escaper(context, s, r, src);
2240 12 | =<\chunkref{awk-delete-array}(new_context)>
2241 13 | write_chunk_r(chunklet, new_context,
2242 14 | chunks[chunk_name, "part", part, "indent"] indent,
2243 15 | chunks[chunk_name, "part", part, "tail"],
2244 16 | chunk_path "\n " chunk_name,
2245 17 | call_chunk_args,
2246 18 | s, r, new_src);
2247 |________________________________________________________________________
2250 Before we output a chunklet of lines, we first emit the file and line number if we have one, and if it is safe to do so.
2251 Chunklets are generally broken up by includes, so the start of a chunklet is a good place to do this. Then we output each line of the chunklet.
2252 When it is not safe, such as in the middle of a multi-line macro definition, lineno_suppressed is set to true, and in such a case we note that we want to emit the line statement when it is next safe.
2254 66a <write-chunklets[1](), lang=> ≡ 66b▿
2255 ________________________________________________________________________
2256 1 | max_frag = chunks[chunklet, "line"];
2257 2 | for(frag = 1; frag <= max_frag; frag++) {
2258 3 | =<\chunkref{write-file-line}>
2259 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2260 We then extract the chunklet text and expand any arguments.
2262 66b <write-chunklets[2]() ⇑66a, lang=> +≡ ▵66a 66c▿
2263 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2265 5 | text = chunks[chunklet, frag];
2267 7 | /* check params */
2268 8 | text = expand_chunk_args(text, chunk_params, chunk_args);
2269 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2270 If the text is a single newline (which we keep separate - see 5) then we increment the line number. In the case where this is the last line of a chunk and it is not a top-level chunk we replace the newline with an empty string --- because the chunk that included this chunk will have the newline at the end of the line that included this chunk.
2271 We also note by newline = 1 that we have started a new line, so that indentation can be managed with the following piece of text.
2273 66c <write-chunklets[3]() ⇑66a, lang=> +≡ ▵66b 66d▿
2274 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2276 10 | if (text == "\n") {
2278 12 | if (part == max_part && frag == max_frag && length(chunk_path)) {
2284 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2285 If this text does not represent a newline, but we see that we are the first piece of text on a newline, then we prefix our text with the current indent.
2286 Note 1. newline is a global output-state variable, but the indent is not.
2288 66d <write-chunklets[4]() ⇑66a, lang=> +≡ ▵66c 67a⊳
2289 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2290 18 | } else if (length(text) || length(tail)) {
2291 19 | if (newline) text = indent text;
2295 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2296 Tail will no longer be relevant once mode-detection is in place.
2298 67a <write-chunklets[5]() ⇑66a, lang=> +≡ ⊲66d 67b▿
2299 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2300 23 | text = text tail;
2301 24 | mode_tracker(context, text);
2302 25 | print untab(transform_escape(s, r, text, src));
2303 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2304 If a line ends in a backslash --- suggesting continuation --- then we suppress outputting the file-line directive, as it would probably break the continued lines.
2306 67b <write-chunklets[6]() ⇑66a, lang=> +≡ ▵67a
2307 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2309 27 | lineno_suppressed = substr(lastline, length(lastline)) == "\\";
2312 |________________________________________________________________________
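A small sketch (mine, not part of fangle, with a made-up input line) of the idiom used above: substr(s, length(s)) picks out the final character, which we compare against a single backslash.

  BEGIN {
      lastline = "#define MAX(a,b) \\";                            # ends in a backslash
      lineno_suppressed = substr(lastline, length(lastline)) == "\\";
      print lineno_suppressed;                                     # prints 1
  }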
2315 Of course there is no point in actually outputting the source filename and line number (file-line) if they don't say anything new! We only need to emit them if they aren't what is expected, or if we were not able to emit one when they had changed.
2317 67c <write-file-line[1](), lang=> ≡
2318 ________________________________________________________________________
2319 1 | if (newline && lineno_needed && ! lineno_suppressed) {
2320 2 | filename = a_filename;
2321 3 | lineno = a_lineno;
2322 4 | print "#line " lineno " \"" filename "\"\n"
2323 5 | lineno_needed = 0;
2325 |________________________________________________________________________
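For reference, the emitted directive is an ordinary C pre-processor #line directive. A quick sketch (mine, with made-up values and ORS="" as in fangle's output state):

  BEGIN {
      ORS = "";
      lineno = 42; filename = "fangle.tm";              # hypothetical values
      print "#line " lineno " \"" filename "\"\n";
      # emits:  #line 42 "fangle.tm"
  }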
2328 We check if a new file-line is needed by checking if the source line matches what we (or a compiler) would expect.
2330 67d <check-source-jump[1](), lang=> ≡
2331 ________________________________________________________________________
2332 1 | if (linenos && (chunk_name SUBSEP "part" SUBSEP part SUBSEP "FILENAME" in chunks)) {
2333 2 | a_filename = chunks[chunk_name, "part", part, "FILENAME"];
2334 3 | a_lineno = chunks[chunk_name, "part", part, "LINENO"];
2335 4 | if (a_filename != filename || a_lineno != lineno) {
2336 5 | lineno_needed++;
2339 |________________________________________________________________________
2342 Chapter 13 Storing Chunks
2343 Awk has pretty limited data structures, so we will use two main hashes. Uninterrupted sequences of a chunk will be stored in chunklets and the chunklets used in a chunk will be stored in chunks.
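To make the key structure concrete, here is a sketch of my own (not fangle code, with a made-up chunk called hello) that stores a one-part chunk by hand using the same key scheme as prime_chunk and chunk_line below, and then walks it the way write_chunk does. Note that the value stored for a part is itself a SUBSEP-joined key naming the chunklet.

  BEGIN {
      name = "hello";
      chunk_names[name];
      chunks[name, "part"] = 1;                      # one part so far
      chunks[name, "chunklet"] = 1;                  # one chunklet so far
      chunks[name, "part", 1] = name SUBSEP "chunklet" SUBSEP "1";
      chunks[name, "chunklet", 1, "line"] = 2;       # the chunklet holds two fragments
      chunks[name, "chunklet", 1, 1] = "echo hello";
      chunks[name, "chunklet", 1, 2] = "\n";         # newlines are kept as separate fragments
      # walk part 1 via its chunklet key, as write_chunk does
      chunklet = chunks[name, "part", 1];
      for (i = 1; i <= chunks[chunklet, "line"]; i++) printf "%s", chunks[chunklet, i];
  }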
2345 69a <constants[1](), lang=> ≡
2346 ________________________________________________________________________
2347 1 | part_type_chunk=1;
2349 |________________________________________________________________________
2352 The params mentioned are not chunk parameters for parameterized chunks, as mentioned in 8.2, but the lstlistings style parameters used in the \Chunk command1. The params parameter is used to hold the parameters for parameterized chunks ^1.
2354 69b <chunk-storage-functions[1](), lang=> ≡ 69c▿
2355 ________________________________________________________________________
2356 1 | function new_chunk(chunk_name, params,
2360 5 | # HACK WHILE WE CHANGE TO ( ) for PARAM CHUNKS
2361 6 | gsub("\\(\\)$", "", chunk_name);
2362 7 | if (! (chunk_name in chunk_names)) {
2363 8 | if (debug) print "New chunk " chunk_name;
2364 9 | chunk_names[chunk_name];
2365 10 | for (p in params) {
2366 11 | chunks[chunk_name, p] = params[p];
2367 12 | if (debug) print "chunks[" chunk_name "," p "] = " params[p];
2369 14 | if ("append" in params) {
2370 15 | append=params["append"];
2371 16 | if (! (append in chunk_names)) {
2372 17 | warning("Chunk " chunk_name " is appended to chunk " append " which is not defined yet");
2373 18 | new_chunk(append);
2375 20 | chunk_include(append, chunk_name);
2376 21 | chunk_line(append, ORS);
2379 24 | active_chunk = chunk_name;
2380 25 | prime_chunk(chunk_name);
2382 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2384 69c <chunk-storage-functions[2]() ⇑69b, lang=> +≡ ▵69b 70a⊳
2385 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2387 28 | function prime_chunk(chunk_name)
2389 30 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = \
2390 31 | chunk_name SUBSEP "chunklet" SUBSEP "" ++chunks[chunk_name, "chunklet"];
2391 32 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "FILENAME"] = FILENAME;
2392 33 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "LINENO"] = FNR + 1;
2395 36 | function chunk_line(chunk_name, line){
2396 37 | chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"],
2397 38 | ++chunks[chunk_name, "chunklet", chunks[chunk_name, "chunklet"], "line"] ] = line;
2400 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2401 Chunk include represents a chunkref statement, and stores the requirement to include another chunk. The parameter indent represents the quantity of literal text characters that preceded this chunkref statement, and therefore by how much the additional lines of the included chunk should be indented.
2403 70a <chunk-storage-functions[3]() ⇑69b, lang=> +≡ ⊲69c 70b▿
2404 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2405 41 | function chunk_include(chunk_name, chunk_ref, indent, tail)
2407 43 | chunks[chunk_name, "part", ++chunks[chunk_name, "part"] ] = chunk_ref;
2408 44 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "type" ] = part_type_chunk;
2409 45 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "indent" ] = indent_string(indent);
2410 46 | chunks[chunk_name, "part", chunks[chunk_name, "part"], "tail" ] = tail;
2411 47 | prime_chunk(chunk_name);
2414 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2415 The indent is calculated by indent_string, which may in future convert some spaces into tab characters. This function works by generating a printf padded format string, like %22s for an indent of 22, and then printing an empty string using that format.
2417 70b <chunk-storage-functions[4]() ⇑69b, lang=> +≡ ▵70a
2418 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2419 50 | function indent_string(indent) {
2420 51 | return sprintf("%" indent "s", "");
2422 |________________________________________________________________________
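A quick illustration of the trick (a sketch, not part of fangle): a width of 4 padded onto an empty string yields four spaces.

  BEGIN {
      indent = 4;
      print "[" sprintf("%" indent "s", "") "]";    # prints [    ]
  }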
2426 I use Arnold Robbins' public domain getopt (1993 revision). This is probably the same one that is covered in chapter 12 of "Edition 3 of GAWK: Effective AWK Programming: A User's Guide for GNU Awk" but as that is licensed under the GNU Free Documentation License, Version 1.3, which conflicts with the GPL3, I can't use it from there (or its accompanying explanations), so I do my best to explain how it works here.
2427 The getopt.awk header is:
2429 71a <getopt.awk-header[1](), lang=> ≡
2430 ________________________________________________________________________
2431 1 | # getopt.awk --- do C library getopt(3) function in awk
2433 3 | # Arnold Robbins, arnold@skeeve.com, Public Domain
2435 5 | # Initial version: March, 1991
2436 6 | # Revised: May, 1993
2438 |________________________________________________________________________
2441 The provided explanation is:
2443 71b <getopt.awk-notes[1](), lang=> ≡
2444 ________________________________________________________________________
2445 1 | # External variables:
2446 2 | # Optind -- index in ARGV of first nonoption argument
2447 3 | # Optarg -- string value of argument to current option
2448 4 | # Opterr -- if nonzero, print our own diagnostic
2449 5 | # Optopt -- current option letter
2452 8 | # -1 at end of options
2453 9 | # ? for unrecognized option
2454 10 | # <c> a character representing the current option
2456 12 | # Private Data:
2457 13 | # _opti -- index in multi-flag option, e.g., -abc
2459 |________________________________________________________________________
2462 The function follows. The final two parameters, thisopt and i, are local variables and not parameters --- as indicated by the multiple spaces preceding them. Awk doesn't care; the multiple spaces are a convention to help us humans, as the short sketch below illustrates.
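This sketch (mine, not part of getopt.awk; add_twice is a made-up function) shows that an omitted trailing parameter behaves as a fresh local variable and leaves any global of the same name untouched:

  function add_twice(x,    tmp) {    # tmp is local: callers never pass it
      tmp = x + x;
      return tmp;
  }
  BEGIN {
      tmp = 99;
      print add_twice(5);    # 10
      print tmp;             # still 99: the global tmp was not disturbed
  }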
2464 71c <getopt.awk-getopt()[1](), lang=> ≡ 72a⊳
2465 ________________________________________________________________________
2466 1 | function getopt(argc, argv, options, thisopt, i)
2468 3 | if (length(options) == 0) # no options given
2470 5 | if (argv[Optind] == "--") { # all done
2474 9 | } else if (argv[Optind] !~ /^-[^: \t\n\f\r\v\b]/) {
2478 13 | if (_opti == 0)
2480 15 | thisopt = substr(argv[Optind], _opti, 1)
2481 16 | Optopt = thisopt
2482 17 | i = index(options, thisopt)
2485 20 | printf("%c -- invalid option\n",
2486 21 | thisopt) > "/dev/stderr"
2487 22 | if (_opti >= length(argv[Optind])) {
2494 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2495 At this point, the option has been found and we need to know if it takes any arguments.
2497 72a <getopt.awk-getopt()[2]() ⇑71c, lang=> +≡ ⊲71c
2498 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2499 29 | if (substr(options, i + 1, 1) == ":") {
2500 30 | # get option argument
2501 31 | if (length(substr(argv[Optind], _opti + 1)) > 0)
2502 32 | Optarg = substr(argv[Optind], _opti + 1)
2504 34 | Optarg = argv[++Optind]
2508 38 | if (_opti == 0 || _opti >= length(argv[Optind])) {
2515 |________________________________________________________________________
2518 A test program is built in, too
2520 72b <getopt.awk-begin[1](), lang=> ≡
2521 ________________________________________________________________________
2523 2 | Opterr = 1 # default is to diagnose
2524 3 | Optind = 1 # skip ARGV[0]
2526 5 | if (_getopt_test) {
2527 6 | while ((_go_c = getopt(ARGC, ARGV, "ab:cd")) != -1)
2528 7 | printf("c = <%c>, optarg = <%s>\n",
2530 9 | printf("non-option arguments:\n")
2531 10 | for (; Optind < ARGC; Optind++)
2532 11 | printf("\tARGV[%d] = <%s>\n",
2533 12 | Optind, ARGV[Optind])
2536 |________________________________________________________________________
2539 The entire getopt.awk is made out of these chunks in order
2541 72c <getopt.awk[1](), lang=> ≡
2542 ________________________________________________________________________
2543 1 | =<\chunkref{getopt.awk-header}>
2545 3 | =<\chunkref{getopt.awk-notes}>
2546 4 | =<\chunkref{getopt.awk-getopt()}>
2547 5 | =<\chunkref{getopt.awk-begin}>
2548 |________________________________________________________________________
2551 Although we only want the header and function:
2553 73a <getopt[1](), lang=> ≡
2554 ________________________________________________________________________
2555 1 | # try: locate getopt.awk for the full original file
2556 2 | # as part of your standard awk installation
2557 3 | =<\chunkref{getopt.awk-header}>
2559 5 | =<\chunkref{getopt.awk-getopt()}>
2560 |________________________________________________________________________
2563 Chapter 15 Fangle LaTeX source code
2565 Here we define a L Y X .module file that makes it convenient to use L Y X for writing such literate programs.
2566 This file ./fangle.module can be installed in your personal .lyx/layouts folder. You will need to run Tools > Reconfigure so that L Y X notices it. It adds a new format Chunk, which should precede every listing and contain the chunk name.
2568 75a <./fangle.module[1](), lang=lyx-module> ≡
2569 ________________________________________________________________________
2570 1 | #\DeclareLyXModule{Fangle Literate Listings}
2571 2 | #DescriptionBegin
2572 3 | # Fangle literate listings allow one to write
2573 4 | # literate programs after the fashion of noweb, but without having
2574 5 | # to use noweave to generate the documentation. Instead the listings
2575 6 | # package is extended in conjunction with the noweb package to implement
2576 7 | # the code formatting directly as latex.
2577 8 | # The fangle awk script
2580 11 | =<\chunkref{gpl3-copyright.hashed}>
2585 16 | =<\chunkref{./fangle.sty}>
2588 19 | =<\chunkref{chunkstyle}>
2590 21 | =<\chunkref{chunkref}>
2591 |________________________________________________________________________
2594 Because L Y X modules are not yet a language supported by fangle or lstlistings, we resort to this fake awk chunk below in order to have each line of the GPL3 license commence with a #
2596 75b <gpl3-copyright.hashed[1](), lang=awk> ≡
2597 ________________________________________________________________________
2598 1 | #=<\chunkref{gpl3-copyright}>
2600 |________________________________________________________________________
2603 15.1.1 The Chunk style
2604 The purpose of the chunk style is to make it easier for L Y X users to provide the name to lstlistings. Normally this requires right-clicking on the listing, choosing settings, advanced, and then typing name=chunk-name. This has the further disadvantage that the name (and other options) are not generally visible during document editing.
2605 The chunk style is defined as a LaTeX command, so that all text on the same line is passed to the LaTeX command Chunk. This makes it easy to parse using fangle, and easy to pass these options on to the listings package. The first word in a chunk section should be the chunk name, and will have name= prepended to it. Any other words are accepted arguments to lstset.
2606 We set PassThru to 1 because the user is actually entering raw latex.
2608 76a <chunkstyle[1](), lang=> ≡ 76b▿
2609 ________________________________________________________________________
2611 2 | LatexType Command
2613 4 | Margin First_Dynamic
2614 5 | LeftMargin Chunk:xxx
2616 7 | LabelType Static
2617 8 | LabelString "Chunk:"
2621 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2622 To make the label very visible we choose a larger font coloured red.
2624 76b <chunkstyle[2]() ⇑76a, lang=> +≡ ▵76a
2625 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2634 |________________________________________________________________________
2637 15.1.2 The chunkref style
2638 We also define the Chunkref style which can be used to express cross references to chunks.
2640 76c <chunkref[1](), lang=> ≡
2641 ________________________________________________________________________
2642 1 | InsetLayout Chunkref
2643 2 | LyxType charstyle
2644 3 | LatexType Command
2645 4 | LatexName chunkref
2652 |________________________________________________________________________
2656 We require the listings, noweb and xargs packages. As noweb defines its own \code environment, we re-define the one that the L Y X logical markup module expects here.
2658 76d <./fangle.sty[1](), lang=tex> ≡ 77a⊳
2659 ________________________________________________________________________
2660 1 | \usepackage{listings}%
2661 2 | \usepackage{noweb}%
2662 3 | \usepackage{xargs}%
2663 4 | \renewcommand{\code}[1]{\texttt{#1}}%
2664 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2665 We also define a CChunk macro, for use as: \begin{CChunk} which will need renaming to \begin{Chunk} when I can do this without clashing with \Chunk.
2667 77a <./fangle.sty[2]() ⇑76d, lang=> +≡ ⊲76d 77b▿
2668 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2669 5 | \lstnewenvironment{Chunk}{\relax}{\relax}%
2670 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2671 We also define a suitable \lstset of parameters that suit the literate programming style after the fashion of noweave.
2673 77b <./fangle.sty[3]() ⇑76d, lang=> +≡ ▵77a 77c▿
2674 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2675 6 | \lstset{numbers=left, stepnumber=5, numbersep=5pt,
2676 7 | breaklines=false,basicstyle=\ttfamily,
2677 8 | numberstyle=\tiny, language=C}%
2678 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2679 We also define a notangle-like mechanism for escaping to LaTeX from the listing, and by which we can refer to other listings. We declare the =<...> sequence to contain LaTeX code, and include another chunk like this: =<\chunkref{chunkname}>. However, because =<...> is already defined to contain LaTeX code for this document --- this is a fangle document after all --- the code fragment below effectively contains the LaTeX code: }{. To avoid problems with document generation, I had to declare an lstlistings property, escapeinside={}, for this listing only; in L Y X this was done by right-clicking the listings inset and choosing settings->advanced. Therefore =< isn't interpreted literally here, in a listing where the escape sequence is already defined as shown... we need to somehow escape this representation...
2681 77c <./fangle.sty[4]() ⇑76d, lang=> +≡ ▵77b 77d▿
2682 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2683 9 | \lstset{escapeinside={=<}{>}}%
2684 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2685 Although our macros will contain the @ symbol, they will be included in a \makeatletter section by L Y X; however we keep the commented out \makeatletter as a reminder. The listings package likes to centre the titles, but noweb titles are specially formatted and must be left aligned. The simplest way to do this turned out to be by removing the definition of \lst@maketitle. This may interact badly if other listings want a regular title or caption. We remember the old maketitle in case we need it.
2687 77d <./fangle.sty[5]() ⇑76d, lang=> +≡ ▵77c 77e▿
2688 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2690 11 | %somehow re-defining maketitle gives us a left-aligned title
2691 12 | %which is exactly what our specially formatted title needs!
2692 13 | \global\let\fangle@lst@maketitle\lst@maketitle%
2693 14 | \global\def\lst@maketitle{}%
2694 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2695 15.2.1 The chunk command
2696 Our chunk command accepts one argument, and calls \lstset. Although \lstset will note the name, this is erased when the next \lstlisting starts, so we make a note of this in \lst@chunkname and restore it in the lstlistings Init hook.
2698 77e <./fangle.sty[6]() ⇑76d, lang=> +≡ ▵77d 78a⊳
2699 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2701 16 | \lstset{title={\fanglecaption},name=#1}%
2702 17 | \global\edef\lst@chunkname{\lst@intname}%
2704 19 | \def\lst@chunkname{\empty}%
2705 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2706 15.2.1.1 Chunk parameters
2707 Fangle permits parameterized chunks, and requires the parameters to be specified as listings options. The fangle script uses this, and although we don't do anything with these in the LaTeX code right now, we need to stop the listings package complaining.
2709 78a <./fangle.sty[7]() ⇑76d, lang=> +≡ ⊲77e 78b▿
2710 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2711 20 | \lst@Key{params}\relax{\def\fangle@chunk@params{#1}}%
2712 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2713 As it is common to define a chunk which then needs appending to another chunk, and annoying to have to declare a single line chunk to manage the include, we support an append= option.
2715 78b <./fangle.sty[8]() ⇑76d, lang=> +≡ ▵78a 78c▿
2716 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2717 21 | \lst@Key{append}\relax{\def\fangle@chunk@append{#1}}%
2718 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2719 15.2.2 The noweb styled caption
2720 We define a public macro \fanglecaption which can be set as a regular title. By means of \protect, it expands to \fangle@caption at the appropriate time, when the caption is emitted.
2722 78c <./fangle.sty[9]() ⇑76d, lang=> +≡ ▵78b 78d▿
2723 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2724 \def\fanglecaption{\protect\fangle@caption}%
2725 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2726 22c ⟨some-chunk 19b⟩≡+ ⊲22b 24d⊳
2728 In this example, the current chunk is 22c, and therefore the third chunk on page 22.
2729 Its name is some-chunk.
2730 The first chunk with this name (19b) occurs as the second chunk on page 19.
2731 The previous chunk (22b) with the same name is the second chunk on page 22.
2732 The next chunk (24d) is the fourth chunk on page 24.
2734 Figure 1. Noweb Heading
2736 The general noweb output format compactly identifies the current chunk, and references to the first chunk, and the previous and next chunks that have the same name.
2737 This means that we need to keep a counter for each chunk-name, that we use to count chunks of the same name.
2738 15.2.3 The chunk counter
2739 It would be natural to have a counter for each chunk name, but TeX would soon run out of counters1. ...soon did run out of counters and so I had to re-write the LaTeX macros to share a counter as described here. ^1, so we have one counter which we save at the end of a chunk and restore at the beginning of a chunk.
2741 78d <./fangle.sty[10]() ⇑76d, lang=> +≡ ▵78c 79c⊳
2742 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2743 22 | \newcounter{fangle@chunkcounter}%
2744 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2745 We construct the name of the variable that will store this counter by prefixing the text lst-chunk- onto the chunk's own name, and keep that constructed name in \chunkcount.
2746 We save the counter like this:
2748 79a <save-counter[1](), lang=> ≡
2749 ________________________________________________________________________
2750 \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
2751 |________________________________________________________________________
2754 and restore the counter like this:
2756 79b <restore-counter[1](), lang=> ≡
2757 ________________________________________________________________________
2758 \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
2759 |________________________________________________________________________
2762 If there does not already exist a variable whose name is stored in \chunkcount, then we know we are the first chunk with this name, and then define a counter.
2763 Although chunks of the same name share a common counter, they must still be distinguished. We use the internal name of the listing, suffixed by the counter value. So the first chunk might be something-1 and the second chunk be something-2, etc.
2764 We also calculate the name of the previous chunk if we can (before we increment the chunk counter). If this is the first chunk of that name, then \prevchunkname is set to \relax which the noweb package will interpret as not existing.
2766 79c <./fangle.sty[11]() ⇑76d, lang=> +≡ ⊲78d 79d▿
2767 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2768 23 | \def\fangle@caption{%
2769 24 | \edef\chunkcount{lst-chunk-\lst@intname}%
2770 25 | \@ifundefined{\chunkcount}{%
2771 26 | \expandafter\gdef\csname \chunkcount\endcsname{0}%
2772 27 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
2773 28 | \let\prevchunkname\relax%
2775 30 | \setcounter{fangle@chunkcounter}{\csname \chunkcount\endcsname}%
2776 31 | \edef\prevchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
2778 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2779 After incrementing the chunk counter, we then define the name of this chunk, as well as the name of the first chunk.
2781 79d <./fangle.sty[12]() ⇑76d, lang=> +≡ ▵79c 79e▿
2782 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2783 33 | \addtocounter{fangle@chunkcounter}{1}%
2784 34 | \global\expandafter\edef\csname \chunkcount\endcsname{\arabic{fangle@chunkcounter}}%
2785 35 | \edef\chunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
2786 36 | \edef\firstchunkname{\lst@intname-1}%
2787 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2788 We now need to calculate the name of the next chunk. We do this by temporarily skipping the counter on by one; however there may not actually be another chunk with this name! We detect this by also defining a label for each chunk based on the chunkname. If there is a next chunkname then it will define a label with that name. As labels are persistent, we can at least tell the second time LaTeX is run. If we don't find such a defined label then we define \nextchunkname to \relax.
2790 79e <./fangle.sty[13]() ⇑76d, lang=> +≡ ▵79d 80a⊳
2791 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2792 37 | \addtocounter{fangle@chunkcounter}{1}%
2793 38 | \edef\nextchunkname{\lst@intname-\arabic{fangle@chunkcounter}}%
2794 39 | \@ifundefined{r@label-\nextchunkname}{\let\nextchunkname\relax}{}%
2795 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2796 The noweb package requires that we define a \sublabel for every chunk, with a unique name, which is then used to print out its navigation hints.
2797 We also define a regular label for this chunk, as was mentioned above when we calculated \nextchunkname. This requires LaTeX to be run at least twice after new chunk sections are added --- but noweb required that anyway.
2799 80a <./fangle.sty[14]() ⇑76d, lang=> +≡ ⊲79e 80b▿
2800 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2801 40 | \sublabel{\chunkname}%
2802 41 | % define this label for every chunk instance, so we
2803 42 | % can tell when we are the last chunk of this name
2804 43 | \label{label-\chunkname}%
2805 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2806 We also try and add the chunk to the list of listings, but I'm afraid we don't do very well. We want each chunk name listed once, with all of its references.
2808 80b <./fangle.sty[15]() ⇑76d, lang=> +≡ ▵80a 80c▿
2809 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2810 44 | \addcontentsline{lol}{lstlisting}{\lst@name~[\protect\subpageref{\chunkname}]}%
2811 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2812 We then call the noweb output macros in the same way that noweave generates them, except that we don't need to call \nwstartdeflinemarkup or \nwenddeflinemarkup — and if we do, it messes up the output somewhat.
2814 80c <./fangle.sty[16]() ⇑76d, lang=> +≡ ▵80b 80d▿
2815 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2819 48 | \subpageref{\chunkname}%
2826 55 | \nwtagstyle{}\/%
2827 56 | \@ifundefined{fangle@chunk@params}{}{%
2828 57 | (\fangle@chunk@params)%
2830 59 | [\csname \chunkcount\endcsname]~%
2831 60 | \subpageref{\firstchunkname}%
2833 62 | \@ifundefined{fangle@chunk@append}{}{%
2834 63 | \ifx{}\fangle@chunk@append{x}\else%
2835 64 | ,~add~to~\fangle@chunk@append%
2838 67 | \global\def\fangle@chunk@append{}%
2839 68 | \lstset{append=x}%
2842 71 | \ifx\relax\prevchunkname\endmoddef\else\plusendmoddef\fi%
2843 72 | % \nwstartdeflinemarkup%
2844 73 | \nwprevnextdefs{\prevchunkname}{\nextchunkname}%
2845 74 | % \nwenddeflinemarkup%
2847 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2848 Originally this was developed as a listings aspect, in the Init hook, but it was found easier to affect the title without using a hook — \lst@AddToHookExe{PreSet} is still required to set the listings name to the name passed to the \Chunk command, though.
2850 80d <./fangle.sty[17]() ⇑76d, lang=> +≡ ▵80c 81a⊳
2851 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2852 76 | %\lst@BeginAspect{fangle}
2853 77 | %\lst@Key{fangle}{true}[t]{\lstKV@SetIf{#1}{true}}
2854 78 | \lst@AddToHookExe{PreSet}{\global\let\lst@intname\lst@chunkname}
2855 79 | \lst@AddToHook{Init}{}%\fangle@caption}
2856 80 | %\lst@EndAspect
2857 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2858 15.2.4 Cross references
2859 We define the \chunkref command which makes it easy to generate visual references to different code chunks, e.g.
2862 \chunkref[3]{preamble}
2863 \chunkref{preamble}[arg1, arg2]
2865 Chunkref can also be used within a code chunk to include another code chunk. The third optional parameter to chunkref is a comma-separated list of arguments, which will replace defined parameters in the chunkref.
2866 Note 1. Darn it, if I have: =<\chunkref{new-mode-tracker}[{chunks[chunk_name, "language"]},{mode}]> the inner braces (inside [ ]) cause _ to signify subscript even though we have lst@ReplaceIn
2868 81a <./fangle.sty[18]() ⇑76d, lang=> +≡ ⊲80d 82a⊳
2869 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2870 81 | \def\chunkref@args#1,{%
2872 83 | \lst@ReplaceIn\arg\lst@filenamerpl%
2874 85 | \@ifnextchar){\relax}{, \chunkref@args}%
2876 87 | \newcommand\chunkref[2][0]{%
2877 88 | \@ifnextchar({\chunkref@i{#1}{#2}}{\chunkref@i{#1}{#2}()}%
2879 90 | \def\chunkref@i#1#2(#3){%
2881 92 | \def\chunk{#2}%
2882 93 | \def\chunkno{#1}%
2883 94 | \def\chunkargs{#3}%
2884 95 | \ifx\chunkno\zero%
2885 96 | \def\chunkname{#2-1}%
2887 98 | \def\chunkname{#2-\chunkno}%
2889 100 | \let\lst@arg\chunk%
2890 101 | \lst@ReplaceIn\chunk\lst@filenamerpl%
2891 102 | \LA{%\moddef{%
2894 105 | \nwtagstyle{}\/%
2895 106 | \ifx\chunkno\zero%
2899 110 | \ifx\chunkargs\empty%
2901 112 | (\chunkref@args #3,)%
2903 114 | ~\subpageref{\chunkname}%
2906 117 | \RA%\endmoddef%
2908 |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2911 82a <./fangle.sty[19]() ⇑76d, lang=> +≡ ⊲81a
2912 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2915 |________________________________________________________________________
2918 Chapter 16 Extracting fangle
2919 16.1 Extracting from Lyx
2920 To extract from L Y X, you will need to configure L Y X as explained in section ?.
2921 And this lyx-build scrap will extract fangle for me.
2923 83a <lyx-build[2]() ⇑20a, lang=sh> +≡ ⊲20a
2924 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2928 14 | =<\chunkref{lyx-build-helper}>
2929 15 | cd $PROJECT_DIR || exit 1
2931 17 | /usr/local/bin/fangle -R./fangle $TEX_SRC > ./fangle
2932 18 | /usr/local/bin/fangle -R./fangle.module $TEX_SRC > ./fangle.module
2934 20 | =<\chunkref{test:helpers}>
2935 21 | export FANGLE=./fangle
2936 22 | export TMP=${TMP:-/tmp}
2937 23 | =<\chunkref{test:run-tests}>
2938 24 | # Now check that we can extract a fangle that also passes the tests!
2939 25 | $FANGLE -R./fangle $TEX_SRC > ./new-fangle
2940 26 | export FANGLE=./new-fangle
2941 27 | =<\chunkref{test:run-tests}>
2942 |________________________________________________________________________
2946 83b <test:run-tests[1](), lang=sh> ≡
2947 ________________________________________________________________________
2949 2 | $FANGLE -Rpca-test.awk $TEX_SRC | awk -f - || exit 1
2950 3 | =<\chunkref{test:cromulence}>
2951 4 | =<\chunkref{test:escapes}>
2952 5 | =<\chunkref{test:chunk-params}>
2953 |________________________________________________________________________
2956 With a lyx-build-helper
2958 83c <lyx-build-helper[2]() ⇑19b, lang=sh> +≡ ⊲19b
2959 ./\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
2960 5 | PROJECT_DIR="$LYX_r"
2961 6 | LYX_SRC="$PROJECT_DIR/${LYX_i%.tex}.lyx"
2962 7 | TEX_DIR="$LYX_p"
2963 8 | TEX_SRC="$TEX_DIR/$LYX_i"
2964 |________________________________________________________________________
2967 16.2 Extracting documentation
2969 83d <./gen-www[1](), lang=> ≡
2970 ________________________________________________________________________
2971 1 | #python -m elyxer --css lyx.css $LYX_SRC | \
2972 2 | # iconv -c -f utf-8 -t ISO-8859-1//TRANSLIT | \
2973 3 | # sed 's/UTF-8"\(.\)>/ISO-8859-1"\1>/' > www/docs/fangle.html
2975 5 | python -m elyxer --css lyx.css --iso885915 --html --destdirectory www/docs/fangle.e \
2976 6 | fangle.lyx > www/docs/fangle.e/fangle.html
2978 8 | ( mkdir -p www/docs/fangle && cd www/docs/fangle && \
2979 9 | lyx -e latex ../../../fangle.lyx && \
2980 10 | htlatex ../../../fangle.tex "xhtml,fn-in" && \
2981 11 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->//g' fangle.html
2984 14 | ( mkdir -p www/docs/literate && cd www/docs/literate && \
2985 15 | lyx -e latex ../../../literate.lyx && \
2986 16 | htlatex ../../../literate.tex "xhtml,fn-in" && \
2987 17 | sed -i -e 's/<!--l\. [0-9][0-9]* *-->$//g' literate.html
2989 |________________________________________________________________________
2992 16.3 Extracting from the command line
2993 First you will need the tex output, then you can extract:
2995 84a <lyx-build-manual[1](), lang=sh> ≡
2996 ________________________________________________________________________
2997 1 | lyx -e latex fangle.lyx
2998 2 | fangle -R./fangle fangle.tex > ./fangle
2999 3 | fangle -R./fangle.module fangle.tex > ./fangle.module
3000 |________________________________________________________________________
3005 84b <test:helpers[1](), lang=> ≡
3006 ________________________________________________________________________
3009 3 | then echo "Passed"
3010 4 | else echo "Failed"
3017 11 | then echo "Passed"
3018 12 | else echo "Failed"
3022 |________________________________________________________________________
3026 Chapter 17 Chunk Parameters
3028 87a <test:chunk-params:sub[1](THING, colour), lang=> ≡
3029 ________________________________________________________________________
3030 1 | I see a ${THING},
3031 2 | a ${THING} of colour ${colour},
3032 3 | and looking closer =<\chunkref{test:chunk-params:sub:sub}(${colour})>
3033 |________________________________________________________________________
3037 87b <test:chunk-params:sub:sub[1](colour), lang=> ≡
3038 ________________________________________________________________________
3039 1 | a funny shade of ${colour}
3040 |________________________________________________________________________
3044 87c <test:chunk-params:text[1](), lang=> ≡
3045 ________________________________________________________________________
3046 1 | What do you see? "=<\chunkref{test:chunk-params:sub}(joe, red)>"
3048 |________________________________________________________________________
3051 Should generate output:
3053 87d <test:chunk-params:result[1](), lang=> ≡
3054 ________________________________________________________________________
3055 1 | What do you see? "I see a joe,
3056 2 | a joe of colour red,
3057 3 | and looking closer a funny shade of red"
3059 |________________________________________________________________________
3062 And this chunk will perform the test:
3064 87e <test:chunk-params[1](), lang=> ≡
3065 ________________________________________________________________________
3066 1 | $FANGLE -Rtest:chunk-params:result $TEX_SRC > $TMP/answer || exit 1
3067 2 | $FANGLE -Rtest:chunk-params:text $TEX_SRC > $TMP/result || exit 1
3068 3 | passtest diff $TMP/answer $TMP/result || (echo test:chunk-params:text failed ; exit 1)
3069 |________________________________________________________________________
3072 Chapter 18 Compile-log-lyx
3074 89a <Chunk:./compile-log-lyx[1](), lang=sh> ≡
3075 ________________________________________________________________________
3077 2 | # can't use gtkdialog -i, cos it uses the "source" command which ubuntu sh doesn't have
3080 5 | errors="/tmp/compile.log.$$"
3081 6 | # if grep '^[^ ]*:\( In \|[0-9][0-9]*: [^ ]*:\)' > $errors
3082 7 | if grep '^[^ ]*(\([0-9][0-9]*\)) *: *\(error\|warning\)' > $errors
3084 9 | sed -i -e 's/^[^ ]*[/\\]\([^/\\]*\)(\([ 0-9][ 0-9]*\)) *: */\1:\2|\2|/' $errors
3085 10 | COMPILE_DIALOG='
3088 13 | <label>Compiler errors:</label>
3090 15 | <tree exported_column="0">
3091 16 | <variable>LINE</variable>
3092 17 | <height>400</height><width>800</width>
3093 18 | <label>File | Line | Message</label>
3094 19 | <action>'". $SELF ; "'lyxgoto $LINE</action>
3095 20 | <input>'"cat $errors"'</input>
3098 23 | <button><label>Build</label>
3099 24 | <action>lyxclient -c "LYXCMD:build-program" &</action>
3101 26 | <button ok></button>
3105 30 | export COMPILE_DIALOG
3106 31 | ( gtkdialog --program=COMPILE_DIALOG ; rm $errors ) &
3113 38 | file="${LINE%:*}"
3114 39 | line="${LINE##*:}"
3115 40 | extraline=`cat $file | head -n $line | tac | sed '/^\\\\begin{lstlisting}/q' | wc -l`
3116 41 | extraline=`expr $extraline - 1`
3117 42 | lyxclient -c "LYXCMD:command-sequence server-goto-file-row $file $line ; char-forward ; repeat $extraline paragraph-down ; paragraph-up-select"
3121 46 | if test -z "$COMPILE_DIALOG"
3124 |________________________________________________________________________